for complex dynamical systems consisting of many interacting subsystems it is a general challenge to reduce the high dimensionality to a few dominating variables that characterize the system .cluster analysis is a method to group elements according to their similarity .however , there is an ambiguity as different algorithms may lead to different clusters .the dynamical behavior of complex systems may reduce their complexity by self - organization . herethe high - dimensional dynamics generates a few order parameters evolving slowly on a strongly fluctuating background . with the help of stochastic methods it is possible to show such simplified dynamics and to estimate a langevin equation directly from the data , _i.e. _ the data are analyzed as a stochastic process with drift and diffusion term .we show that cluster analysis and stochastic methods can be combined in a fruitful way .cluster analysis does not aim at grasping dynamical effects . by combining it with stochastic methodswe show that an improved dynamical cluster classification can be obtained .furthermore we extract dynamical cluster features of a complex system as emerging and disappearing clusters .related ideas were put forward by hutt _ for detecting fixed points in spatiotemporal signals . here , we focus on financial data , for which quasi - stationary market states were identified in the correlation structure using cluster analysis .we briefly sketch these findings before developing our combined analysis .we study the daily closing prices of the s&p 500 stocks for the period 1992 2012 , which aggregate to 5189 trading days .only the 306 stocks that were continuously traded during the whole period are considered .we calculate the relative price changes , _i.e. _ the returns for each stock for a time horizon of one trading day ( td ) . to avoid an estimation bias due to time - varying trends and fluctuations ,we perform a local normalization of the returns before estimating their correlations .we denote the locally normalized returns by .we then calculate the pearson correlation coefficients here denotes the average over 42 trading days .we obtain a correlation matrix for each time by moving the calculation window in steps of one trading day through the time series .all of the considered companies are classified by ten industry sectors according to the global industry classification standard ( gics ) . to reduce the estimation noise, we average the correlation coefficients within each sector and between different sectors , leading to matrices . while this sector - averaged matrix is symmetric , its diagonal is not trivial .thus , it contains independent entries .the crucial idea is to identify each averaged correlation matrix with an element in the real -dimensional euclidian space , so that a similarity , _ i.e. _ a distance , between any two correlation matrices can be calculated . throughout this paper all distancesare measured in the euclidean norm which we denote by . as the next step we use the bisecting _k_-means clustering algorithm . 
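As a concrete illustration of the pipeline described above, the following minimal sketch computes locally normalized returns, rolling sector-averaged correlation matrices, and a bisecting k-means clustering. It assumes `returns` is a (T, N) array of daily returns and `sector` a length-N array of GICS sector labels; the 42-trading-day correlation window and daily steps follow the text, while the local-normalization window and the stopping threshold (chosen in the text so that 8 clusters result, see the next paragraph) are illustrative values of ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_normalize(r, w=13):
    """Subtract a trailing mean and divide by a trailing std over the last w days."""
    out = np.full_like(r, np.nan, dtype=float)
    for t in range(w, r.shape[0]):
        win = r[t - w:t]
        out[t] = (r[t] - win.mean(axis=0)) / win.std(axis=0)
    return out

def sector_averaged_corr(rn, sector, t, w=42):
    """Pearson correlations over days [t-w+1, t], averaged within and between sectors."""
    # t must be late enough that the normalization window above is already filled
    c = np.corrcoef(rn[t - w + 1:t + 1].T)             # N x N correlation matrix
    labels = np.unique(sector)
    avg = np.zeros((len(labels), len(labels)))
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            avg[i, j] = c[np.ix_(sector == a, sector == b)].mean()
    return avg                                          # sector-averaged matrix, diagonal non-trivial

def bisecting_kmeans(X, max_mean_dist):
    """Repeatedly split the 'largest' cluster in two until every cluster is small enough."""
    clusters = [np.arange(len(X))]
    while True:
        sizes = [np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1).mean()
                 for idx in clusters]
        worst = int(np.argmax(sizes))
        if sizes[worst] < max_mean_dist or len(clusters[worst]) < 2:
            return clusters                             # list of index arrays = market states
        idx = clusters.pop(worst)
        lab = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
        clusters += [idx[lab == 0], idx[lab == 1]]
```

Stacking the flattened sector-averaged matrices row-wise gives the points in the Euclidean space that `bisecting_kmeans` partitions into market states.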
at the beginning of the clustering procedureall of the correlation matrices are considered as one cluster , which is then divided into two sub clusters using the _ k_-means algorithm with .this separation procedure is repeated until the size of each cluster in terms of the mean distance of the cluster members to the cluster center is smaller than a given threshold .we choose the mean distance to be smaller than to achieve 8 clusters as in ref .the market is said to be in a market state at time , if the corresponding correlation matrix is in the cluster . in figure [ f.dendo+states ] ( a )the corresponding clustering tree is shown .figure [ f.dendo+states ] ( b ) depicts the temporal evolution of the financial market .new clusters form and existing clusters vanish in the course of time . and while the first 1000 trading days are dominated by state 1 and only occasional jumps to state 3 , we observe more frequent jumps and less stable behavior in more recent times .+ to quantify the market situation at time , the distance of the correlation matrix to the eight cluster centers ( ) is calculated .two of the eight resulting time series are shown in fig .[ f.states ] .these time series depict the temporal evolution of the system seen from the respective cluster centers .figure [ f.states ] shows that the market is in the beginning close to cluster center 1 and far away from cluster center 6 .this changes over time in accordance with fig .[ f.dendo+states ] ( b ) : cluster 6 occurs later while cluster 1 is present during the first half of the time period . ) of the system to cluster center 1 ( dashed , red ) and cluster center 6 ( black ) . ]a wide class of dynamical systems from different fields are modeled as stochastic processes and thus described by a stochastic differential equation , the langevin equation which describes the time evolution of the system variable as a sum of a deterministic function and a stochastic term .the stochastic force is gaussian distributed with and . for stationary continuous markov processes_ and friedrich _ et al . _ showed that it is possible to extract drift and diffusion functions directly from measured time series using the kramers - moyal coefficients , which are defined as conditional moments here denotes the value of the stochastic variable at which the drift function ( ) and the diffusion function ( ) are evaluated .the average in eq .( [ eq.condmom ] ) is performed over all realizations of for which the condition holds .equation ( [ eq.condmom ] ) expresses therefore ( modulo the exponent ) the mean increment of the variable after time step if starting at the given value at time .the derivative of this mean increment with respect to at is equal to the value of the drift function for and the diffusion function for at , as defined in eq .( [ eq.kramers ] ) .we refer to ref . for further details . in the present work we estimate the drift functions on time windows of 1000 trading days , on which the estimated quantities are treated as time independent. the time dependency is obtained by comparing the estimated drift functions on a sliding time window . especially for small datasets the estimation of the conditional moments ( [ eq.condmom ] ) might be tedious . additionally to the original estimation procedure proposed by siegert __ and friedrich _ et al . _ we use here a kernel based regression as proposed in ref . . 
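The Kramers-Moyal estimation just described can be sketched, for a one-dimensional series such as the distance to a cluster center, with a simple binned estimator of the conditional moments (the text additionally uses a kernel-based regression, which is omitted here). The bin count, the minimum sample count per bin, and the factor 1/2 in the diffusion term are conventional choices on our part.

```python
import numpy as np

def drift_diffusion(x, dt, n_bins=30, min_count=10):
    """Binned Kramers-Moyal estimate: D1(x) ~ <dx | x>/dt, D2(x) ~ <dx^2 | x>/(2 dt)."""
    dx = np.diff(x)
    x0 = x[:-1]
    edges = np.linspace(x0.min(), x0.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(x0, edges[1:-1])          # bin index (0 .. n_bins-1) of each starting value
    drift = np.full(n_bins, np.nan)
    diffusion = np.full(n_bins, np.nan)
    for b in range(n_bins):
        inc = dx[idx == b]
        if inc.size >= min_count:               # require enough samples per bin
            drift[b] = inc.mean() / dt
            diffusion[b] = (inc ** 2).mean() / (2.0 * dt)
    return centers, drift, diffusion
```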
instead of analyzing the drift function itself ,it is more convenient to consider the potential function defined as the negative primitive integral of the drift function .the minus sign is a convention .the dynamics of the system is encoded in the shape of : the local minima of the potential function correspond to quasi - stable equilibria , or quasi - stable fixed points , around which the system oscillates .in contrast , local maxima correspond to unstable fixed points .we note that , due to definition , potential functions are defined up to an additive constant which is here set to zero .we further note that both the drift function and the potential function have the dimension of inverse time .one drawback of clustering is that the number of clusters is in general not known _ a priori_.therefore a threshold criterion is used .furthermore , clustering is based only on geometrical properties like positions and distances between elements , _i.e. _ their similarity .the dynamics of the system is not involved . as a new approachwe combine the clustering analysis with the stochastic analysis .the aim is to extract dynamical attributes from time series and derive stability features and quasi - stable fixed points .from the eight time series as defined in eq .( [ eq.timeseries ] ) we calculate the deterministic potentials ( ) as defined in eq .( [ eq.phi ] ) . to grasp the time evolution of the clusters , _i.e. _ market states , we calculate on a window of 1000 data points ( four trading years ) which we shift by 21 data points ( one trading month ) , resulting in 199 deterministic potentials for each cluster .figure [ f.hit_centers ] shows a sample of these potentials for different time windows .the dotted vertical lines denote the distance to the cluster centers which are labeled at the abscissa .each potential in the five figures shows a clear minimum .hence , the market dynamics expressed by the correlation matrices performs a noisy dynamics around the attractive fixed point , which is defined by the minimum .most interestingly the position of the minima of these potentials coincide quite well with the distances to the cluster centers obtained from the cluster analysis . additionally to the positions of the fixed points of the system , the potentials provide information about the stability of the market in the analyzed time window .potential functions with more than only one clear local minimum , e.g. fig . [ f.hit_centers ]( a ) and ( b ) , reflect an unstable dynamics .in contrast an isolated and deep minimum as in fig . [ f.hit_centers ] ( e ) , corresponds to a stable dynamics . in fig .[ f.hit_centers ] ( c ) - ( e ) a transition from state 4 and 5 , which are very close , to state 2 is shown .the quasi - stable fixed point of the potential function changes its position in time and moves closer to the center of the first cluster .we note that in the intermediate state , as shown in fig .[ f.hit_centers ] ( d ) , the width of the potential function is increased . instead of a clear local minimum it has a rather flat plateau .the time evolution of the market reflected in the position change of the local minimum of the potential function is thus taken as a state transition .the ability of our approach to describe multi - stable and transitional behavior has also been shown by mcke _ in the context of wind energy . 
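A minimal sketch of turning the estimated drift into the potential described above, using a trapezoidal cumulative integral and fixing the arbitrary additive constant to zero. Applying it on windows of 1000 points shifted by 21 points, as in the text, yields the sequence of potentials whose minima are tracked in time.

```python
import numpy as np

def potential_from_drift(centers, drift):
    """Phi(x) = - integral of the estimated drift D1(x), anchored at Phi(x_0) = 0."""
    ok = ~np.isnan(drift)
    x, d1 = centers[ok], drift[ok]
    phi = -np.concatenate([[0.0],
                           np.cumsum(0.5 * (d1[1:] + d1[:-1]) * np.diff(x))])
    return x, phi

# Sliding-window use: re-estimate the drift on windows of 1000 points shifted by
# 21 points and follow how the local minima of phi move from window to window.
```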
stability of market states as well as state transitions are studied in detail in stepanov _besides the cases where the positions of the minima of the potentials coincide clearly with the distances to cluster centers , there are also less clear situations as shown in fig . [ f.merge ] . in the considered time window ] ( expressed in trading days ) shows a deep minimum at as seen from the center of the first market state .the data points are therefore distributed approximately around .the set of all possible fixed points is restricted to the points on the -dimensional hypersphere of radius around . to find the empirical fixed point for this setting we minimize ( [ eq.sumdist ] ) under the condition which we solve by the method of lagrange multipliers .we note that for the euclidian norm the problem always has a unique minimum and maximum , unless the distance ( [ eq.timeseries ] ) is constant .the empirical solution of the problem , denoted by , differs slightly from the center of the merged cluster defined in the previous section , .this is because the market does not exclusively stay in the merged state 5 * during the analyzed time period , as observed in fig .[ f.evo_states_6 ] ( a ) .we now quantify the deviation of the obtained fixed point from the averaged correlation matrix by looking at the difference of the distances from to and , respectively . the market is closer to than to whenever holds . the time series is shown in fig . [ f.evo_states_6 ] ( b ) .it is switching between positive and negative values .the grey background highlights the time points at which the market occupies the merged state 5 * , which remarkably coincidences with the negative values of .we almost exactly quantify the merged market states during which falls rapidly down from positive values . not only is the market state identified but also the dynamics of the market within the state . as seen from fig .[ f.evo_states_6 ] ( b ) , the values of continuously decrease and then increase again while the market occupies the merged state . in contrast is fluctuating around a positive value while the market is not in the merged state . as a consistency check, we applied the same algorithm with the center of the third cluster , as well the overall mean correlation matrix as reference points . in all caseswe obtained the same fixed point .the combination of the cluster analysis and the stochastic process analysis allows to characterize the dynamics of a dynamical system as a noise process between different quasi - stationary states . for financial marketsthese market states are defined in terms of correlation matrices which reflect the dependence structure of the stock market .especially , while being in a given market state , the market is fluctuating around the center of the corresponding cluster .a threshold criterion can produce geometrically distinct states , which are a single quasi - stationary state of the system .the deviation of the market situation at time from individual market states is reflected in the distance between the correlation matrix and the respective cluster centers .this distance is taken as the new low - dimensional order parameter of the complex system .the stochastic analysis provides evidence of how the market dynamics is guided by a changing potential landscape with temporally changing stability of the market states . emerging new quasi - stable statesare found . 
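The constrained estimate of the fixed point described above can be written compactly if the objective is taken as the summed squared Euclidean distances of the correlation matrices to the candidate point (the squaring is our assumption; the text's objective may use plain distances). The Lagrange-multiplier condition then projects the sample mean onto the hypersphere of radius r around the reference point Z, giving a unique minimum unless the mean coincides with Z.

```python
import numpy as np

def fixed_point_on_sphere(C, Z, r):
    """C: (T, d) flattened correlation matrices, Z: (d,) reference point, r: radius.

    Minimizes sum_t ||C_t - F||^2 subject to ||F - Z|| = r; with squared distances
    the solution is the projection of the sample mean onto the sphere around Z.
    """
    mean = C.mean(axis=0)
    direction = mean - Z
    return Z + r * direction / np.linalg.norm(direction)   # unique unless mean == Z
```

Using the overall mean correlation matrix or another cluster center as Z should, as the text reports, lead to the same fixed point.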
In this way we present a method that projects the dynamics of a high-dimensional complex system onto a low-dimensional collective dynamics. Furthermore, we address the high dimensionality of the data set through an optimization problem, which is well defined and robust against the choice of reference points. We obtain the high-dimensional quasi-stable fixed points of financial markets explicitly and see good prospects for applying this method to other complex systems as well.
We propose a combination of cluster analysis and stochastic process analysis to characterize high-dimensional complex dynamical systems by a few dominating variables. As an example, stock market data are analyzed, for which the dynamical stability as well as transitions between different stable states are found. The combined method also allows us to set up new criteria for merging clusters and thus to reduce the complexity of the system. The low-dimensional approach makes it possible to recover the high-dimensional fixed points of the system by means of an optimization procedure.
we are interested in describing the process of fish schooling by the ordinary differential equations .a model written in terms of ode is very useful .first , the rules of behavior of individual animals can be described precisely .second , many techniques which have been developed in the theory of ode can directly be available to analyse their solutions including asymptotic behavior and numerical computations .we will regard the fish as particles in the space .the direction in which a fish proceeds is regarded as its forward direction . as for the assumptions of modeling, we will follow the idea presented by camazine - deneubourg - franks - sneyd - theraulaz - bonabeau which is also based on empirical results aoki , huth - wissel and warburton - lazarus . in the monograph ( * ? ? ?* chapter 11 ) , they have made the following assumptions : 1 .the school has no leaders and each fish follows the same behavioral rules .2 . to decide where to move , each fish uses some form of weighted average of the position and orientation of its nearest neighbors .3 . there is a degree of uncertainty in the individual s behavior that reflects both the imperfect information - gathering ability of a fish and the imperfect execution of the fish s actions .we remark that similar assumptions , but deterministic ones , were also introduced by reynolds .as seen in section 2 , we formulate the motion of each individual by a system of deterministic and stochastic differential equations .the weight of average is taken analogously to the law of gravitation .that is , for the -th fish at position , the interacting force with the -th one at is given by (x_i - x_j),\ ] ] where are some fixed exponents and is a critical radius .this means that if and are far enough that , then the interaction is attractive ; conversely , if it is opposite , then the interaction is repulsive . the exponents and the radius may depend on the species of animal .the larger and are , the shorter the relative range of interactions between two individuals .a similar weight of average is used for the orientation matching , too , i.e. , (v_i - v_j).\ ] ] here , and denote velocities of the -th and -th animals , respectively .several kinds of mathematical models have already been presented , including difference or differential models .vicsek et al . introduced a simple difference model , assuming that each particle is driven with a constant absolute velocity and the average direction of motion of the particles in its neighborhood together with some random perturbation .oboshi et al . presented another difference model in which an individual selects one basic behavioral pattern from four based on the distance between it and its nearest neighbor .finally , olfati - saber and d'orsogna et al . constructed a deterministic differential model using a generalized morse and attractive / repulsive potential functions , respectively . in this paper , after introducing the model equations , we shall prove local existence of solutions and in some particular cases global existence , too .we shall also present some numerical examples which show robustness of the behavioral rules introduced in ( * ? ? ?* chapter 11 ) for forming a swarm against the uncertainty of individual s information processing and executing its actions . 
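As a sketch of the kind of model described above, anticipating the equations of Section 2, the following Euler-Maruyama simulation uses the gravitation-like attraction/repulsion weight for positions, the analogous weight for velocity matching, and independent Brownian noise on the positions. All parameter values (alpha, beta, p, q, r, sigma) are illustrative and not those of the paper's numerical examples.

```python
import numpy as np

def simulate_school(n=10, dim=2, steps=5000, dt=1e-3,
                    alpha=1.0, beta=1.0, p=2, q=4, r=1.0, sigma=0.05, seed=0):
    """Euler-Maruyama integration of the interacting-particle schooling model."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim))          # positions
    v = rng.uniform(-1, 1, (n, dim))          # velocities
    traj = [x.copy()]
    for _ in range(steps):
        dv = np.zeros_like(v)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dxy = x[i] - x[j]
                d = np.linalg.norm(dxy)
                wa = (r / d) ** p - (r / d) ** q      # attraction/repulsion weight
                wm = (r / d) ** p + (r / d) ** q      # velocity-matching weight
                dv[i] += -alpha * wa * dxy - beta * wm * (v[i] - v[j])
        x = x + v * dt + sigma * np.sqrt(dt) * rng.standard_normal((n, dim))
        v = v + dv * dt
        traj.append(x.copy())
    return np.array(traj)                      # (steps+1, n, dim) trajectory
```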
in the forthcoming paper, we are going to construct a particle swarm optimization scheme on the basis of the behavioral rules of swarming animals which can spontaneously and successfully find their feeding stations .the organization of the present paper is as follows . in the next section, we show our model equations .section 3 is devoted to proving local existence of solutions .section 4 gives global existence for both deterministic and stochastic cases but the number of animal is only two .some numerical examples that suggest global existence is not true in general are presented in section 5 .we consider motion of fish . they are regarded as moving particles in the space .the position of the -th particle is denoted by .its velocity is denoted by .our model is then given by (x_i - x_j ) \\ & { } \qquad\quad\enskip - \beta \sum_{j=1,\ , j\not = i}^n \big [ \frac1{(\|x_i - x_j\|/r)^p } + \frac1{(\|x_i - x_j\|/r)^q } \big](v_i - v_j ) \\ & { } \qquad\quad\enskip + f_i(t , x_i , v_i ) \big\ } dt .\end{aligned } \right.\ ] ] the first equation is a stochastic equation on , where denotes a noise resulting from the imperfectness of information - gathering and action of the fish .in fact , are independent -dimensional brownian motions defined on a complete probability space with filtration satisfying the usual conditions .the second one is a deterministic equation on , where are fixed exponents , is a fixed radius and are positive constants .finally , denotes an external force at time which is a given function defined for with values in .it is assumed that are locally lipschitz continuous . in what follows , for simplicity, we shall put then , the system is rewritten in the form (x_i - x_j)\\ & { } \qquad\quad\enskip -\beta_1 \sum_{j=1 , j\ne i}^n\big[\frac{1}{||x_i - x_j||^p}+\frac{\gamma}{||x_i - x_j||^q}\big](v_i - v_j ) \\ & { } \qquad\quad\enskip + f_i(t , x_i , v_i)\big\}dt , \end{aligned}\right.\ ] ] for set the phase space since all the functions in the right hand side of are locally lipschitz continuous in , the existence and uniqueness of local solutions to starting from points belonging to this phase space are obvious in both deterministic and stochastic cases , see for instance .thus , we have [ thm0 ] for any initial value has a unique local solution defined on an interval with values in , where and if it is an explosion time .in this section , we shall consider the case where and prove global existence for .first , the deterministic case ( i.e. , ) is treated with null external forces .second , the stochastic case ( i.e. , ) is treated but under the restriction that and satisfy the relations and ( therefore , in particular , ) .the system has the form where [ th1 ] let and .then , for any initial value has a unique global solution with values in . as stated in theorem[ thm0 ] , there is a unique solution to defined on an interval where denotes the explosion time . on ,is equivalent to (x_1-x_2 ) \\ & \enskip\enskip -2\left[\frac{\beta_1}{||x_1-x_2||^p}+\frac{\beta_1 \gamma } { ||x_1-x_2||^q}\right](v_1-v_2 ) .\end{aligned } \end{cases}\ ] ] thus , +x_1(0)+x_2(0 ) , \\v_1(t)+v_2(t ) & = v_1(0)+v_2(0 ) , \\ \frac{d(x_1-x_2)}{dt } & = v_1-v_2 , \\\frac{d(v_1-v_2)}{dt } & = -2\left[\frac{\alpha_1}{||x_1-x_2||^p } - \frac{\alpha_1\gamma}{||x_1-x_2||^q } \right](x_1-x_2 ) \\ & \enskip\enskip -2\left[\frac{\beta_1}{||x_1-x_2||^p } + \frac{\beta_1 \gamma}{||x_1-x_2||^q}\right](v_1-v_2 ) . \end{aligned } \end{cases}\ ] ]so we put and . 
in order to prove that , it suffices to show that the solution starting in of the following system is global .obviously , is the explosion time of , too .suppose that .on , we put .then , it is easy to verify that satisfies and also satisfies the following equations furthermore , =\infty.\ ] ] by introducing a function with a sufficiently large , we observe that \\ & + 2 m z[y-2\alpha_1 x^{p-2}+2\alpha_1\gamma x^{q-2}-2 ( \beta_1 x^p+\beta_1 \gamma x^q ) z ] \\ & -4\alpha_1 x^pz+4\alpha_1\gamma x^qz-4 ( \beta_1 x^p+\beta_1 \gamma x^q)y+4x^{-2}z\\ = & -(q-2 ) x^qz-8 \alpha_1 x^pyz+8 \alpha_1 \gamma x^qyz + 2myz - 4m\alpha_1 x^{p-2 } z \\ & + ( 4m\alpha_1\gamma - q+4 ) x^{q-2 } z -4 \alpha_1 x^p z + 4\alpha_1 \gamma x^qz\\ & -8(\beta_1 x^p+\beta_1 \gamma x^q)y^2 - 4 m ( \beta_1 x^p+\beta_1 \gamma x^q)z^2\\ & -4 ( \beta_1 x^p+\beta_1 \gamma x^q)y+4x^{-2}z .\end{aligned}\ ] ] it is easily seen that , for a sufficient small , it holds true that in addition , it is clear that then it follows that there exists such that for is estimated by on .therefore , by the comparison theorem , we obtain for all .thus , due to , therefore , the solution of must be global . in this subsection , we consider the stochastic case .the system becomes (x_i - x_j ) \\ & { } \qquad\quad -\left[\frac{\beta_1}{||x_i - x_j||^p } + \frac{\beta_1 \gamma}{||x_i - x_j||^q}\right](v_i - v_j)\big\}dt , \end{aligned } \right.\ ] ] where for the situation is not similar to that of the deterministic case .precisely , if and then the global existence is shown , while if or then some solution may explode at a finite time . [ th3 ] let and .then , for any initial value has a unique global solution in . from theorem [ thm0 ] , there exists a local solution of defined on where is an explosion time . in that intervalwe have +\sigma_1 w_1(t)+\sigma_2 w_2(t ) + x_1(0)+x_2(0 ) , \\ d(x_1-x_2 ) & = ( v_1-v_2)dt + \sigma_1 dw_1(t)-\sigma_2 dw_2(t ) , \\d(v_1-v_2 ) & = \big\ { -2\left[\frac{\alpha_1}{||x_1-x_2||^p } -\frac{\alpha_1\gamma}{||x_1-x_2||^q } \right](x_1-x_2 ) \\ & { } \quad -2\left[\frac{\beta_1}{||x_1-x_2||^p } + \frac{\beta_1 \gamma}{||x_1-x_2||^q}\right](v_1-v_2)\big\ } dt .\end{aligned } \end{cases}\ ] ] then becomes an explosion time of the following system , \end{aligned } \end{cases}\ ] ] too , where and ] and .figure 1 illustrates positions of particles and their velocity vectors at in .figure 2 does the same at in . for the case , we set and . an initial value is generated randomly in ^ 2 ] and .figure 4 illustrates behavior of the distance of the two particles and .99 i. aoki , _ a simulation study on the schooling mechanism in fish _japanese soc .scientific fisheries * 48 * ( 1982 ) , 1081 - 1088. l. arnold , _ stochastic differential equations : theory and applications _ , wiley , new york , 1972 .s. camazine , j. l. deneubourg , n. r franks , j. sneyd , g. theraulaz and e. bonabeau , _ self - organization in biological systems _ , princeton university press , 2001 .m. r. d'orsogna , y. chuang , a. bertozzi , l. chayes , _ self - propelled particles with soft - core interactions : patterns , stability and collapse _ ,* 96 * ( 2006 ) , 104302 .a. friedman , _ stochastic differential equations and their applications _ , academic press , new york , 1976 .a. huth and c. wissel , _ the simulation of the movement of fish school _ , j. theor . biol . *156 * ( 1992 ) , 365 - 385 .t. oboshi , s. kato , a. mutoh , h. 
Itoh, _Collective or scattering: evolving schooling behaviors to escape from predator_, Artificial Life VIII, MIT Press, Cambridge, MA (2002), 386-389.
R. Olfati-Saber, _Flocking for multi-agent dynamic systems: algorithms and theory_, IEEE Trans. Automat. Control *51* (2006), 401-420.
C. W. Reynolds, _Flocks, herds, and schools: a distributed behavioral model_, Computer Graphics *21* (1987), 25-34.
T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, _Novel type of phase transition in a system of self-driven particles_, Phys. Rev. Lett. *75* (1995), 1226-1229.
K. Warburton and J. Lazarus, _Tendency-distance models of social cohesion in animal groups_, J. Theor. Biol. *150* (1991), 473-488.
This paper presents a stochastic differential equation model describing the process of fish schooling. The model equations always possess a unique local solution, but global existence can be shown only in some particular cases. Numerical examples suggest that global existence may fail in general.
in the study of networked systems such as social , biological , and technological networks , centrality is one of the most fundamental of metrics .centrality quantifies how important or influential a node is within a network .the simplest of centrality measures , the _ degree centrality _ , or simply degree , is the number of connections a node has to other nodes . in a social network of acquaintances , for example , someone who knows many people is likely to be more influential than someone who knows few or none .eigenvector centrality is a more sophisticated variant of the same idea , which recognizes that not all acquaintances are equal .you are more influential if the people you know are themselves influential .eigenvector centrality defines a centrality score for each node in an undirected network , which is proportional to the sum of the scores of the node s network neighbors , where is a constant and the sum is over all nodes . here is an element of the adjacency matrix of the network having value one if there is an edge between nodes and and zero otherwise . defining a vector whose elements are the , we then have , meaning that the vector of centralities is an eigenvector of the adjacency matrix . if we further stipulate that the centralities should all be nonnegative , it follows by the perron frobenius theorem that must be the leading eigenvector ( the vector corresponding to the most positive eigenvalue ) .eigenvector centrality and its variants are some of the most widely used of all centrality measures .they are commonly used in social network analysis and form the basis for ranking algorithms such as the hits algorithm and the eigenfactor metric . as we argue in this paper, however , eigenvector centrality also has serious flaws .in particular , we show that , depending on the details of the network structure , the leading eigenvector of the adjacency matrix can undergo a localization transition in which most of the weight of the vector concentrates around one or a few nodes in the network . while there may be situations , such as the solution of certain physical models on networks , in which localization of this kind is useful or at least has some scientific interest , in the present case it is undesirable , significantly diminishing the effectiveness of the centrality as a tool for quantifying the importance of nodes . moreover , as we will show , localization can happen under common real - world conditions , for instance in networks with power - law degree distributions . as a solution to these problems ,we propose a new centrality measure based on the leading eigenvector of the hashimoto or nonbacktracking matrix .this measure has the desirable properties of ( 1 ) being closely equal to the standard eigenvector centrality in dense networks , where the latter is well behaved , while also ( 2 ) avoiding localization , and hence giving useful results , in cases where the standard centrality fails .a number of numerical studies of real - world networks have shown evidence of localization phenomena in the past . 
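For reference, the standard eigenvector centrality discussed above can be computed with a few lines of power iteration on the adjacency matrix; a dense matrix is used here for clarity, though sparse storage is preferable for large networks.

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=10000):
    """Leading eigenvector of the adjacency matrix A by power iteration."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(max_iter):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return w            # nonnegative by the Perron-Frobenius theorem (connected graph)
```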
in this paperwe formally demonstrate the existence of a localization phase transition in the eigenvector centrality and calculate its properties using techniques of random matrix theory .the fundamental cause of the localization phenomenon we study is the presence of `` hubs '' within networks , nodes of unusually high degree , which are a common occurrence in many real - world networks .consider the following simple undirected network model consisting of a random graph plus a single hub node , which is a special case of a model introduced previously in . in a network of nodes , of them form a random graph in which every distinct pair of nodes is connected by an undirected edge with independent probability , where is the mean degree .the node is the hub and is connected to every other node with independent probability , so that the expected degree of the hub is . in the regime where it is known that ( with high probability ) the spectrum of the random graph alone has the classic wigner semicircle form , centered around zero , plus a single leading eigenvalue with value and corresponding leading eigenvector equal to the uniform vector plus random gaussian noise of width .thus the eigenvector centralities of all vertices are with only modest fluctuations .no single node dominates the picture and the eigenvector centrality is well behaved .if we add the hub to the picture , however , things change .the addition of an extra vertex naturally adds one more eigenvalue and eigenvector to the spectrum , whose values we can calculate as follows . let denote the adjacency matrix of the random graph alone and let the vector be the first elements of the final row and column , representing the hub .( the last element is zero . )thus the full adjacency matrix has the form let be an eigenvalue of and let be the corresponding eigenvector , where represents the first elements and is the last element . then , multiplying out the eigenvector equation , we find rearranging the first of these , we get and substituting into the second we get where is the identity . writing the matrix inverse in terms of its eigendecomposition , where is the eigenvector of and is the corresponding eigenvalue , eq . becomes where we have explicitly separated the largest eigenvalue and the remaining eigenvalues , which follow the semicircle law .although we do nt know the values of the quantities appearing in eq . , the left - hand side as a function of clearly has poles at each of the eigenvalues and a tail that goes as for large .moreover , for properly normalized the numerator of the first term in the equation is and hence this term diverges significantly only when is also , i.e. , when is very close to the leading eigenvalue .hence the qualitative form of the function must be as depicted in fig .[ fig : solution ] and solutions to the full equation correspond to the points where this form crosses the diagonal line representing the right - hand side of the equation .these points are marked with dots in the figure . as the geometry of the figure makes clear, the solutions for , which are the eigenvalues of the full adjacency matrix of our model including the hub vertex , must fall in between the eigenvalues of the matrix , and hence satisfy an interlacing condition of the form , where we have numbered both sets of eigenvalues in order from largest to smallest . 
in the limit where the network becomes large and the eigenvalues form a continuous semicircular band , this interlacing imposes tight bounds on the solutions to , such that they must follow the same semicircle distribution .moreover , the leading eigenvalue has to fall within of , and hence in the large size limit .( marked by the vertical dashed lines ) .the diagonal line represents the right - hand side and the points where the two cross , marked by dots , are the solutions of the equation for .,width=321 ] this leaves just two unknown eigenvalues , lying above the semicircular band and lying below it . in the context of the eigenvector centrality it is the one at the top that we care about . in fig .[ fig : solution ] this eigenvalue is depicted as lying below the leading eigenvalue , but it turns out that this is not always the case , as we now show .consider eq .for any value of well away from , so that the first term on the left can be neglected ( meaning that is not within of ) .the vector for is uncorrelated with and hence the product is a gaussian random variable with variance and , averaging over the randomness , the equation then simplifies to the quantity is a standard one in the theory of random matrices it is the so - called stieltjes transform of , whose value for a symmetric matrix with iid elements such as this one is known to be combining eqs . and and solving for we find the eigenvalue we are looking for : depending on the degree of the hub , this eigenvalue may be either smaller or larger than the other high - lying eigenvalue . writing and rearranging , we see that the hub eigenvalue becomes the leading eigenvalue when i.e. , when the hub degree is roughly the square of the mean degree . below this point ,the leading eigenvalue is the same as that of the random graph without the hub and the eigenvector centrality is given by the corresponding eigenvector , which is well behaved , so the centrality has no problems . above this point, however , the leading eigenvector is the one introduced by the hub , and this eigenvector , as we now show , has severe problems .if the eigenvector is normalized to unity then eq. implies that ,\ ] ] and hence where is again the stieltjes transform , eq . ,and is its derivative . performing the derivative and setting , we find that which is constant and does not vanish as .in other words , a finite fraction of the weight of the vector is concentrated on the hub vertex .the neighbors of the hub also receive significant weight : the average of their values is given by thus they are smaller than the hub centrality , but still constant for large . finally , defining the -element uniform vector , the average of all non - hub vector elements is where we have used eq . again .averaging over the randomness and noting that and are independent and that the average of is , we then get which falls off as for large . thus , in the regime above the transition defined by , where the eigenvector generated by adding the hub is the leading eigenvector , a non - vanishing fraction of the eigenvector centrality falls on the hub vertex and its neighbors , while the average vertex in the network gets only an vanishing fraction in the limit of large , much less than the fraction received by the average vertex below the transition .this is the phenomenon we refer to as localization : the abrupt focusing of essentially all of the centrality on just a few vertices as the degree of the hub passes above the critical value . 
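The inverse participation ratio used in the next paragraph as an order parameter for this localization transition is simply the sum of fourth powers of the unit-normalized centrality vector; it is of order 1/n for a delocalized vector and remains of order one when a few entries carry finite weight.

```python
import numpy as np

def inverse_participation_ratio(v):
    """S = sum_i v_i^4 for a unit-normalized vector v; S ~ 1/n without localization."""
    v = v / np.linalg.norm(v)
    return np.sum(v ** 4)
```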
in the localized regimethe eigenvector centrality picks out the hub and its neighbors clearly , but assigns vanishing weight to the average node .if our goal is to determine the relative importance of non - hub nodes , the eigenvector centrality will fail in the localized regime . as a demonstration of the localization phenomenon, we show in fig .[ fig : hub ] plots of the centralities of nodes in networks generated using our model .each plot shows the average centrality of the hub , its neighbors , and all other nodes for a one - million - node network with .the top two plots show the situation for the standard eigenvector centrality for two different values of the hub degree and .the former lies well within the regime where there is no localization , while the latter is in the localized regime . the difference between the two is striking in the first the hub andits neighbors get higher centrality , as they should , but only modestly so , while in the second the centrality of the hub vertex becomes so large as to dominate the figure . the extent of the localization can be quantified by calculating an inverse participation ratio . in the regime below the transition where there is no localization and all elements we have .but if one or more elements are , then also .hence if there is a localization transition in the network then , in the limit of large , will go from being zero to nonzero at the transition in the classic manner of an order parameter .[ fig : transition ] shows a set of such transitions in our model , each falling precisely at the expected position of the localization transition .so far we have looked only at the localization process in a simple model network , but localization occurs in more realistic networks as well .in general , we expect it to be a problem in networks with high - degree hubs or in very sparse networks , those with low average degree , where it is relatively easy for the degree of a typical vertex to exceed the localization threshold .many real - world networks fall into these categories .consider , for example , the common case of a network with a power - law degree distribution , such that the fraction of nodes with degree goes as for some constant exponent .we can mimic such a network using the so - called configuration model , a random graph with specified degree distribution .there are again two different ways a leading eigenvalue can be generated , one due to the average behavior of the entire network and one due to hub vertices of particularly high degree . in the first case the highest eigenvalue for the configuration model is known to be equal to the ratio of the second and first moments of the degree distribution in the limit of large network size and large average degree . 
at the same time , the leading eigenvalue must satisfy the rayleigh bound for any real vector , with better bounds achieved when better approximates the true leading eigenvector .if denotes the highest degree of any hub in the network and we choose an approximate eigenvector of form similar to the one in our earlier model network , having elements for the hub , for neighbors of the hub , and zero otherwise , then the rayleigh bound implies .thus the eigenvector generated by the hub will be the leading eigenvector whenever ( possibly sooner , but not later ) .as a function of hub degree for networks generated using the model described in the text with vertices and average degree ranging from 4 to 11 .the solid curves are eigenvector centrality ; the horizontal dashed curves are the nonbacktracking centrality .the vertical dashed lines are the expected positions of the localization transition for each curve , from eq ..,width=321 ] in a power - law network with vertices and exponent , the highest degree goes as and hence increases with increasing , while and for the common case of .thus we will have for large provided .so we expect the hub eigenvector to dominate and the eigenvector centrality to fail due to localization when , something that happens in many real - world networks .( similar arguments have also been made by chung _ et al . _ and by goltsev _ et al . _we give empirical measurements of localization in a number of real - world networks in table [ tab : power ] below .so if eigenvector centrality fails to do its job , what can we do to fix it ?qualitatively , the localization effect arises because a hub with high eigenvector centrality gives high centrality to its neighbors , which in turn reflect it back again and inflate the hub s centrality .we can make the centrality well behaved again by preventing this reflection . to achieve thiswe propose a modified eigenvector centrality , similar in many ways to the standard one , but with an important change .we define the centrality of node to be the sum of the centralities of its neighbors as before , but the neighbor centralities are now calculated _ in the absence of node . 
this is a natural definition in many ways when i ask my neighbors what their centralities are in order to calculate my own , i want to know their centrality due to their other neighbors , not myself .this modified eigenvector centrality has the desirable property that when typical degrees are large , so that the exclusion or not of any one node makes little difference , its value will tend to that of the standard eigenvector centrality .but in sparser networks of the kind that can give problems , it will be different from the standard measure and , as we will see , better behaved .our centrality measure can be calculated using the hashimoto or nonbacktracking matrix , which is defined as follows .starting with an undirected network with edges , one first converts it to a directed one with edges by replacing each undirected edge with two directed ones pointing in opposite directions .the nonbacktracking matrix is then the non - symmetric matrix with one row and one column for each directed edge and elements where is the kronecker delta .thus a matrix element is equal to one if edge points into the same vertex that edge points out of and edges and are not pointing in opposite directions between the same pair of vertices , and zero otherwise .note that , since the nonbacktracking matrix is not symmetric , its eigenvalues are in general complex , but the largest eigenvalue is always real , as is the corresponding eigenvector .the element of the leading eigenvector of the nonbacktracking matrix now gives us the centrality of vertex ignoring any contribution from , and the full nonbacktracking centrality of vertex is defined to be the sum of these centralities over the neighbors of : in principle one can calculate this centrality directly by calculating the leading eigenvector of and then applying eq . .in practice , however , one can perform the calculation faster by making use of the so - called ihara ( or ihara bass ) determinant formula , from which it can be shown that the vector of centralities is equal to the first elements of the leading eigenvector of the matrix where is the adjacency matrix as previously , is the identity matrix , and is the diagonal matrix with the degrees of the vertices along the diagonal . since only has marginally more nonzero elements than the adjacency matrix itself ( for a network with edges and vertices , versus for the adjacency matrix ) , finding its leading eigenvector takes only slightly longer than the calculation of the ordinary eigenvector centrality . to see that the nonbacktracking centrality can indeed eliminate the localization transition ,consider again our random - graph - plus - hub model and , as before , let us first consider the random graph on its own , without the hub. our goal will be to calculate the leading eigenvalue of the nonbacktracking matrix for this random graph and then demonstrate that no other eigenvalue ever surpasses it even when the hub is added into the picture , and hence that there is no transition of the kind that occurs with the standard eigenvector centrality .since all elements of the nonbacktracking matrix are real and nonnegative , the leading eigenvalue and eigenvector satisfy the perron frobenius theorem , meaning the eigenvalue is itself real and nonnegative as are all elements of the eigenvector for appropriate choice of normalization .note moreover that at least one element of the eigenvector must be nonzero , so the average of the elements is strictly positive . 
making use of the definition of the nonbacktracking matrix in eq ., the eigenvector equation takes the form or where we have changed variables from to for future convenience .expressed in words , this equation says that times the centrality of an edge emerging from vertex is equal to the sum of the centralities of the other edges feeding into . for an uncorrelated , locallytree - like random graph of the kind we are considering here , i.e. , a network where the source and target of a directed edge are chosen independently and there is a vanishing density of short loops , the centralities on the incoming edges are drawn at random from the distribution over all edges the fact that they all point to vertex has no influence on their values in the limit of large graph size . bearing this in mind ,let us calculate the average of the centralities over all edges in the network , which we do in two stages .first , making use of eq ., we calculate the sum over all edges originating at vertices whose degree takes a particular value : where is the number of vertices with degree and we have in the third line made use of the fact that has the same distribution as values in the graph as whole to make the replacement in the limit of large graph size .now we sum this expression over all values of and divide by the total number of edges to get the value of the average vector element : thus for any vector we must either have , which as we have said can not happen for the leading eigenvector , or for the particular case of the poisson random graph under consideration here , this gives a leading eigenvalue of , the average degree .this result has been derived previously by other means but the derivation given here has the advantage that it is easy to adapt to the case where we add a hub vertex to the network .doing so adds just a single term to eq .thus : ,\ ] ] where is the degree of the hub , as previously .hence the leading eigenvalue is for constant and constant ( or growing ) average degree , however , the term in becomes negligible in the limit of large and we recover the same result as before .thus no new leading eigenvalue is introduced by the hub in the case of the nonbacktracking matrix , and there is no phase transition as eigenvalues cross for any value of .it is worth noting , however , that there are other mechanisms by which high - lying eigenvalues can be generated .for instance , if a network contains a large clique ( a complete subgraph in which every node is connected to every other ) it can generate an outlying eigenvalue of arbitrary size , as we can see by making use of the so - called collatz wielandt formula , a corollary of the perron frobenius theorem that says that for any vector the leading eigenvalue satisfies }{v_i}.\ ] ] choosing a whose elements are one for edges within the clique and zero elsewhere , we find that a clique of size implies , which can supersede any other leading eigenvalue for sufficiently large .the corresponding eigenvector is localized on the clique vertices , potentially causing trouble once again for the eigenvector centrality .this localization on cliques would be an interesting topic for further investigation . 
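A minimal sketch of the nonbacktracking centrality via the 2n x 2n matrix mentioned above. The particular block layout [[A, I - D], [I, 0]] is our reading of the Ihara-Bass reduction, and a dense eigensolver is used for brevity (sparse methods are preferable in practice).

```python
import numpy as np

def nonbacktracking_centrality(A):
    """First n entries of the leading eigenvector of the 2n x 2n Ihara-Bass matrix."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    I = np.eye(n)
    M = np.block([[A, I - D],
                  [I, np.zeros((n, n))]])
    vals, vecs = np.linalg.eig(M)
    lead = np.argmax(vals.real)               # leading eigenvalue is real
    v = vecs[:n, lead].real
    return np.abs(v) / np.linalg.norm(v)      # fix overall sign and normalization
```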
As a test of our nonbacktracking centrality, we show in the lower two panels of Fig. [fig:hub] results for the same networks as in the top two panels. As the figure makes clear, the measure now remains well behaved in the regime beyond the former position of the localization transition: there is no longer a large jump in the value of the centrality on the hub or its neighbors as we pass the transition. Similarly, the dashed curves in Fig. [fig:transition] show the inverse participation ratio for the nonbacktracking centrality, and again all evidence of localization has vanished.

Table [tab:power]: inverse participation ratio for the standard eigenvector centrality and the nonbacktracking centrality.

  Network                    n          Eigenvector    Nonbacktracking
  Synthetic:
    planted hub              1000001    --             --
    planted hub              1000001    0.2567         --
    power law                1000000    0.0089         0.0040
    power law                1000000    0.2548         0.0011
  Real-world:
    physics collaboration    12008      0.0039         0.0039
    word associations        13356      0.0305         0.0075
    youtube friendships      1138499    0.0479         0.0047
    company ownership        7253       0.2504         0.0161
    ph.d. advising           1882       0.2511         0.0386
    electronic circuit       512        0.1792         0.0056
    amazon                   334863     0.0510         0.0339

The inverse participation ratio also provides a convenient way to test for localization in other networks, both synthetic and real. Table [tab:power] summarizes results for eleven networks, for both the traditional eigenvector centrality and the nonbacktracking version. The synthetic networks are generated using the random-graph-plus-hub model of this paper and the configuration model with a power-law degree distribution. In each case there is evidence of localization in the eigenvector centrality in the regimes where it is expected, and not otherwise, but no localization at all, in any case, for the nonbacktracking centrality.
a similar picture is seen in the real - world networks typically either localization in the eigenvector centrality but not the nonbacktracking version , or localization in neither case .figure [ fig : circuit ] illustrates the situation for one of the smaller real - world networks , where the values on the highest - degree vertex and its neighbors are overwhelmingly large for the eigenvector centrality ( left panel ) but not for the nonbacktracking centrality ( right panel ) .in this paper we have shown that the widely used network measure known as eigenvector centrality fails under commonly occurring conditions because of a localization transition in which most of the weight of the centrality concentrates on a small number of vertices .the phenomenon is particularly visible in networks with high - degree hubs or power - law degree distributions , which includes many important real - world examples .we propose a new spectral centrality measure based on the nonbacktracking matrix which rectifies the problem , giving values similar to the standard eigenvector centrality in cases where the latter is well behaved , but avoiding localization in cases where the standard measure fails .the new measure is found to give significant decreases in localization on both synthetic and real - world networks .moreover , the new measure can be calculated almost as quickly as the standard one , and hence is practical for the analysis of very large networks of the kind common in recent studies .the nonbacktracking centrality is not the only possible solution to the problem of localization .for example , in studies of other forms of localization in networks it has been found effective to introduce a regularizing `` teleportation '' term into the adjacency and similar matrices , i.e. , to add a small amount to every matrix element as if there were a weak edge between every pair of vertices .this strategy is reminiscent of google s pagerank centrality measure , a popular variant of eigenvector centrality that includes such a teleportation term , and recent empirical studies suggest that pagerank may be relatively immune to localization .it would be a worthwhile topic for future research to develop theory similar to that presented here to describe localization ( or lack of it ) in pagerank and related measures .the authors thank cris moore , elchanan mossel , raj rao nadakuditi , romualdo pastor - satorras , lenka zdeborov , and pan zhang for useful conversations .this work was funded in part by the national science foundation under grants dms1107796 and dms1407207 and by the air force office of scientific research ( afosr ) and the defense advanced research projects agency ( darpa ) under grant fa95501210432 .
Eigenvector centrality is a common measure of the importance of nodes in a network. Here we show that, under common conditions, the eigenvector centrality displays a localization transition that causes most of the weight of the centrality to concentrate on a small number of nodes in the network. In this regime the measure is no longer useful for distinguishing among the remaining nodes and its efficacy as a network metric is impaired. As a remedy, we propose an alternative centrality measure based on the nonbacktracking matrix, which gives results closely similar to the standard eigenvector centrality in dense networks, where the latter is well behaved, but avoids localization and gives useful results in regimes where the standard centrality fails.
convolutional neural networks ( cnns ) show state - of - the - art performance on many problems in computer vision , natural language processing and other fields . at the same time , cnns require millions of floating point operations to process an image and therefore real - time applications need powerful cpu or gpu devices .moreover , these networks contain millions of trainable parameters and consume hundreds of megabytes of storage and memory bandwidth .thus , cnns are forced to use ram instead of solely relying on the processor cache orders of magnitude more energy efficient memory device which increases the energy consumption even more .these reasons restrain the spread of cnns on mobile devices . to address the storage and memory requirements of neural networks , used tensor decomposition techniques to compress fully - connected layers .they represented the parameters of the layers in the tensor train format and learned the network from scratch in this representation .this approach provided enough compression to move the storage bottleneck of vgg-16 from the fully - connected layers to convolutional layers . for a more detailed literature overview ,see sec .[ sec : related - works ] . in this paper, we propose a tensor factorization based method to compress convolutional layers .our contributions are : * we experimentally show that applying the tensor train decomposition the compression technique used in directly to the tensor of a convolution yields poor results ( see sec .[ sec : experiments ] ) .we explain this behavior and propose a way to reshape the 4-dimensional kernel of a convolution into a multidimensional tensor to fully utilize the compression power of the tensor train decomposition ( see sec .[ sec : tt - conv ] ) .* we experimentally show that the proposed approach allows compressing a network that consists only of convolutions up to times with accuracy decrease ( sec .[ sec : experiments ] ) .* we combine the proposed approach with the fully - connected layers compression of . compressing both convolutional and fully - connected layers of a network yields network compression with accuracy drop ,see sec .[ sec : experiments ] .a convolutional network is a type of feed - forward architecture that transforms an input image to the final class scores using a sequence of layers .the main building block of such networks is a convolutional layer , that transforms the -dimensional input tensor into the output tensor by _ convolving _ with the kernel tensor : to improve the computational performance , many deep learning frameworks reduce the convolution to a matrix - by - matrix multiplication ( see fig .[ fig : conv - to - mat ] ) .we exploit this matrix formulation to motivate a particular way of applying the tensor train format to the convolutional kernel ( see sec .[ sec : tt - conv ] ) . in the rest of this section, we introduce the notation needed to reformulate convolution as a matrix - by - matrix multiplication . ,scaledwidth=70.0% ] for convenience , we denote and .let us reshape the output tensor into a matrix of size in the following way let us introduce a matrix of size , the -th row of which corresponds to the patch of the input tensor that is used to compute the -th row of the matrix where , , . 
finally , we reshape the kernel tensor into a matrix of size using the matrices defined above, we can rewrite the convolution definition as .note that the compression approach presented in the rest of the paper works with other types of convolutions , such as convolutions with padding , stride larger than , or rectangular filters .but for clarity , we illustrate the proposed idea on the basic convolution .the tt - decomposition ( or tt - representation ) of a tensor is the set of matrices \in \mathbb{r}^{r_{k-1}\times r_{k}}, ] are called _ tt - cores _ .the tt - format requires parameters to represent a tensor which has elements .the tt - ranks control the trade - off between the number of parameters versus the accuracy of the representation : the smaller the tt - ranks , the more memory efficient the tt - format is .an attractive property of the tt - format is the ability to efficiently perform basic linear algebra operations on tensors by working on the tt - cores of the tt - format , i.e. without materializing the tensor itself . for a matrix a -dimensional tensor the tt - decomposition coincides with the matrix low - rank decompositionto represent a matrix more compactly than in the low - rank format , the matrix tt - format is defined in a special way .let us consider a matrix of size where , and reshape it into a tensor of size by defining bijective mappings and .the mapping maps row index into a -dimensional vector index , where -th dimension varies from to .the bijection maps column index into a -dimensional vector index , where -th dimension varies from to .thus , using these mappings , we can form the tensor , whose -th dimension is indexed by the compound index and consider its tt - representation : \ldots{\mathbold{g}}_d[(\mu_d(\ell),\nu_d(t))].\ ] ]in this section , we propose two ways to represent a convolutional kernel in the tt - format .one way is to apply the tt - decomposition to the tensor directly . to see the drawbacks of this approach ,consider a convolution , which is a small fully - connected layer applied to the channels of the input image in each pixel location .the kernel of such convolution is essentially a -dimensional array , and the tt - decomposition of -dimensional arrays coincides with the matrix low - rank format .but for fully - connected layers , the matrix tt - format proved to be more efficient than the matrix low - rank format .thus , we seek for a decomposition that would coincide with the matrix tt - format on convolutions . taking into account that a convolutional layer can be formulated as a matrix - by - matrix multiplication ( see sec . [sec : conv ] ) , we reshape the 4-dimensional kernel tensor into a matrix of size , where .then we apply the matrix tt - format ( see sec .[ sec : tt - format ] ) to the matrix , i.e. reshape it into a tensor and convert it into the tt - format . 
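To make the two steps above concrete, the sketch below first implements a plain TT-SVD (sequential truncated SVDs) and then reshapes an (l, l, C, S) convolution kernel into the (d+1)-dimensional tensor with compound (c_i, s_i) modes before decomposing it. The factorizations C = prod(c_i) and S = prod(s_i) anticipate the assumption spelled out in the next paragraph; note that the paper trains the TT-cores from scratch, so decomposing an existing kernel as done here is only one way to illustrate the reshaping, and the rank cap is an illustrative choice.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-dimensional array into TT-cores by sequential truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                       # cap the TT-rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_conv_kernel(kernel, c_factors, s_factors, max_rank):
    """Reshape an (l, l, C, S) kernel into [l*l, c_1*s_1, ..., c_d*s_d] and run TT-SVD."""
    l, _, C, S = kernel.shape
    assert np.prod(c_factors) == C and np.prod(s_factors) == S
    dims = [l * l] + [c * s for c, s in zip(c_factors, s_factors)]
    t = kernel.reshape([l * l] + list(c_factors) + list(s_factors))
    d = len(c_factors)
    # interleave the c_i and s_i axes so each (c_i, s_i) pair forms one compound mode
    perm = [0] + [a for pair in zip(range(1, d + 1), range(d + 1, 2 * d + 1))
                  for a in pair]
    t = t.transpose(perm).reshape(dims)
    return tt_svd(t, max_rank)
```

For example, a 3x3 kernel mapping 64 to 64 channels could use c_factors = s_factors = (4, 4, 4); this particular choice is illustrative, not taken from the text.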
to reshape the matrix into a tensor, we assume that its dimensions factorize: and . this assumption is not restrictive since we can always add some dummy channels filled with zeros to increase the values of and . then we can define a -dimensional tensor, where the -th dimension has the length for and for . thus we obtain a representation of the matrix as a product of tt-cores {\mathbold{g}}_1[c_1,s_1] \ldots {\mathbold{g}}_d[c_d,s_d], where and . to simplify the notation, we index the -th core with and , writing the spatial core as \widetilde{{\mathbold{g}}}_0[\ell(y-1)+x,1]. finally, substituting into the matrix form of the convolution, we obtain the decomposition of the convolution kernel as the core product {\mathbold{g}}_1[c_1,s_1] \ldots {\mathbold{g}}_d[c_d,s_d]. to summarize our pipeline starting from an input tensor (an image): the tt-convolutional layer first reshapes the input tensor into a -dimensional tensor of size ; then the layer transforms the input tensor into the output tensor of size by contracting it with the tt-cores {\mathbold{g}}_1[c_1,s_1] \ldots {\mathbold{g}}_d[c_d,s_d]. note that for a convolution ( ), the and indices vanish from the decomposition; the convolutional kernel collapses into a matrix, and the decomposition of this matrix coincides with the tensor train format for the fully-connected layer proposed in . to train a network with tt-conv layers, we treat the elements of the tt-cores as the parameters of the layer and apply stochastic gradient descent with momentum to them. to compute the necessary gradients we use the automatic differentiation implemented in tensorflow. fully-connected layers of neural networks are traditionally considered the memory bottleneck, and numerous works have focused on compressing these layers. however, several state-of-the-art neural networks are either bottlenecked by convolutional layers, or their fully-connected layers can be compressed enough to move the bottleneck to the convolutional layers. this has led to a number of works focusing on compressing and speeding up the convolutional layers. one approach to compressing a convolutional layer is based on pruning less important weights from the convolutional kernel, on restricting the possible variation of the weights (quantization), or on both. our approach is compatible with the quantization technique: one can quantize the elements of the tt-cores of the decomposition. some works also add huffman coding on top of other compression techniques, which is likewise compatible with the proposed method. another approach is to use tensor or matrix decompositions. cp-decomposition and kronecker product factorization make it possible to speed up the inference time of convolutions and compress the network as a side effect. we evaluated the compression strength of the proposed approach on the cifar-10 dataset, which has train images and test images. in all the experiments, we used stochastic gradient descent with momentum with coefficient , trained for epochs starting from a learning rate of and decreased it after every epochs. to make the experiments reproducible, we released the codebase. we used two architectures as references: the first one is dominated by the convolutions (they occupy of the network parameters), and the second one is dominated by the fully-connected layers (they occupy of the network parameters).
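to make the reshaping step concrete, the following sketch (reusing the tt_svd helper above) merges the spatial axes of a 4-dimensional kernel and splits the channel axes pairwise before decomposing; the kernel size, the factorization (c_1, c_2) x (s_1, s_2) and the rank are hypothetical and only illustrate the idea:

```python
import numpy as np

# hypothetical kernel: 3x3 spatial support, C = 8*8 input and S = 8*16 output channels
l, (c1, c2), (s1, s2) = 3, (8, 8), (8, 16)
kernel = np.random.randn(l, l, c1 * c2, s1 * s2)

# naive option: decompose the (l, l, C, S) tensor directly (works poorly, see text)
naive_cores = tt_svd(kernel, max_rank=16)

# proposed option: one compound spatial mode plus compound (c_k, s_k) channel modes
t = kernel.reshape(l * l, c1, c2, s1, s2)
t = t.transpose(0, 1, 3, 2, 4).reshape(l * l, c1 * s1, c2 * s2)
tt_cores = tt_svd(t, max_rank=16)

dense = kernel.size
compressed = sum(c.size for c in tt_cores)
print(f"dense {dense} vs tt {compressed} parameters")
```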
convolutional network. the first network has the following architecture: conv ( output channels); bn; relu; conv ( output channels); bn; relu; max-pool ( with stride ); conv ( output channels); bn; relu; conv ( output channels); bn; relu; max-pool ( with stride ); conv ( output channels); bn; relu; conv ( output channels); avg-pool ( ); fc ( ), where bn stands for batch normalization and all convolutional filters are of size . to compress the network we replace each convolutional layer except the first one (it contains less than of the network parameters) with a tt-conv layer (see sec. [ sec : tt - conv ]). for training, we initialize the tt-cores of the tt-conv layers with random noise and train the whole network from scratch. we compare the proposed tt-convolution against the naive approach of directly applying the tt-decomposition to the -dimensional convolutional kernel (see sec. [ sec : tt - conv ]). we report that at the compression level the proposed approach ( loss of accuracy) outperforms the naive baseline ( loss of accuracy); for details see tbl. [ tbl : compression]a. [table: compressing the second baseline (`conv-fc'), which is dominated by fully-connected layers. `conv-tt-fc': only the fully-connected part of the network is compressed; `tt-conv-tt-fc': fully-connected and convolutional parts are compressed.] network with convolutions and fully-connected layers. the second reference network was obtained from the first one by replacing the average pooling with two fully-connected layers of size and . to compress the second network, we replace all layers except the first and the last one (they occupy less than of the parameters) with tt-conv and tt-fc layers. to speed up convergence, we trained the network in two stages: first we replaced only the convolutional layers with tt-conv layers and trained the network; then we replaced the fully-connected layers with randomly initialized tt-fc layers and fine-tuned the whole model. to compare against , we include the results of compressing only the fully-connected layers (tbl. [ tbl : compression]b). initially, the fully-connected part was the memory bottleneck and it was more fruitful to compress it while leaving the convolutions untouched: we obtained network compression with accuracy drop by compressing only the fully-connected layers, and compression with accuracy drop by compressing both the fully-connected and convolutional layers. but after the first gains, the bottleneck moved to the convolutional part and the fully-connected layers compression capped at about network compression. at this point, by additionally factorizing the convolutions we raised the network compression up to while losing of accuracy (tbl. [ tbl : compression]b). in this paper, we proposed a tensor decomposition approach to compressing the convolutional layers of a neural network. by combining this convolutional approach with the work of for fully-connected layers, we compressed a convolutional network times. these results make a step towards embedding compressed models into smartphones, allowing them to constantly look at and listen to their surroundings.
convolutional neural networks excel in image recognition tasks , but this comes at the cost of high computational and memory complexity . to tackle this problem , developed a tensor factorization framework to compress fully - connected layers . in this paper , we focus on compressing convolutional layers . we show that while the direct application of the tensor framework to the 4-dimensional kernel of convolution does compress the layer , we can do better . we reshape the convolutional kernel into a tensor of higher order and factorize it . we combine the proposed approach with the previous work to compress both convolutional and fully - connected layers of a network and achieve network compression rate with accuracy drop on the cifar-10 dataset .
traffic congestions and the related problems such as traffic safety problems , environment pollution problems and energy crisis and so forth are significant for the national economy and the people s livelihood and commonly exist in most large cities all over the world . to uncover the traffic nature and clarify the occurrence of various phenomena in diverse road types , numerous researchers are devoted to developing traffic flow models . andsubstantial progress has been achieved in understanding the origin of many empirically observed features for roadways with mainly motorized vehicles(thereafter m - vehicle , including car , bus , truck ) or homogeneous traffic , primarily reflecting the traffic condition in developed countries .these achievements help to utilize efficiently limited construction budget and guide traffic planning and designing , management and control .the typical example is lincoln tunnel , where the flow goes up twenty percent after control . up to now, considerable attentions have been focused on traffic flow theory to provide reasonable advices on alleviating traffic congestion .however , in developing countries , e.g. china , india , bangladesh and indonesia , m - vehicles come in increasing numbers , and simultaneously nonmotorized vehicles(thereafter nm - vehicle , including bicycle , three - wheeler , motorcycle ) are still prevalent for most short - distance trips due to low income levels or convenient parking .thus , m - vehicles and nm - vehicles always blend on roads without isolations between motorized lanes(m - lane ) and non - motorized lanes(nm - lane ) or intersections .the mix traffic or heterogeneous traffic with both m - vehicles and nm - vehicles will persist for further years , since some governments advocate that citizens take bicycles for a short distance instead of driving cars to release issues on lack of energy and environmental pollution .it is noted that the mix or heterogeneous traffic in the following text means traffic with the mixture of m - vehicles and nm - vehicles .the chief difference between m - vehicles and nm - vehicles is that behaviors of m - vehicles are lane - based , while nm - vehicles do not follow each other within lanes but move in both longitudinal and lateral direction .the prominent characters of nm - vehicles are much flexible , low - speed and unsubstantial . when two kinds of vehicles mix somewhere , m - vehicles should concede nm - vehicles to guarantee the security of drivers of nm - vehicles . apparently , the mix traffic flow would be much more complicated than the homogeneous flow . in motorized traffic flow theory , there are mainly two kinds of microscopic traffic models , cellular automaton(ca ) models and car - following(cf ) models .it is reported that few devotes have been done on expanding these models into investigating the problem of the mixed traffic flow .inhomogeneous ca models based on non - identical particle size are presented to characterize the behaviors of vehicular movements in a mixed traffic environments with various motorized vehicles .the model applicable to the cases of car - bicycle following are investigated by faghri .oketch incorporated car - following rules and lateral movement to model mixed - traffic flow .wu and dai et al . introduced a ca model for mix traffic flow with m - vehicles and motorcycles . however , either ca models or cf models can not be suitable to exhibit both lane - based behaviors of m - vehicles and non - lane - based bahaviors of nm - vehicles . 
cho and wu proposed a model of motorcycles with longitudinal and lateral movement , and pointed out that this model could be the basis of bicycle or pedestrian flow model.obviously , it is inappropriate to directly extend motorized traffic flow theory into mixed traffic systems , because complicate interferences between m - vehicles and nm - vehicles can not be rightly described in the models for m - vehicles .it is a long - standing tradition to neglect the mix traffic mode that represents the status of transportation in developing countries . to get deep insights into the mixed traffic flow ,it is important to develop appropriate models describing the general feature of mix traffic flow and disclose the basic discipline , furthermore to enhance transporting efficiency in mix traffic systems and provide the firm infrastructure for the sustainable growth of national economics . in the previous studies ,many works about mix traffic flow models were done on extending only one kind of motorized traffic models into describing the characters of heterogeneous vehicles .we think it is more reasonable for mix traffic flow models to integrate models for nonmotorized transportation modes with models for motorized modes .two key problems should be solved .one is how to choose suitably models that may be comparable , the other is how to establish the relationship among different types of models .as for the former , ca models may be good choices for m - vehices , owing to the relatively simple rules in expressions and better descriptions of most realistic traffic phenomena . at the same time , multi - value ca ( mca ) models developed by nishinari and takahashi can depict the multi - lane traffic without explicitly considering the lane - changing rule , and can be used to describe the non - motorized traffic flow .ca and mca models are similar in the following two aspects .both two models are discrete in the time and space , and states of vehicles are updated related rules in forwards motion .so the two models can be perfectly comparable . to solve the latter , identical dimensions of sites are used in different kind of ca models . in this paper ,the aim is to establish a novel approach for modelling mix traffic flow based on the combination of the ca model for motorized traffic flow and the mca model for non - motorized one .so it is referred as the combined ca ( cca ) model.the model is applied to simulate a special mixed system , where the bus stop inserted into the nm - lane , and nm - vehicles and m - vehicles mix near the stop .the simulation results indicate that the cca model can not only correctly character both nonmotorized and motorized transportation modes but also properly display their interactions .thus , it is reasonable to depict the chief properties of mix - traffic flow . 
in this paper, we choose the typical nasch cellular automaton (nca) model as the ca model and the burgers cellular automaton (bca) model as the mca model. furthermore, other improved ca and mca models, or other suitable traffic flow models, can be used in the proposed approach. the approach can also be generalized to other cases of mixed traffic systems, such as intersections or roads without isolation between the motorized lane (m-lane) and the non-motorized lane (nm-lane). the remaining parts of the paper are organized as follows. the mixed traffic system is introduced and the cca model is presented in section 2. section 3 presents the simulation results and the discussion. finally, the summary and further studies are addressed. the basic idea of the approach for modelling mixed traffic flow is to unify ca models for m-vehicles with mca models for nm-vehicles. the main issues are to pick models for the two transportation modes and to connect them so as to reflect the interactions between the two kinds of vehicles. since the ca and the mca model can reproduce the basic phenomena of m-vehicles and nm-vehicles respectively and work in a similar manner, it is expected that the two models can be combined well and that their combination can exhibit the characteristics of both kinds of vehicles. special lane-changing rules between m-vehicles and nm-vehicles are designed to build the connections between the two kinds of models and the interactions among vehicles. we then present a simple model to characterize the traffic flow with a mixture of motorized and non-motorized vehicles. the new model, which combines the nca and the bca model, is named the combined ca (cca) model. bus stops are essential infrastructures for public transport systems. in developing countries, most bus stops have no special stop bay and are set on nm-lanes. thus, near these bus stops, buses occupy the nm-lane and block many nm-vehicles. according to the rules for nm-vehicles in china, nm-vehicles are only permitted to run on nm-lanes; however, nm-vehicles can use the neighboring m-lane, provided safety is guaranteed, when nm-vehicles on nm-lanes are blocked by hindrances. thus some nm-vehicles would move into the adjacent m-lane if buses dwell at a stop on the nm-lane. the case mentioned above is a typical example of mixed traffic flow. here, the mixed traffic system near such bus stops will be investigated. consider the mixed traffic system with two lanes, a nm-lane and a m-lane. the traffic system is sketched in fig. 1. the road configuration with two lanes is split into five sections, sections a, b, c, d and e.
sections a and e are the entrance and exit regions of the road, respectively. section c on the nm-lane is the bus stop. sections b and d on the nm-lane are the upstream and downstream parts of the bus stop, respectively. each lane is divided into l sites of identical size, which are named nca sites for m-vehicles and bca sites for nm-vehicles, respectively. the uniform dimension of the different sites simplifies the computation. assume that each bca site can hold at most m nm-vehicles, and that each nca site may be either empty or occupied by one m-vehicle. the movement of vehicles includes forward motion and lane-changing motion. m-vehicles in nca sites move forward according to the evolution rules of the nca model, and nm-vehicles in bca sites according to the rules of the bca model. the speeds of all vehicles are integer values. the maximal speed of m-vehicles (nm-vehicles) is ( ). in the following, variables with the superscript ( ) denote those of m-vehicles (nm-vehicles). the system contains two types of m-vehicles and one type of nm-vehicles. the m-vehicles comprise cars with one-site length and buses with two-site length. the mixing probability stands for the proportion of buses among the m-vehicles. all buses must halt at the bus stop and are called stopping buses. denotes the dwell time of stopping buses at the stop. after the dwelling procedure, a stopping bus is regarded as a non-stop bus. the nca model is employed to control the forward motion of m-vehicles. at each discrete time step, the state of each vehicle is updated by the following rules: 1. acceleration, ; 2. deceleration, ; 3. randomization, with probability ; 4. motion, . here, and are the head position and velocity of m-vehicle in time step , is the number of empty sites between vehicle and its nearest preceding site occupied by vehicles, and is the randomization probability in time step . for simplicity, only the deterministic nca model with is used in the following simulations. if vehicle is the nearest vehicle behind the bus stop and a stopping bus, the gap is computed as , where is the rightmost position of section c, namely the end of the bus stop. for a stopping bus at the bus stop, if its dwelling time is less than , then it continues to halt at the stop and the dwelling time is updated as ; otherwise, the bus becomes a non-stop bus. in the bca model, the lane-changing rule is neglected. for simplicity the maximal speed is set to ; of course, other values of can also be considered, at the cost of a more complex computation. the number of vehicles in each site evolves as follows, where represents the number of nm-vehicles at site and time , . if the site in front of the current site is occupied by m-vehicles at time , then . only nm-vehicles and buses near the bus stop are permitted to change lanes. buses in sections b and c change lanes asymmetrically.
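a minimal python sketch of the two forward-motion updates just described (single lane, unit-length vehicles, no bus stop and no lane changes); the array layout and parameter names are our own, and the bca update below is the common form of the burgers-ca rule with a maximal speed of one site per step:

```python
import numpy as np

def nasch_step(pos, vel, v_max, p_slow, rng):
    """One NaSch update for m-vehicles on nca sites; positions sorted ascending."""
    gap = np.empty_like(pos)
    gap[:-1] = pos[1:] - pos[:-1] - 1        # empty sites to the vehicle ahead
    gap[-1] = v_max                          # the leader sees a free road
    vel = np.minimum(vel + 1, v_max)         # 1. acceleration
    vel = np.minimum(vel, gap)               # 2. deceleration
    vel = np.maximum(vel - (rng.random(len(vel)) < p_slow), 0)   # 3. randomization
    return pos + vel, vel                    # 4. motion

def bca_step(u, m_cap):
    """One parallel update of the occupancies u of the bca sites (max m_cap each):
    as many nm-vehicles advance as the free space in the next site allows."""
    inflow = np.minimum(u[:-1], m_cap - u[1:])   # vehicles entering site j+1
    u_new = u.copy()
    u_new[:-1] -= inflow
    u_new[1:] += inflow
    return u_new

rng = np.random.default_rng(0)
pos, vel = np.array([0, 3, 7, 12]), np.array([1, 1, 2, 3])
pos, vel = nasch_step(pos, vel, v_max=3, p_slow=0.0, rng=rng)   # deterministic case
occupancy = bca_step(np.array([3, 6, 0, 2, 5]), m_cap=6)
```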
for simplicity ,the lane - changing rules similar to those for off - ramp traffic systems in ref.[12 ] are used in this paper .this is because the lane - changing behaviors of vehicles on the main road , which leave the main road and enter the off - ramp , are similar to those of buses entering and leaving the bus stop .the drivers of buses are willing to run on the nm - lane to stop conveniently when they are close to the bus stop .these buses will change from the left lane to the right lane as long as conditions on the right lane are not worse than those on the left lane .namely , if the following condition is satisfied , \ \mbox { and } \ d_{j , back}>v_{ob},\end{aligned}\ ] ] the stopping bus will change from the m - lane to the nm - lane with the probability in sections b and c. represents the number of empty sites between vehicle and its nearest preceding ( back ) unempty site on the destination lane at time . represents the velocity of its back vehicle on the destination lane , and when its nearest following vehicles are nm - vehicles .condition means that there is no gap to move forward on both lanes in the next time step ; condition means that the road situation on the present lane is not much better than that on its neighbor .if a stopping bus can not change to the destination lane , until it approaches , it would stop to wait for the change chance ( e.g. it will change the lane as soon as the corresponding position on its right - side lane is empty ) . in sections b and c , stopping buses are prohibited changing from the nm - lane to the m - lane .the bus that has finished the dwelling procedure will become a non - stop bus .the same lane - changing rules in eq.(2 ) are used for lane - changes of non - stop buses from the nm - lane to the m - lane in sections c and d. non - stop buses in sections c and d are forbidden changing to the nm - lane .if non - stop buses on the nm - lane still can not change to the m - lane , when it reaches the rightmost position of section d , it would stop to wait for the change chance mentioned above .cars are forbidden running on the nm - lane . in sectionsb and c , nm - vehicles in the bca site may change from the nm - lane to the m - lane under the hindrance of the preceding m - vehicles , if the following conditions are fulfilled , all nm - vehicles in the current bca site change to the m - lane with the probability . is a safety distance to avoid crash , and is set to the maximal velocity of its back vehicle on the destination lane .under the situation that some nm - vehicles are running on the m - lane just in front of the current bca site , the possibility of the lane - changing behaviors of present nm - vehicles in site from the nm - lane to the m - lane may be increased , as long as the corresponding site on the m - lane still has space to hold nm - vehicles . because people would act in conformity with the majority .namely , if the following condition are met then no more than nm - vehicles in bca site can change from the nm - lane to the m - lane with the probability . represents the number of nm - vehicles in the corresponding site of the destination lane at time .generally , .this means that nm - vehicles would prefer to change to the m - lane in this case than other cases . the same changing rules in eq.(3 - 4 ) are used for lane - changes from the m - lane to the nm - lane in sections c and d. 
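the lane-changing tests above combine an incentive criterion with a safety criterion; since the precise inequalities live in the referenced equations, the sketch below (all names are placeholders of our own) only mirrors that structure rather than reproducing eqs. (2)-(4):

```python
def may_change_lane(gap_ahead_own, gap_ahead_dest, gap_back_dest, v_back_dest,
                    room_in_dest_site=True):
    """Generic incentive-plus-safety test behind the asymmetric lane changes."""
    blocked = gap_ahead_own == 0                    # cannot advance on the own lane
    not_worse = gap_ahead_dest >= gap_ahead_own     # destination lane is not worse
    safe = gap_back_dest > v_back_dest              # follower keeps a safe distance
    return blocked and not_worse and safe and room_in_dest_site

# example: blocked ahead, two free sites on the other lane, safe gap behind
ok = may_change_lane(0, 2, 4, 3)
```

in the actual model the decision is additionally taken only with a given probability, and the special cases near the end of the stop (waiting for a change chance) override this test.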
if nm-vehicles cannot change from the m-lane to the nm-lane, then when they approach , all or part of them will change to the nm-lane as soon as the corresponding position on the destination lane is not completely occupied. in addition, to avoid complete congestion around the bus stop, the nm-vehicles of the site on the nm-lane will give way to stopping buses with probability . the simulations are carried out under open boundary conditions. in each time step, when the update of the m-vehicles on the road is finished, we check the position of the last m-vehicle at the entrance of the m-lane. if , an m-vehicle with velocity is injected with the inflow rate at site . on the nm-lane, if the first site is not fully filled with nm-vehicles, nm-vehicles are inserted with probability (inflow rate ) at the first site. rounds of circulation are performed in each time step; in each round, an nm-vehicle is added to the first site with probability , if there is space on the first site. the leading vehicle on each lane leaves the system at and its following vehicle becomes the new leader. in this section, the characteristics of mixed traffic flow are discussed in the traffic system described above. let us consider a road with sites; the lengths of sections a, b, c, d and e are set to , , , and sites, respectively. each site corresponds to 7.5 , and each time step corresponds to 1 . the model parameters are set as follows: , , , , , where the superscript ( ) denotes the parameter of the car (bus). according to the transit co-operative research program (tcrp) report 19 (1996), the average peak-period dwell time exceeds per bus, so we set . the mixing probability is . figs. 2-3 display the relationships between the flow and the inflow rate in the cca model for m-vehicles and nm-vehicles in the case of , respectively. and represent the flow of the m-vehicles and of the nm-vehicles, respectively. five virtual detectors are fixed at sites , , , and of the road, where the numbers of nm-vehicles and m-vehicles passing through are recorded. is the average number of m-vehicles passing through the five virtual detectors in each time step, and is the average number of passing nm-vehicles divided by . the first 50,000 time steps are discarded to avoid transient behavior, and the flow is averaged over 100,000 time steps. from figs. 2(a) and 3(a), we find that a critical inflow rate ( ) (marked in fig. 2(a) (fig. 3(a)) only for ( )) divides the flow into two regions, the free-flow one and the saturated-flow one. 1. in the region ( ), the flow of m-vehicles (nm-vehicles) is free and ( ) depends only on its own inflow rate ( ). 2. in the region ( ), the flow of m-vehicles (nm-vehicles) is saturated. the flow ( ) is independent of its own inflow rate ( ) and reaches its saturation value ( ). however, with the increase of ( ), both the flow and the critical value ( and ) decrease until they reach a minimum. this can also be clearly observed in fig. 2(b) (fig. 3(b)), where the flow versus its own inflow rate ( ) for different values of ( ) is displayed.
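the critical inflow rate can be read off numerically from a measured flow curve; a small helper of the kind sketched below (our own construction, with a tolerance parameter that is not part of the paper) locates the point where the flow stops growing:

```python
import numpy as np

def critical_inflow(alphas, flows, tol=1e-3):
    """Smallest inflow rate whose measured flow is already within `tol`
    of the saturation value (taken as the flow at the largest inflow rate)."""
    q_sat = flows[-1]
    first = np.argmax(flows >= q_sat - tol)   # first index reaching saturation
    return alphas[first], q_sat

alphas = np.linspace(0.05, 1.0, 20)
flows = np.minimum(0.9 * alphas, 0.35)        # toy free-then-saturated flow curve
alpha_crit, q_sat = critical_inflow(alphas, flows)
```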
the saturated value and the critical value ( and ) decline from and ( and ) at ( ) to and ( and ) at ( ) .the drop ratios of and are about percent and percent .this suggests that the mixture of the nm - vehicles and m - vehicles has a negative effect on the saturated flow of two flows which descends in a wide range .thus , in the proposed cca model , the phase transition from free flow to the saturation for both two flows is observed .and the flow ( ) relies on not only itself inflow rate ( ) , but also ( ) . the mixture of the nm - vehicles and m - vehicles in the traffic system results in the drop of the saturated flows .therefore , the model can exhibit the interactions between nm - vehicles and m - vehicles in the mixed traffic system .it is interesting that the collective effect of the nm - vehicles and m - vehicles only appears when or surpasses its critical value .according to the two critical values , the phase diagram in space presented in fig .4(a ) is classified into four regions , where the flow and are related to or or both of them . in regions and , the m - flow is free , while it becomes saturated in regions and . in regions and , the nm - flow is in the state of free flow , while reaches its saturation in regions and .( ) is the boundary of regions ( ) and ( ) , which corresponds to the critical point .it can be found that decreases firstly and then maintains at a constant with the increase of the inflow rate of the nm - vehicles flow .this indicates that the mutual effect between the two flows grows gradually and becomes saturated at the cross point .the same can be observed in the curve of the critical value ( and ) . to get a deep insight into these regions ,space - time plots are depicted in fig .the left correspond to those of the m - lane , and the right correspond to the nm - lane .blue points and green points represent cars and buses on nca sites .red points , black points and magenta points denote bca sites with 1 - 2 , 3 - 4 , and 5 - 6 nm - vehicles . here, no time - space plots in regions is shown to decrease the size of the manuscript .the plots can be provided if readers send an email to me . 1 . in region ,the traffic flow on both two lanes is free flow , the and the depend on only itself inflow rate .region , where the only varies with the , and the gets saturated , the saturation only depends on the .the is very low , thus m - vehicles are sparse on the road .although the lane changing behavior of nm - vehicles from the nm - lane to the m - lane occurs and a short waiting queue forms upstream these nm - vehicles , the queue in the m - lane disappears within several time - steps .thus , the flow of m - vehicles wo nt be perturbed by these nm - vehicles . as the increases , nm - vehicles on the road become denser .most nca sites in the nm - lane are fully filled with nm - vehicles .so buses halting at the stop hinder the forward motion of nm - vehicles , and a long waiting - queue stretches far from the position of buses , leading to the reduce of .3 . region , where the flux is only dependant on the , and the reaches its saturated value , and the saturation flow decreases with .the situation is just contrary to that of region .some nm - vehicles in the nm - lane accumulate behind the buses , and dissolve soon after the buses start to run due to low . but the lane - changing behaviors of these nm - vehicles strongly interrupt the movement of m - vehicles in the m - lane , and cause the reduce of .region , where both and remain constant . 
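to make the reading of the phase diagram concrete, a small helper of our own (argument names and the descriptive return strings are placeholders; the critical values would in practice come from the measured flow curves and depend on the other flow's inflow rate) could look like this:

```python
def classify_point(alpha_m, alpha_nm, crit_m, crit_nm):
    """Assign a point of the (alpha_m, alpha_nm) plane to one of the four regions
    of the phase diagram described above."""
    m_saturated = alpha_m >= crit_m
    nm_saturated = alpha_nm >= crit_nm
    if not m_saturated and not nm_saturated:
        return "both flows free"
    if nm_saturated and not m_saturated:
        return "nm-flow saturated, m-flow free"
    if m_saturated and not nm_saturated:
        return "m-flow saturated, nm-flow free"
    return "both flows saturated"
```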
to measure the total flow of the mixed traffic system, a total flow ratio is defined as follows, where ( ) is the saturation flow at ( ). fig. 5(a) shows versus and . it can be seen that depends on both inflow rates, and the corresponding phase diagram in space (see fig. 4(b)) is also divided into four regions similar to fig. 4(a). 1. in region , both flows are free, thus increases linearly with and . 2. in region , since the nm-flow is saturated, varies only with . furthermore, region can be separated into two parts and , where is independent of . in , increases with , while it decreases with in . for convenience of analysis and comparison, and the corresponding lane-changing times of nm-vehicles with are shown in fig. 5(b-c) to investigate the features of the mixed flows. it can be found that the lane-changing times of the nm-vehicles increase with in ; most nm-vehicles can pass the bus stop by utilizing the m-lane and will not hinder m-vehicles because of the large headways on the m-lane. therefore, increases with as the lane-changing times of the nm-vehicles increase. with a further increase of , the gap between neighboring vehicles gets smaller and fewer nm-vehicles can succeed in changing to the m-lane, thus drops with in . 3. in region , the m-flow is saturated, thus depends only on . also, region in fig. 4(a) can be separated into two parts and , where is independent of . in , decreases with , while it increases with in . fig. 5(d-e) shows and the corresponding lane-changing times of nm-vehicles in region . since the flow of m-vehicles reaches saturation, the increase of nm-vehicles changing to the m-lane causes a large drop of , which exceeds the increase of with ; thus decreases with in . with a further increase of , the decrease of the lane-changing times of nm-vehicles makes the drop of smaller than the increment of , so starts to increase with in . 4. in region , both flows reach their saturation values, thus is independent of both inflow rates and stays constant. from the discussion above, it can be concluded that lane-changing behaviors of nm-vehicles are helpful to the total flow of the traffic system when the flow of m-vehicles is free; in particular, at the boundary between and , approaches a local maximum. on the contrary, lane-changing behaviors of nm-vehicles are harmful to the total flow when the flow of m-vehicles is saturated; at the boundary between and , approaches a local minimum. a combined cellular automaton (cca) model is presented to describe the mixed traffic system composed of m-vehicles and nm-vehicles. the cca model is based on the nasch ca (nca) model for m-vehicles and the burgers ca (bca) model for nm-vehicles. in the cca model, there are two types of sites with identical size, nca sites and bca sites. an nca site is defined as a site occupied by an m-vehicle and updates according to the rules of the nca model. a bca site contains a number of nm-vehicles, and its state evolves according to the rules of the bca model. thus, the new model is convenient to implement on a computer. the model is applied to the mixed traffic system near a bus stop without a special stop bay, and special lane-changing rules are employed. firstly, for the nm-vehicle (m-vehicle) flow, the phase transition from free flow to saturated flow can be observed at the critical value ( ).
according to the two critical values , the phase diagram in ( , ) spaceis categorized into four regions , including region (where both two flows are free ) , region ( where the flow of nm - vehicles is saturated , and that of m - vehicles is free ) , region ( where the flow of m - vehicles is saturated , and that of nm - vehicles is free ) , and region (where both flows reach the saturations ) . secondly , to measure the total flow of the mixed traffic system , a total flow ratio is introduced . according to the characteristics of , region and in the space of ( , )can be separated into two parts further , where the has a increasing or decreasing tendency due to the mixture of two flows , respectively . from these, it can be inferred that the proposed cca model could reflect feature of mixed traffic flow very well , and has great potentials on the practical application .it is noted that to improve the proposed model for mix traffic system and validating experimentally it , empirical data investigation and related calibration are in progress .the work is only the first step towards understanding characters of mixed traffic flow .there are numerous aspects that require further investigation , such as how to apply the proposed method in intersections and other traffic conditions to model mix traffic flow , how to consider the pin - effects of nm - vehicles and differences among various types of vehicles , how to improve negative effects induced by mixture of nm - vehicles and m - vehicles etc .we are planning to address these issues in our future work .this paper is financially supported by 973 program ( 2006cb705500 ) , project ( 70631001 and 70501004 ) of the national natural science foundation of china , and program for changjiang scholars and innovative research team in university(irt0605 ) . * figures * + * fig .1 * the sketch of the road in the mixed traffic system . +* fig . 2 * ( a ) the variation of the flow of m - vehicles with the entering probability and .( b ) the flow varies as the at fixed .the varies from to with identical interval in the -axis . + * fig .3 * ( a ) the variation of the flow of nm - vehicles with the entering probability and .( b ) the flow varies as the at fixed .the varies from to with identical interval in the -axis . +4 * ( a ) the phase diagram in the mixed traffic system .( b ) the redrawn phase diagram obtained by the variation of the . . + * fig .5 * the total flow rate (a ) versus the inflow rate and , ( b - c ) the and the corresponding lane changing times of nm - vehicles versus at in region ,(d - e ) the and the corresponding lane changing times of nm - vehicles at in region . .
in this study, we provide a novel approach for modelling mixed traffic flow. the basic idea is to integrate models for nonmotorized vehicles (nm-vehicles) with models for motorized vehicles (m-vehicles). based on this idea, a model for mixed traffic flow is realized in the following two steps. in the first step, the models that can be integrated are chosen: the well-known nasch cellular automaton (nca) model for m-vehicles and the burgers cellular automaton (bca) model for nm-vehicles are used in this paper, since the two models are similar and comparable. in the second step, coupling rules between m-vehicles and nm-vehicles are studied to represent their interaction; special lane-changing rules are designed for the coupling process. the proposed model is named the combined cellular automaton (cca) model. the model is applied to a typical mixed traffic scenario, where a bus stop without a special stop bay is set on the nonmotorized lane. the simulation results show that the model can describe both the interaction between the flows of nm-vehicles and m-vehicles and their characteristics.
brownian motion is a classical continuous - time model describing diffusion of particles in some fluid . besides physics , it has found many real - world applications , like in ecology , medicine , finance and many other fields .but in spite of many obvious advantages , the standard brownian diffusion can not model the real time series with apparent constant time periods ( called also trapping events ) , which are often observed in datasets recorded within various fields . therefore a rapid evolution of alternative models is observable in many areas of interest .especially anomalous diffusion models have found many practical applications .they were used in variety of physical systems , including charge carrier transport in amorphous semiconductors , transport in micelles , intracellular transport or motion of mrna molecules inside e. coli cells .the constant time periods can be also observed in processes corresponding to stock prices or interest rates , so models based on the subordinated processes might be also useful in modeling financial time series , .one of the most important issues that arises in the analysis of the subordinated processes is the description of waiting - times that correspond to the periods of constant values . finding a proper subordinator distribution allows to conclude on the properties of the whole process .the most popular subordinator distribution is the inverse , see for instance , but recent developments in this area indicate that another nonnegative infinitely divisible distribution can be also used to model the observed waiting - times , . the family of such distributions contains , besides one - sided lvy stable , also pareto , gamma , mittag - leffler or tempered stable . in this paperwe analyze the subordinated brownian motion with three types of the inverse subordinator distribution , namely , tempered stable and gamma .we show the differences between the distributions and present the main properties of the analyzed subordinated processes mainly expressed in the language of moments .moreover , we investigate the asymptotics of the mean squared displacement and show that in the gamma case it is linear for large , while for small it exhibits non - power law behavior .we start with introducing a general definition of the considered processes . the subordinated brownianmotion is defined as : where is the brownian motion and is an inverse subordinator of , i.e. : for increasing lvy process with the laplace transform given by : the function is called the lvy exponent and can be written in the following form here , is the drift parameter . if for simplicity , following , we assume , then is an appropriate lvy measure .moreover , and are assumed to be independent .the probability density function ( pdf ) of the process is characterized by the generalized fokker - planck equation : where is the dirac delta in point and - an integro - differential operator defined as : the function is called the memory kernel and is defined via its laplace transform , : classical anomalous diffusion type model given by the subordinated brownian motion ( [ sub1 ] ) defines subordinator as an inverse -stable process , see for instance .it implies that the lengths of the constant time periods are -stable distributed .however , in some applications also different distributions describing the lengths of the constant time periods might be useful . 
in this paper, besides the -stable case, we consider two other cases of subordinator distribution, namely tempered stable and gamma. we start with a brief review of the main properties of the considered distributions. since there is no closed form for the probability density function of the -stable distribution, it is usually more convenient to define it by its characteristic function, given by \[\phi(t)=\begin{cases}\exp\left\{-\sigma^{\alpha}|t|^{\alpha}\left[1-i\beta\,\mbox{sign}(t)\tan\frac{\pi\alpha}{2}\right]+i\mu t\right\} & \mbox{if } \alpha\neq 1,\\ \exp\left\{-\sigma|t|\left[1+i\beta\,\mbox{sign}(t)\frac{2}{\pi}\ln|t|\right]+i\mu t\right\} & \mbox{if } \alpha=1,\end{cases}\] where is the skewness parameter, is the scale parameter and is the location parameter. note that if and , the stable distribution becomes totally (right) skewed. since the subordinator should be a non-decreasing process, in the following we assume , and ; moreover, for simplicity we assume . recall that the family has two important properties. first, a sum of two independent -stable random variables with the same parameter is again -stable distributed. second, the tails of the stable distribution are governed by power-law behavior. the positive tempered stable random variable with parameters and is defined through its laplace transform. in this definition, is the tempering parameter, while and are the stability and scale parameters, respectively. observe that if , then the random variable becomes simply with the scale parameter . the probability density function (pdf) of the tempered stable distribution with parameters and can be expressed in the following form: where and is the pdf of the -stable distribution with stability index , scale parameter , skewness and shift . because all moments of the tempered stable distribution are finite, it is attractive in many practical applications, for instance in finance, biology and physics, e.g. anomalous diffusion, relaxation phenomena, turbulence or plasma physics, see also . the pdf of the gamma distribution is given by , where is the gamma function defined as . it is interesting to note that for the gamma distribution becomes the exponential one. moreover, the gamma distribution is infinitely divisible: for we have , provided that are independent. in figure [ pdf ] we plot sample probability density functions, as well as the tails of the considered distributions. the parameters of the stable and tempered stable distributions are equal to . the parameters of the gamma distribution are chosen so that its mean is equal to the mean of the tempered stable distribution.
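the three subordinator laws can be sampled with standard tools; the sketch below uses scipy's stable generator (totally skewed to the right), a standard exponential-tilting rejection step for the tempered stable law (accept a positive stable proposal x with probability exp(-lambda*x)), and the gamma sampler. scale conventions differ between parameterizations, so all numerical constants here are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, lam, a, c = 0.7, 1.0, 1.0, 1.0        # illustrative parameters only

# totally right-skewed, positive stable variable (beta = 1, alpha < 1)
stable = stats.levy_stable.rvs(alpha, 1.0, loc=0, scale=1.0,
                               size=5000, random_state=rng)

def tempered_stable(n):
    """Exponential tilting: propose one-sided stable draws, accept w.p. exp(-lam*x)."""
    out = np.empty(0)
    while out.size < n:
        x = stats.levy_stable.rvs(alpha, 1.0, scale=1.0, size=2 * n, random_state=rng)
        keep = (x > 0) & (rng.random(x.size) < np.exp(-lam * x))
        out = np.concatenate([out, x[keep]])
    return out[:n]

tempered = tempered_stable(5000)
gamma_draws = stats.gamma.rvs(a, scale=1.0 / c, size=5000, random_state=rng)
```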
[figure: probability density functions of the considered subordinator distributions, s( ) -stable, ts( ) tempered stable and g(a, c) gamma; the right panels display the right tails in double-logarithmic scale. the parameters of the stable and tempered stable distributions are equal to , and the parameters of the gamma distribution are chosen so that its mean equals that of the tempered stable distribution.] in this section we examine the subordinated brownian motion defined in ( [ sub1 ] ) with three types of subordinator distribution, namely -stable, tempered stable and gamma. the -stable subordinator is a non-decreasing lvy process with the lvy measure and the following laplace transform: . therefore the function that appears in ( [ psi ] ) takes the form , which implies the form of the memory kernel, namely . the first two moments of the subordinated brownian motion defined in ( [ sub1 ] ) in the -stable case are given by , while the covariance function takes the form . the tempered stable subordinator is a lvy process with tempered stable increments (i.e. with the lvy measure ) and the laplace transform given by , where , . let us point out that in the case the operator is proportional to the fractional riemann-liouville derivative, therefore ( [ pdf ] ) tends to the fractional fokker-planck equation (see the -stable case above), . the basic properties and the simulation procedure of the process defined in ( [ sub1 ] ) in the tempered stable case can be found in . observe that the memory kernel in the considered case can be calculated on the basis of the following equation: . as a consequence, the memory kernel takes the form , where is a generalized mittag-leffler function, . since for and , the generalized mittag-leffler function for and can be expressed as (see theorem 2.3 in ): , where . the memory kernel is therefore given by the following formula: , and we have . the above limiting behavior is a simple consequence of the fact that for large the generalized mittag-leffler function can be written as . knowing the form of the memory kernel we can calculate the basic statistics of the process, such as moments and the autocovariance function (see theorem 1 in ), namely and . however, in the case of the tempered stable distribution such derivations require numerical approximations. the gamma subordinator is a lvy process with independent gamma-distributed increments, i.e. with the lvy measure , and the laplace transform given by . observe that in this case also the one-dimensional density of the process is given in closed form, namely . in this case the lvy exponent is given by , which implies that the memory kernel can be expressed as , where is the inverse laplace transform of the function .
in order to find a formula for the memory kernel , we use the following relation ( being a consequence of the proposition 1 in ) : on the other hand , we have where is the inverse subordinator .moreover , using the relation between subordinator and its inverse and the fact that for each the random variable is positive , we obtain therefore , in case of the gamma distribution , we get where is an incomplete gamma function defined as : finally , from ( [ eqn : var : m ] ) we have again , the basic statistics of the process can be calculated .observe that from ( [ eqn : y2:s ] ) and ( [ eqn : s ] ) we have and the main characteristics of the subordinated process defined in ( [ sub1 ] ) for the three considered cases of subordinator distribution are summarized in table [ tab : char ] ..characteristics of the subordinated process defined in ( [ sub1 ] ) for the three cases of the subordinator distribution . [cols="^,^,^,^ " , ]sample trajectories of the process defined in ( [ sub1 ] ) are plotted in figure [ traj ] .the chosen parameters correspond to the middle panels of figure [ pdf ] .observe visible differences in the character of constant time periods .with the three considered subordinator distributions : s( ) -stable , ts( ) tempered stable and g(a , c ) gamma .the parameters in the stable and tempered stable distributions are equal to .the parameters of the gamma distribution are chosen so it s mean is equal to the mean of the tempered stable distribution.,title="fig:",width=453][traj ] now , we focus on one of the most popular characteristic of the recorded process trajectories in experimental analysis , namely mean squared displacement . recall that the ensemble averaged mean squared displacement is defined as : where is the probability of finding a particle in a infinitesimal interval at time . on the other hand , the time averaged mean squared displacement is given by : where is the length of the analyzed time series . for a standard brownian motion the mean squared displacement ( msd ) scales as no matter if it is calculated as the ensemble or the time average .however , the behavior of the ensemble average changes under subordination scenario . in the -stable case ensemble averaged msd scales as , while in the tempered stable case as for and as for . it can be shown ( for a detailed derivation see appendix ) that in the gamma case the ensemble average scales as in figure [ msd ] we plot the mean squared displacement calculated as the ensemble average over 1000 simulated trajectories in the three considered cases .the chosen parameters are the same as on the corresponding panels of figure [ pdf ] .moreover , we fit a power law function to each of the obtained curves , except for small in the gamma case . 
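for reference, the simulation scheme behind such figures can be sketched as follows: build the subordinator on a fine grid, invert it, evaluate a brownian motion at the inverse times, and then average. the gamma case is used because its increments are exact; all grid sizes and parameters below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def inverse_gamma_subordinator(t_grid, a, c, dtau, n_tau):
    """S_t = inf{tau : T(tau) > t} for a gamma subordinator T on a tau-grid."""
    T = np.concatenate(([0.0], np.cumsum(rng.gamma(a * dtau, 1.0 / c, size=n_tau))))
    tau = dtau * np.arange(n_tau + 1)
    return tau[np.searchsorted(T, t_grid, side="right")]

def subordinated_bm(S):
    """Y(t) = B(S_t): Brownian motion evaluated at the non-decreasing times S_t."""
    dS = np.diff(np.concatenate(([0.0], S)))
    return np.cumsum(np.sqrt(dS) * rng.standard_normal(len(S)))

t_grid = np.linspace(0.01, 10.0, 500)
paths = np.array([subordinated_bm(inverse_gamma_subordinator(t_grid, 1.0, 1.0,
                                                             0.01, 5000))
                  for _ in range(1000)])

msd_ensemble = np.mean(paths ** 2, axis=0)       # ensemble-averaged MSD <Y^2(t)>

def msd_time_average(path, dt, max_lag):
    """Time-averaged MSD of a single trajectory for lags of 1..max_lag steps."""
    lags = np.arange(1, max_lag + 1)
    values = np.array([np.mean((path[k:] - path[:-k]) ** 2) for k in lags])
    return lags * dt, values

lags, tamsd = msd_time_average(paths[0], t_grid[1] - t_grid[0], 100)
```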
in the -stable case the power lawis fitted for the whole range of , while in the tempered stable case separately for small and large and in the gamma case only for large .observe that the obtained values are close to the theoretical power laws .finally , we calculate the time averaged mean squared displacement .the obtained values calculated as the time average from a simulated trajectory of each of the three considered processes is plotted in figure [ msd_timeavg ] .observe that in all cases the obtained msd behaves as .with the three considered subordinator distributions : s( ) -stable , ts( ) tempered stable and g(a , c ) gamma .the parameters in the stable and tempered stable distributions are equal to .the parameters of the gamma distribution are chosen so it s mean is equal to the mean of the tempered stable distribution .the fitted power law functions are plotted with the corresponding gray lines.,title="fig:",width=453][msd ] with the three considered subordinator distributions : s( ) -stable , ts( ) tempered stable and g(a , c ) gamma .the parameters in the stable and tempered stable distributions are equal to .the parameters of the gamma distribution are chosen so it s mean is equal to the mean of the tempered stable distribution.,title="fig:",width=453][msd_timeavg ]in this paper we have examined the anomalous diffusion models based on the subordinated brownian motion with three types of subordinators distribution : , tempered stable and gamma .the main result is related to the properties of the analyzed processes .we have pointed at the asymptotic behavior of the mean squared displacement in three considered cases and showed that in gamma case for small values of the arguments we obtain completely different behavior ( non - power ) from this observed in two other cases .in order to show the asymptotic behavior of the ensemble mean squared displacement ( msd ) in the gamma case we use the proposition 3 in , namely if is finite , then for large , where and are the subordinator and its inverse defined in ( [ salpha ] ) , respectively . in the gamma case with parameters and , therefore when we have in order to show the asymptotic behavior of the msd function for small we use its explicit form : we can use the series expansion of the incomplete gamma function : where and is the modified bessel function defined as follows : the function can be for small approximated by therefore , when , the function under the integral in ( [ msdgam ] ) behaves like : what gives thus , when we obtain now , note that the function can be approximated as : for some that are independent of and satisfy the following relation : where is the riemann zeta function .as a consequence , we have : in order to simplify the notation denote .we have finally , let us consider the asymptotic behavior of the function . integrating by parts gives the recursive relation : therefore when the , what yields the asymptotic behavior of the for small , namely : where . substituting in ( [ msdg ] )we obtain : are deeply grateful to marcin magdziarz for stimulating discussions and his valuable suggestions .+ the work of j.j was partially financed by the european union within the european social fund .+ dd brownian motion : theory , modelling and applications , r. c. earnshaw , e. m. riley ( eds . ) , nova publishers ( 2011 ) .scher , h. , montroll , e. ( 1975 ) , phys .b * 12 * , 2455 - 2477 .scher , h. , lax , m. ( 1973 ) , phys .b * 7 * , 4491 - 4502 .pfister , g. , scher , h. ( 1978 ) , adv .* 27 * , 747 - 798 .ott , a. 
, bouchaud , j.p ., langevin , d. and urbach , w. ( 1990 ) , phys .lett . * 65 * , 2201 - 2204 .caspi , a. , granek , r. , elbaum , m. ( 2000 ) , phys .lett . * 85 * , 5655 - 5658 .golding , i. , cox , e.c .( 2006 ) , phys .96 * , 098102 .janczura , j. , wyomaska , a. ( 2009 ) , acta phys .b * 40*(5 ) , 1341 - 1351 . janczura , j. , orze , s. , wyomaska , a. ( 2011 ) , physica a * 390 * , 4379 - 4387 .orze , s. , wyomaska , a. ( 2011 ) , j. stat ., * 143*(3 ) , 447 - 454 .magdziarz , m. , weron , a. ( 2007 ) , phys .e , 75 , 056702 .gajda , j. , magdziarz , m. ( 2010 ) , phys .e , 82 , 011117 .magdziarz , m.(2009 ) , j. stat . phys . * 135 * , 763 - 772 .piryantiska , a. , saichev a.i . , woyczynski w.a .( 2005 ) , physica a * 349 * , 375 - 424 .magdziarz , m. , weron k. ( 2006 ) , physica a * 367 * , 1 - 6 .sokolov , i.m ., klafter j. ( 2006 ) , phys .97 * , 140602 .baumer , b. , meerschaert , m.m .( 2010 ) , j. comp .appl . math . *233 * , 2438 - 2448 .kim , y.s . ,rachev , s.t . ,bianchi , m.l . and fabozzi , f.j ., a new tempered stable distribution and its application to finance , georg bol , svetlozar t. rachev , and reinold wuerth ( eds . ) , risk assessment : decisions in banking and finance , physika verlag , springer ( 2007 ) .kim , y.s . , chung , d.m ., rachev , s.t . and bianchi , m.l .( 2009 ) , prob .math . statist . * 29*(1 ) , 91 - 117 .hougaard , p. ( 1986 ) , biometrika * 73*(3 ) , 671 - 678 .stanislavsky , a.a ., weron , k. , weron , a. ( 2008 ) , phys .e * 78 * 051106 .dubrulle , b. , laval , j .- ph . ( 1998 ) , eur .j. b. * 4 * , 143 - 146 .jha , r. , kaw , p.k . , kulkarni , d.r ., parikh , j.c . and team , a. ( 2003 ) , phys .plasmas * 10 * , 699 - 704 .sokolov , i.m ., chechkin , a.v . and klafter , j. ( 2004 ) , physica a * 336 * , 245 - 251 .chechkin , a.v . ,gonchar , v.yu . ,klafter , j. and metzler , r. ( 2005 ) , phys .rev e. * 72 * 010101 .gorenflo , r. , loutchko , j. , luchko yu .( 2002 ) , fract .* 5 * ( 4 ) , 491 - 518 .metzler , r. , tejedor , v. , jeon , j .- h .( 2009 ) , acta phys .b * 40*(5 ) , 1315 - 1331 .lageras , a.n .( 2005 ) , j. appl .prob . , * 42 * , 1134 - 1144 .titchmarsh , e.c . ,the theory of the riemann zeta function , 2nd ed .new york : clarendon press ( 1987 ) .
subordinated processes play an important role in modeling anomalous diffusion-type behavior. in such models the observed constant time periods are described by the subordinator distribution. therefore, on the basis of the observed time series, it is possible to draw conclusions about the main properties of the subordinator. in this paper we analyze anomalous diffusion models with three types of subordinator distribution: , tempered stable and gamma. we present similarities and differences between the analyzed processes and point out their main properties (like the behavior of the moments or of the mean squared displacement).
recently, several deep multi-modal learning tasks have emerged, such as image captioning, text-conditioned image generation, object tagging, text-to-image search, and so on. for all of these tasks, achieving a semantic multi-modal representation is the most crucial part. therefore, several works on multi-modal representation learning have been proposed, and all of them require image-text pair information. their assumption is that an image-text pair has a similar meaning, so if we can embed the image and the text of a pair to nearby points of the multi-modal space, we can achieve a semantic multi-modal representation. but pair information is not always available. image and text data usually do not exist in pairs, and if they are not paired, manually pairing them is an impossible task. however, tag or category information can exist separately for image and text, does not require a paired state, and can be labeled manually for each modality. moreover, learning a multi-modal representation from image-text pair information can be a narrow approach, because the training objective focuses on pulling together the image and text of the same pair and does not care about pulling together images and texts that are semantically similar but belong to different pairs. so an image and a text can end up with dissimilar multi-modal features even though they are semantically similar. in addition, resolving every pair relation can be a bottleneck for a large training dataset. to deal with the above problems, for multi-modal representation learning we adopt a concept from ganin's work, which performs unsupervised image-to-image domain adaptation by adversarial backpropagation. they use an adversarial learning concept, inspired by gan (generative adversarial network), to achieve category-discriminative and domain-invariant features. we extend this concept to image-text multi-modal representation learning. we consider image and text data to be in a covariate shift relation: image and text data have the same semantic information or labelling function from a high-level perspective, but they have different distribution shapes. so we regard the multi-modal representation learning process as adapting the image and text distributions to the same distribution while retaining the semantic information at the same time. in contrast with previous multi-modal representation learning works, we do not exploit image-text pair information and only use category information. our focus is on achieving a category-discriminative, domain (image, text) invariant and semantically universal multi-modal representation of images and texts. with the above points of view, we perform multi-modal embedding with a category predictor and a domain classifier with a gradient reversal layer. we use the category predictor to achieve discriminative power of the multi-modal feature, and the domain classifier with the gradient reversal layer, which creates an adversarial relationship between the embedding network and the domain classifier, to achieve a domain (image, text) invariant multi-modal feature. domain invariant means that image and text have the same distribution in the multi-modal space. we show that our multi-modal feature distribution is well mixed with respect to domain, which means that the image and text multi-modal feature distributions in the multi-modal space are similar, and also well spread out, by t-sne embedding visualization.
a comparison of the classification performance of the multi - modal feature and the uni - modal ( image only , text only ) features shows that only a small amount of information is lost in the multi - modal embedding process : the multi - modal feature retains its category - discriminative power even though it is domain invariant after the embedding . our sentence - to - image search results ( figure [ fig : intro_search_ex ] ) obtained with the multi - modal feature show that it carries universal semantic information beyond the category labels , which means that the universal information extracted by word2vec and vgg - verydeep-16 is not removed during the multi - modal embedding process . in this paper we make the following contributions . first , we design a novel image - text multi - modal representation learning method based on the adversarial learning concept . second , to our knowledge , this is the first work that does not exploit image - text pair information for multi - modal representation learning . third , we verify the quality of the image - text multi - modal feature from various perspectives and with various methods . our approach is also generic , as it can easily be applied to multi - modal representation learning in other domain combinations ( e.g. sound - image , video - text ) using backpropagation only . several works on image - text multi - modal representation learning have been proposed in recent years . the specific tasks differ slightly from work to work , but their crucial common part is obtaining a semantic image - text multi - modal representation from images and text . the image and text feature extraction methods also differ , but almost all of them use image - text pair information to learn the image - text semantic relation . many previous approaches use a ranking loss ( the training objective is to minimize the distance within the same image - text pair and to maximize the distance between different image - text pairs in the multi - modal space ) for multi - modal embedding . one work uses r - cnn for the image feature and a brnn for the text feature and applies a ranking loss , and some approaches ( , ) use vgg - net for image feature extraction and a neural language model for text feature extraction , with a ranking loss or a triplet ranking loss . some other approaches use deep generative models or a dbm ( deep boltzmann machine ) for multi - modal representation learning . in these methods one modality feature is intentionally removed and then generated from the other modality feature , in order to learn the relation between the modalities . they therefore also rely on image - text pair information , and the process is complicated and not intuitive . the adversarial network concept originated with gan ( generative adversarial network ) and has shown great results for several different tasks . for example , dcgan ( deep convolutional generative adversarial network ) drastically improved the quality of generated images , and text - conditioned dcgan generates images related to a given text . beyond image generation , one approach applies the adversarial learning concept to the domain adaptation field by means of a gradient reversal layer , performing domain adaptation from a pre - trained image classification network to a semantically similar but visually different target domain ( e.g. edge images , low - resolution images ) .
for this , they set up a category predictor and a domain classifier that are trained adversarially , so that the network s features become category discriminative and domain invariant . covariate shift is a primary assumption in the domain adaptation field : the source and target domains are assumed to share the same labelling function ( the same semantic feature or information ) but to have mathematically different distribution forms , and theoretical work on domain adaptation between a source and a target domain in such a covariate shift relation exists . we assume that image and text are also in a covariate shift relation , i.e. that they carry the same semantic information ( labelling function ) but have different distribution forms . our multi - modal embedding process therefore adapts those distributions to a common one while retaining the semantic information . we now detail our proposed method and model ( figure [ fig : model ] ) , multi - modal representation learning by adversarial backpropagation , using the ms coco dataset . our network structure ( figure [ fig : model ] ) is divided into two parts : feature extraction and multi - modal representation learning . the former transforms each modality signal into a feature ; the latter embeds each feature representation into a single ( multi - modal ) space . for the representation of visual features we use vgg16 pre - trained on imagenet . to extract image features , we re - size an image and crop patches from the four corners and the center ; the 5 cropped areas are then flipped to obtain 10 patches in total . we extract fully - connected features ( fc7 ) from each patch and average them to obtain a single feature . to represent sentences we use word2vec , which embeds each word into a 300 - dimensional semantic space . in the feature extraction process , the words of a sentence are converted into word2vec vectors , each of which is 300 - dimensional . a sentence thus yields a two - dimensional feature , and we add zero padding to the bottom rows of this feature to fix its size ; since the maximum length of a sentence in the ms coco dataset is 59 words , the padded feature has 59 rows . after extracting features from each modality , the multi - modal representation learning process follows . for an image feature and a sentence feature we apply two transformations , one for images and one for sentences , to embed both features into a single common space . for embedding the image feature we use two fully connected layers with relu activation . since the sentence feature is 2 - dimensional , we apply a textcnn to it , so that it can be embedded in the same space . at the end of each feature embedding network we use batch normalization and l2 - normalization respectively , and we apply dropout to all fully connected layers . the embedding process is regulated by two components , the category predictor and the domain classifier with a gradient reversal layer , a concept similar to that of . the category predictor regulates the features in the multi - modal space in such a way that they are discriminative enough to be classified into the correct categories . meanwhile , the domain classifier with the gradient reversal layer makes the multi - modal features invariant to their domain . the grl ( gradient reversal layer ) is a layer whose backward pass reverses the gradient values : for a layer input , the forward pass is the identity mapping shown in equation [ eq : grl_forward ] , while in the backward pass the gradient of the network loss is reversed in sign and scaled , as shown in equation [ eq : grl_backward ] , by the adaptation factor , i.e. the amount of domain invariance we want to achieve at a given point of training .
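a minimal sketch of such a layer , assuming a pytorch - style autograd interface ( the class , function and variable names below are ours , not the authors ' ) :

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradient flowing back into the embedding network
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# usage: features reach the domain classifier unchanged,
# but the gradient reaches the embedding network with opposite sign
feat = torch.randn(8, 512, requires_grad=True)
rev = grad_reverse(feat, lam=0.5)
loss = rev.sum()
loss.backward()
print(feat.grad[0, :3])   # equals -0.5, since d(loss)/d(rev) = 1 everywhere
```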
the domain classifier is a simple neural network with two fully connected layers and a final sigmoid layer that determines the domain of a feature in the multi - modal embedding space ; it is trained to discriminate between features coming from the two domains . however , since the grl reverses the gradient , the feature embedding networks are trained to generate features whose domain is difficult for the domain classifier to determine . this creates an adversarial relationship between the embedding network and the domain classifier , and consequently the multi - modal embedding networks learn to generate domain - invariant features . for the network loss we sum the losses of the two ends , the category predictor and the domain classifier , using a sigmoid cross entropy loss for both . the joint gradient of the category predictor and the domain classifier is obtained by adding the two gradients , as shown in the equation below , where and are the errors of the category predictor and the domain classifier respectively , is the output of the last feature embedding layer , and is the adaptation factor . we train with the adam optimizer and a relatively small learning rate , because we empirically found it difficult to generate domain - invariant features with a regular learning rate . to achieve domain - invariant features we use the domain classifier and the gradient reversal layer , and the adaptation factor has to be properly scheduled from 0 to some positive value : at the first stage of training the domain classifier should first become competent , for the adversarial learning process to work , and as the factor increases , classifying the domain of the multi - modal features becomes harder and the domain classifier has to become smarter to classify them correctly . in our experiments , proper scheduling turned out to be important for achieving domain - invariant features . after exploring many scheduling methods we found the schedule below to be optimal ; it is exactly the same scheduling as in . in the equation , is the fraction of the current step with respect to the maximum number of training steps . we use batch normalization and l2 - normalization to normalize the image and text feature distributions just before the multi - modal feature layer . in our experiments , without proper normalization the model seems to train well ( the loss decreases gently and the classification accuracy is fine ) , but the t - sne embedding and the search results reveal that the image and text feature distributions collapse merely to achieve domain invariance ( collapsed meaning that the distances between features go to zero ) . a proper normalization process is therefore important for obtaining a multi - modal feature that is both domain invariant and well distributed .
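the text above states that the scheduling is exactly that of ganin s work ; assuming the standard schedule from that work ( the constant 10 below is therefore an assumption ) , a minimal sketch of the adaptation - factor schedule reads :

```python
import numpy as np

def adaptation_factor(step, max_steps, gamma=10.0):
    """lambda grows smoothly from 0 to 1 as training proceeds (Ganin-style schedule, assumed)."""
    p = step / float(max_steps)                    # fraction of training completed
    return 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0

# lambda ~ 0 early on (the domain classifier trains freely),
# approaching 1 late in training (strong push towards domain invariance)
print([round(adaptation_factor(s, 10000), 3) for s in (0, 1000, 5000, 10000)])
```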
figure [ fig : tsne ] shows the t - sne embedding of the multi - modal features computed from the 5000 images and sentences of the ms coco test set . panel ( a ) is the result of training with a triplet ranking loss ( the training objective being to minimize the distance within the same image - text pair and to maximize the distance between different pairs in the multi - modal space ) , which exploits the image - text pair relation . for this implementation we follow wang s work : we use fv - hglmm for the sentence representation , pre - trained vgg16 for the image representation and two - branch fully connected layers for the multi - modal embedding , as wang did ; the difference is that they use a complex data sampling scheme whereas we use random data sampling at the training stage . panel ( b ) is the result of training with the category predictor and the domain classifier , i.e. our model . in ( a ) the image and text feature distributions are not well mixed : the image and text multi - modal feature distributions are not similar and do not overlap in the multi - modal space , so semantically similar images and texts are not embedded to nearby points . in ( b ) , our result , we obtain an image - text multi - modal feature distribution that is well mixed with respect to domain : image and text overlap in the multi - modal space and are at the same time spread out enough to be discriminated , which means that image and text have similar distributions in the multi - modal space . we attribute the difference between ( a ) and ( b ) to the difference in training objective : our model ( b ) is trained to make the domain ( image , text ) of the multi - modal feature hard to classify , while the triplet ranking loss ( a ) is trained to pull together the members of the same image - text pair and to push apart different pairs . the result indicates that the triplet ranking loss does not adapt image and text to the same distribution in the multi - modal space , and that our training objective is more suitable than other methods for learning a multi - modal feature that is both well mixed with respect to domain and well distributed . table : category classification results on the ms coco val set for the various modes . `` image only '' uses only vgg - net and the category predictor , `` text only '' uses only word2vec , textcnn and the category predictor , so neither mode includes the domain classifier and the gradient reversal layer ; `` image+text(m ) '' is our multi - modal network model ( figure [ fig : model ] ) . we build our sentence - to - image search system with the 40504 images of the ms coco validation set , which are never seen at the training stage : we train on the 82783 images of the train set and test on the 40504 images of the val set , and we simply perform a k - nearest - neighbour search in the multi - modal space with the computed multi - modal features .
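a minimal sketch of this retrieval step , assuming the multi - modal features have already been computed and l2 - normalized ( array sizes and names below are illustrative ) :

```python
import numpy as np

def knn_search(query_vec, gallery_vecs, k=5):
    """Return indices of the k gallery items closest to the query in the shared space."""
    # with l2-normalized features, maximizing cosine similarity
    # is equivalent to minimizing euclidean distance
    sims = gallery_vecs @ query_vec
    return np.argsort(-sims)[:k]

# toy usage: one sentence embedding queried against 40504 image embeddings
rng = np.random.default_rng(0)
img_feats = rng.standard_normal((40504, 256))
img_feats /= np.linalg.norm(img_feats, axis=1, keepdims=True)
query = rng.standard_normal(256)
query /= np.linalg.norm(query)
print(knn_search(query, img_feats, k=5))
```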
figure [ fig : search_compare ] shows a comparison of our search results with category - based search results . the sentence query carries more semantic information than its category label ; a category - based search cannot exploit that semantic information , whereas our search system can . in figure [ fig : search_compare ] our search system finds several objects that are not contained in the category information but are present in the sentence query . in the 1st row of figure [ fig : search_compare ] , our search system correctly picks up the information `` woman standing under trees by a field '' from the sentence , even though at training time it was only trained to predict [ person , tie ] from the sentence and the image . this means that the multi - modal embedding process did not remove the universal information extracted by word2vec and vgg16 , and that it matched semantically relevant image and text features . in the 2nd row of figure [ fig : search_compare ] , our search system considers the image most similar to the query to be a food image whose category is [ spoon , broccoli ] , which does not overlap with the query s category ; interestingly , from a human semantic perspective the two can be recognized as carrying similar semantic information ( `` covered with different vegetables and cheese . '' ) . figure [ fig : search_result ] ( next page ) shows more search results from our multi - modal search system . as a benchmark of the search system we performed a recall evaluation ( table [ table : recallk ] , next page ) for sentence - to - image and image - to - sentence retrieval , using karpathy s data split scheme . compared to state - of - the - art results , our model s performance is relatively low . we believe the major reason is that previous models are trained to pull together image - text pairs and push apart different pairs in the multi - modal space , while recall evaluates whether the query s own pair appears in the retrieval result . thus , even if a search result is semantically reasonable ( figure [ fig : search_result ] ) , the recall can be low whenever the query s pair does not appear in the retrieval result . we therefore think this metric is not fully appropriate for assessing search quality , but we report the recall experiment for comparison . we have proposed a novel approach for multi - modal representation learning that uses the adversarial backpropagation concept . our method does not require image - text pair information for the multi - modal embedding but only uses category labels , whereas until now almost all other methods have exploited image - text pair information to learn the semantic relation between image and text features . our work can easily be extended to other multi - modal representation learning settings ( e.g. sound - image , sound - text , video - text ) , which will be the subject of future work . deng , jia , dong , wei , socher , richard , li , li - jia , li , kai , and fei - fei , li . imagenet : a large - scale hierarchical image database . in _ computer vision and pattern recognition , 2009 . cvpr 2009 . ieee conference on _ , pp . 248 - 255 . ieee , 2009 . frome , andrea , corrado , greg s , shlens , jon , bengio , samy , dean , jeff , mikolov , tomas , et al . devise : a deep visual - semantic embedding model . in _ advances in neural information processing systems _ , pp . 2121 - 2129 , 2013 . ganin , yaroslav and lempitsky , victor . unsupervised domain adaptation by backpropagation . in blei , david and bach , francis ( eds . ) , _ proceedings of the 32nd international conference on machine learning ( icml-15 ) _ , pp . 1180 - 1189 . jmlr workshop and conference proceedings , 2015 . url http://jmlr.org/proceedings/papers/v37/ganin15.pdf . goodfellow , ian , pouget - abadie , jean , mirza , mehdi , xu , bing , warde - farley , david , ozair , sherjil , courville , aaron , and bengio , yoshua . generative adversarial nets . in _ advances in neural information processing systems _ , pp . 2672 - 2680 , 2014 . karpathy , andrej and fei - fei , li . deep visual - semantic alignments for generating image descriptions . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pp . 3128 - 3137 , 2015 . kim , yoon . convolutional neural networks for sentence classification . in moschitti , alessandro , pang , bo , and daelemans , walter ( eds .
) , _ proceedings of the 2014 conference on empirical methods in natural language processing , emnlp 2014 , october 25 - 29 , 2014 , doha , qatar , a meeting of sigdat , a special interest group of the acl _ , pp .acl , 2014 .isbn 978 - 1 - 937284 - 96 - 1 .url http://aclweb.org/anthology/d/d14/d14-1181.pdf .klein , benjamin , lev , guy , sadeh , gil , and wolf , lior .associating neural word embeddings with deep image representations using fisher vectors . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pp . 44374446 , 2015 .lin , tsung - yi , maire , michael , belongie , serge , hays , james , perona , pietro , ramanan , deva , dollr , piotr , and zitnick , c lawrence .microsoft coco : common objects in context . ineuropean conference on computer vision _ , pp .springer , 2014 .ma , lin , lu , zhengdong , shang , lifeng , and li , hang .multimodal convolutional neural networks for matching image and sentence . in _ proceedings of the ieee international conference on computer vision _ , pp . 26232631 , 2015 .mikolov , tomas , sutskever , ilya , chen , kai , corrado , greg s , and dean , jeff . distributed representations of words and phrases and their compositionality . in _ advances in neural information processing systems_ , pp . 31113119 , 2013 .nair , vinod and hinton , geoffrey e. rectified linear units improve restricted boltzmann machines . in _ proceedings of the 27th international conference on machine learning ( icml-10 ) _ , pp . 807814 , 2010 .radford , alec , metz , luke , and chintala , soumith .unsupervised representation learning with deep convolutional generative adversarial networks . _corr _ , abs/1511.06434 , 2015 .url http://arxiv.org/abs/1511.06434 .reed , scott e. , akata , zeynep , yan , xinchen , logeswaran , lajanugen , schiele , bernt , and lee , honglak .generative adversarial text to image synthesis . in balcan , maria - florina and weinberger , kilian q. ( eds . ) , _ proceedings of the 33nd international conference on machine learning , icml 2016 , new york city , ny , usa , june 19 - 24 , 2016 _ , volume 48 of _ jmlr workshop and conference proceedings _ , pp . 10601069 .jmlr.org , 2016 .url http://jmlr.org/proceedings/papers/v48/reed16.html .srivastava , nitish , hinton , geoffrey e , krizhevsky , alex , sutskever , ilya , and salakhutdinov , ruslan .dropout : a simple way to prevent neural networks from overfitting . _journal of machine learning research _ , 150 ( 1):0 19291958 , 2014 .vinyals , oriol , toshev , alexander , bengio , samy , and erhan , dumitru .show and tell : a neural image caption generator . in _ proceedings of the ieee conference on computer vision and pattern recognition _, pp . 31563164 , 2015 .
we present a novel method for image - text multi - modal representation learning . to our knowledge , this is the first approach that applies the adversarial learning concept to multi - modal learning and that does not exploit image - text pair information to learn the multi - modal feature . we only use category information , in contrast with most previous methods , which use image - text pair information for the multi - modal embedding . we show that a multi - modal feature can be obtained without image - text pair information and that our method yields more similar image and text distributions in the multi - modal feature space than other methods that do use pair information . we also show that our multi - modal feature carries universal semantic information , even though it was trained for category prediction . our model is trained end - to - end by backpropagation , is intuitive and is easily extended to other multi - modal learning tasks .
a number of theoretical predictions exist for the propagation of magnetohydrodynamic ( mhd ) waves associated with coronal loops . the various types of propagation through a low - beta plasma were first studied using reasonable approximations for the conditions inside coronal loops . that work was followed by a large number of authors since then ( for example see one of the many review papers by ) and was recently confirmed and refined by numerical modelling . many attempts to observe propagations in coronal loops have been made since the first theoretical predictions . one of the most challenging types of oscillations to observe are the fast sausage mode mhd waves , which have expected periodicities below 1 ( see for a detailed review ) . radio , optical and x - ray observations have been used to detect such waves , with limited success . in this paper we present the application of image processing techniques as a way to enhance the optical observations made by the solar eclipse coronal imaging system ( secis ) project during the june 2001 total solar eclipse in zambia . a detailed description of the instrument can be found in . a number of authors , starting with , have published possible detections of oscillations with periodicities below 10 . reported possible detections of optical intensity oscillations with periods in the range of 0.5 - 4 , while more recently ( hereafter w01 , w02 and k03 respectively ) provided strong indications of oscillations with periodicities while reporting on the optical secis august 1999 total solar eclipse observations . continuing the work published for the secis 1999 observations , we have analysed observations made during the june 2001 total solar eclipse in lusaka , zambia . based on experience from the analysis of the 1999 data set , a number of numerical techniques were used in order to improve the signal - to - noise ( s / n ) ratio and to establish an objective , numerical criterion for the identification of coronal intensity oscillations over any statistical effects . a brief description of the observations and data analysis is presented here , with more emphasis given to the advanced mathematical techniques used in the effort to improve the s / n ratio and to separate real detections of coronal loop waves from the influence of noise in the data set . a detailed description of the secis instrument , as used for the 1999 observations , is provided by , while a discussion of the improvements made for the 2001 observations can be found in ( hereafter k04 ) . the observations taken by secis in 2001 and their data reduction are not presented in detail in this paper , as they are the subject of k04 ; however , a brief description of the data set follows , as needed for the presentation of the image processing techniques reported here . 8000 fe xiv images of 512 pixels , with a resolution of 4 arcsec per pixel , were taken during the 3.5 minutes of totality . although the observing field was large enough to include the whole moon disk and the lower part of the corona , we chose to observe only the north - east limb . this decision is in line with the 1999 observations and was taken to avoid edge effects of the ccd and optics , as well as to include important parts of the outer corona .
a brief description of the data reduction of the 2001 eclipse observations is included here , for the purpose of describing the data that were used for the application of the _ trous _ wavelet transform and the monte carlo analysis . the images taken during the 2001 observations were reduced using dark and flat - field frames , for dark current subtraction and flat field correction . the 8000 images were then automatically co - aligned using the edge of the moon as a reference point for a first order alignment ; for this first alignment the moon was effectively considered stationary during the eclipse . a more accurate alignment was subsequently achieved by using a clear feature from an area of the lower corona as reference ; this second alignment corrected for the motion of the moon with respect to the solar corona during totality . k04 provides a detailed discussion of the alignment technique used and its various steps . after co - alignment the 8000 images of the observations form a three - dimensional data array . the basic technique used for the detection of intensity oscillations throughout the secis project is the continuous wavelet transformation of the time series that corresponds to each of the pixels of the aligned images . details on the transformation function and its implementation for time series can be found in , while examples of the application of this analysis to secis data can be found in a number of publications ( e.g. w01 , w02 , k03 and k04 ) . additionally , k03 explicitly lists a number of criteria for a wavelet detection to be considered a solar intensity oscillation ( as opposed to a detection created by noise ) . to test the performance of the k03 criteria , k04 used automated software to scan large areas of the image covered by the moon and the upper corona and confirmed their effectiveness . one of the most significant limitations of the secis eclipse observations is the low s / n ratio . although the observations were taken using a broad fe xiv filter and the solar corona is known to be bright in fe xiv emission , there are three major factors that severely limit the s / n ratio achieved by secis ; for the purposes of this paper we only emphasise these s / n limiting factors : 1 . the primary mirror of the secis telescope has a diameter of 200 and a focal ratio of f/10 , because the instrument was designed mainly with observing solar eclipses in mind and has to be lightweight and easy to transport . 2 . the ccd cameras took observations at a rate of frames per second . this has the obvious disadvantages of a very short exposure time and a very fast readout speed ; with current technology such fast ccd readout speeds increase the readout noise drastically . 3 . as the purpose of the secis project is to detect high frequency intensity oscillations , atmospheric effects become significant , because the earth s atmosphere itself is known to oscillate at these frequencies , causing non - gaussian noise in the data set . having the above limitations in mind , the _ trous _ filtering was investigated as a means of noise reduction , because the algorithm has a number of advantages .
* the computational requirements are within acceptable levels . this is particularly important as both of the secis eclipse observations have a size of more than 8 . * the reconstruction algorithm is trivial . this is important as it makes the reconstruction of the time series more accurate . * the transform is known for every sample of the time series of every pixel . this is important for this project , as the exact moment an oscillation starts and ends on a given pixel can be very significant ; w02 made some interesting measurements of the propagation speed of a travelling wave in the secis 1999 total solar eclipse observations by determining the exact time the oscillation arrived at any given pixel . * the transform evolves through the different scales in a predictable manner . this makes it easier to choose the right scale for the filtering of a given data set . * the transform is isotropic . as the secis data are also isotropic ( especially in the time domain ) , any filtering used should also be isotropic to avoid artifacts being introduced . the _ trous _ filtering is a relatively recent ( some of the first applications were described by ) , sophisticated , highly tunable and complex multiresolution algorithm . due to its complexity , an exact description of the algorithm is outside the scope of this paper ; more information on the algorithm , its various parameters , the advantages and disadvantages of the various procedures that can be used in conjunction with the _ trous _ wavelet transform , and examples of its application to astrophysical data can be found in . each pixel of the three - dimensional data array of the reduced secis 2001 observations was treated as an independent time series and was transformed into a set of coefficients in a multi - scale domain using the _ trous _ wavelet transform . splines were used for the correlation and the noise was assumed to be gaussian . the sigma of the noise in the coefficients was determined automatically , and multiresolution hard k - sigma thresholding was used to remove the coefficients that were found to be noise . the time series was then reconstructed using the noise - corrected coefficients .
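a compact numpy sketch of this filtering step ( b3 - spline à trous decomposition , a mad - based noise estimate and hard k - sigma thresholding ) ; the number of scales and the threshold below are illustrative , not the values used on the secis data :

```python
import numpy as np

B3 = np.array([1., 4., 6., 4., 1.]) / 16.   # b3-spline smoothing kernel

def atrous_denoise(signal, n_scales=5, k=3.0):
    """A trous (starlet) decomposition with hard k-sigma thresholding, then reconstruction."""
    smooth, details = signal.astype(float), []
    for j in range(n_scales):
        # insert 2**j - 1 "holes" (zeros) between the kernel taps at scale j
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = B3
        next_smooth = np.convolve(smooth, kernel, mode='same')
        w = smooth - next_smooth                                 # wavelet coefficients at scale j
        sigma = np.median(np.abs(w - np.median(w))) / 0.6745     # robust noise estimate
        w[np.abs(w) < k * sigma] = 0.                            # hard k-sigma thresholding
        details.append(w)
        smooth = next_smooth
    return smooth + np.sum(details, axis=0)                      # trivial reconstruction

noisy = np.random.default_rng(1).standard_normal(8000)  # stands in for one pixel's time series
clean = atrous_denoise(noisy)
```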
to test the effectiveness of the _ trous _ wavelet transformation for noise reduction , the same three parts of the data set that were chosen by k04 were used again . figure [ areas ] contains the time - averaged image of the aligned secis 2001 total eclipse observations : after alignment , the three - dimensional data array was averaged over the time axis , resulting in a two - dimensional image . the edges of the image ( one hundred pixels on the left , right and bottom of the image and two hundred pixels from the top ) were discarded as they suffer from edge effects of the ccd . highlighted are the three areas of the data set that were used to test the effectiveness of the _ trous _ filtering algorithm and the monte carlo randomisation test : an area covered by the moon s disk , one in the outer corona and one in the lower corona . those areas were chosen to be in the proximity of ar 9513 , as this is potentially the most interesting area of the 2001 observations from the coronal loop oscillation point of view . the proximity of the test areas to the area where more detections of coronal waves are expected is important , as it produces more reliable statistics : there are no effects from large - scale variations in ccd sensitivity or from different atmospheric conditions . figure [ moon ] : the 30 by 30 pixel moon area of the 2001 observations . all pixels in this area had their gaussian noise removed by applying the _ trous _ wavelet transformation ; marked with x are the 374 detections made by the automated software of k04 after the removal of the gaussian noise . figure [ upper ] : the 50 by 50 pixel area of the outer corona . the signal in this area was filtered using the _ trous _ filtering algorithm and the automated technique of k04 was used for the detection of intensity oscillations ; 1054 pixels were found to oscillate in intensity . figure [ lower ] : the 60 by 70 pixel area of the lower corona and the moon limb . this area was treated with the same algorithm as in figures [ moon ] and [ upper ] and 276 detections were found . the same automated detection algorithm was used as in k04 , and the results for the periodicity range of 7 - 8 are displayed in figures [ moon ] , [ upper ] and [ lower ] . figure [ moon ] contains 374 pixels with false ( i.e. not caused by the solar corona ) detections of oscillations out of the 900 pixels of the sample ; before the _ trous _ noise filtering the same area contained 5 oscillating pixels . figure [ upper ] contains 1054 oscillating pixels out of 2500 , while before the application of the _ trous _ wavelet transformation algorithm there were 11 . in figure [ lower ] we have detections in 276 pixels out of the 4200 pixels of the sample , while k04 found 84 ( of which 66 were concentrated in a very compact area in the middle of the image ) . the difference in the number of detections before and after the filtering is significant .
the number of detections after the _ trous _ wavelet transformation increased by factors of 75 , 96 and 3.3 ( i.e. 374/5 , 1054/11 and 276/84 ) for the moon , outer corona and lower corona areas respectively . what is also important is that the increase is not the same for the three areas : while the increase in the number of detections is similar for the moon and outer corona areas , it is by far smaller for the lower corona . figure : the un - filtered time series , in which there are no detections of oscillations . although it is not surprising that the _ trous _ filtering affects the areas with high s / n ratio less than those with very low ( or zero ) s / n , it might appear strange that the filtering causes the areas with very low s / n ratio to be detected as oscillating . to examine this difference in some detail , the wavelet transformations of two pixels , one from the moon area and another from the lower corona , are included as figures [ wavelet_moon_noisy ] and [ wavelet_lower_noisy ] . both points were found to oscillate only after the filtering , not before . by examining the time series in panel ( a ) of figures [ wavelet_moon_noisy ] and [ wavelet_lower_noisy ] , it is obvious that both are very noisy , although in the case of figure [ wavelet_lower_noisy ] there is an underlying longer timescale variation , while in figure [ wavelet_moon_noisy ] the signal oscillates around an average value . these differences are also apparent in panel ( b ) of the two figures , where the wavelet transformation in figure [ wavelet_moon_noisy ] has many high values at very low periodicities ( since those are far more affected by non - systematic noise ) , while figure [ wavelet_lower_noisy ] , having a higher s / n ratio , is less affected by noise and therefore shows fewer high values even at low periodicities . at high periodicities , although there is an area of interest in both figures , there is nothing that satisfies the criteria established by k03 . after filtering with the _ trous _ wavelet transformation algorithm , the same two points were analysed again using wavelets . figure [ wavelet_moon_filtered ] contains the time series and the wavelet transformation corresponding to the point of the moon area analysed in figure [ wavelet_moon_noisy ] . all the jittering in the time series has disappeared and only some small peaks and a small long - term variation remain . the wavelet transformation corresponds well to what appears in the time series , producing very low values at the very high frequencies ( as the gaussian noise affects the high frequencies more ) , some short - lived high values at high frequencies ( corresponding to the peaks of the time series ) and a number of detections at lower frequencies ( corresponding to the long - term variation ) . as expected , the _ trous _ wavelet filtering was very effective at removing the gaussian noise ( which is why there are no oscillations at very low periodicities ) , and the detections at the higher periodicities should be attributed to another factor .
since by definition the area of the images covered by the moon receives no direct light from the lower corona , another source of light should be considered . as it is known that during total solar eclipses the sky is not completely dark ( it is much brighter than during the night , even to the naked eye ) , the long - term variations in the time series and the resulting detections should be attributed to scattered light and atmospheric effects that produce variations in brightness . although those variations also existed in the un - filtered data , they were small and affected by the gaussian noise , and therefore did not produce enough power to become valid detections . figure [ wavelet_lower_filtered ] : the filtered time series and its wavelet transform , in which a number of oscillations can be found . figure [ wavelet_lower_filtered ] contains the time series and wavelet transform of the point of the lower corona analysed in figure [ wavelet_lower_noisy ] . the effects of the _ trous _ wavelet filtering here are different from those on the previous time series . although most of the jittering has disappeared up to 130 , the variation that existed in the un - filtered data after 130 still largely remains . the same applies to the transformation in panel ( b ) of figure [ wavelet_lower_filtered ] : all high values at very low periodicities have disappeared up to 130 , but a significant number remains afterwards . at higher periodicities there is clearly an amplification of existing high values , mainly in the region after 130 , and the increase in power at the low periodicities coincides with a similar increase in the power at high periodicities . the statistical analysis of secis 2001 as performed by k03 & k04 indicates that there are false oscillations ( i.e. caused by noise or atmospheric effects ) that satisfy the criteria established by k03 . in particular , atmospheric seeing is known to produce differential distortion effects that yield false detections in the range of 5 to 19 periods . so far the only satisfactory way found to treat those false detections is statistical : the probability for a detection to be due to noise or atmospheric effects was calculated by scanning for oscillations large parts of the data where no real ( i.e. solar coronal ) oscillations can be expected . the number of oscillations found in those areas was then used to establish the probability of a detection being false in the areas of the lower solar corona that are in the proximity of the test areas . in the cases where the concentration of detections is much higher than expected , a detection of a coronal wave is reported ( as in k04 ) . in a bid to establish a quantitative criterion to determine which detections of oscillations are due to noise or atmospheric effects , and to overcome the limitations of the statistical methods used , the monte carlo analysis ( or randomisation ) was investigated . this method has been successfully used before in the analysis of solar physics data by , who applied it to time series analysed with wavelet transforms . it was chosen because it provides a test of noise that is distribution free ( or non - parametric ) , i.e. it does not depend on any given noise model ( e.g. gaussian , poisson , etc . ) . here we follow the fisher method as described by and perform 1000 permutations per pixel of the aligned three - dimensional array . in order to evaluate the performance of this randomisation method , we used the same test areas of the moon and the outer corona as in the previous section . for each individual pixel of these areas , the maximum power of the wavelet transformation was recorded and compared to the maximum values obtained from the 1000 shuffled time series produced from the original . the percentage of shuffled time series whose maximum wavelet power exceeded that of the original time series was then recorded . when the original time series is random noise of any given type ( as this test is distribution free ) , we expect 50% of the shuffled time series to have wavelet transformations with higher maximum power than the original data . for the purposes of this analysis we will consider any value smaller than 1% ( i.e. fewer than 1% of the shuffled time series having a wavelet transformation with more power than the original ) as indicating that the original time series has a strong signal compared to the noise level .
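a minimal sketch of this randomisation test ; a simple morlet transform is used here as a stand - in for the wavelet analysis actually employed , and the scales , series length and toy signal are illustrative :

```python
import numpy as np

def morlet_power(x, scales, w0=6.0):
    """Squared modulus of a simple Morlet cwt, one row per scale (scales in samples)."""
    x = np.asarray(x, float) - np.mean(x)
    power = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        power[i] = np.abs(np.convolve(x, psi, mode='same')) ** 2
    return power

def randomisation_test(x, scales, n_perm=1000, seed=0):
    """Fraction of shuffled series whose maximum wavelet power exceeds that of the original."""
    rng = np.random.default_rng(seed)
    p_obs = morlet_power(x, scales).max()
    count = sum(morlet_power(rng.permutation(x), scales).max() > p_obs
                for _ in range(n_perm))
    return count / n_perm

# a pixel with a genuine oscillation should give a fraction well below the 1% level used above
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 50.) + np.random.default_rng(2).standard_normal(2000)
print(randomisation_test(series, scales=[40, 50, 60], n_perm=200))
```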
table [ table ] : percentage of pixels in each of the test areas , before and after the noise filtering , that were found to have . a number of important results become apparent from table [ table ] . first of all , the areas of the moon and the outer corona have , before noise filtering , more than half of their pixels with a relatively low s / n ratio , while the percentage of lower corona pixels ( again before noise filtering ) with a similarly low s / n ratio is much lower ( 5% ) . this is significant as it confirms the choice of as a criterion to distinguish between the pixels whose signal is dominated by the solar corona and those that are not . second , the application of the _ trous _ noise filtering reduces the randomness of the data set significantly ( by a factor of 5 ) , making it difficult to use the randomisation test reliably . a third useful result shown in table [ table ] is that the percentage of pixels that have is approximately the same in the moon area ( where we know that the signal of the pixels does not come from direct observation of the sun ) and in the outer corona ( where we know the signal is partially direct light from the solar corona and partially scattered light ) . this is important as it indicates that the scattered light in this image area is a significant portion of the signal , and that it will be very difficult to distinguish which detections are real in this area by applying the existing criteria . the inability of the _ trous _ wavelet transformation to reduce the number of false detections can be explained if we consider the different contributions to the signal . any pixel value of this data set is a combination of scattered light from earth s atmosphere , gaussian noise and ( for those areas that are not covered by the moon ) light detected directly from the solar corona . by removing the gaussian noise , any weak oscillations due to earth s atmosphere could be detected more clearly . those oscillations are known to be introduced by various optical effects produced by earth s atmosphere : variations in transmission through the atmosphere , differential distortions caused by winds at high altitude , etc . in contrast , the area lit by the lower solar corona already had enough signal and a high s / n ratio , and therefore the reduction of the gaussian noise level did not contribute to a major increase in the number of detected oscillations .
also , because the signal coming from the solar corona in this particular area was strong before the _ trous _ noise filtering , the relatively weaker atmospheric effects did not increase dramatically after the subtraction of the gaussian noise ; as a result the number of detected oscillations did not increase as dramatically in that area as in the other two . the most significant breakthrough in the effort to establish a quantitative criterion to determine which detections of oscillations are real and which are not came when was calculated for the pixels that were found to oscillate by k04 ( i.e. the 5 pixels of the moon area , the 11 pixels of the outer corona area and the 66 pixels of the lower corona ) . all pixels from the lower corona were found to have ( i.e. none of the 1000 shuffled time series was found to have a wavelet transformation with higher power than that of the original time series ) , while 4 out of 5 of the detections from the moon area and 10 out of 11 detections from the outer corona were found with . therefore , a criterion can be established that uses the randomisation test described here to reject those pixels with . two well known signal processing techniques , the _ trous _ wavelet transformation and monte carlo analysis , were applied to the secis 2001 data . the two methods were evaluated by using two test areas ( areas where the signal from the solar corona was expected to be small or zero ) and a useful area where detections of coronal oscillations were expected . by comparing the results from the three areas , an accurate evaluation of the numerical techniques described above was made . the _ trous _ algorithm produced mixed results . although the reduction of the gaussian noise level was very significant , the ability to detect coronal waves was actually reduced . this is because of the effect of earth s atmosphere on the data set : intensity oscillations caused by the atmosphere were weaker in signal than those caused by the solar corona , and therefore , when the s / n ratio was lower , those oscillations did not reach the significance levels needed to become detections . after the noise was reduced , the significance levels of the intensity oscillations caused by the atmosphere increased enough to produce a large number of false detections . it is also worth noticing that the areas where the signal from the corona was stronger had far fewer false detections than those with weak or no signal , indicating that the signal from the lower corona is significantly stronger than the atmospheric effects . by the use of the two test areas of the data set it has become apparent that a reliable , objective , numerical method is needed in order to distinguish the detections caused by plasma from the solar corona from those introduced by atmospheric effects . the monte carlo analysis ( referred to here as the randomisation test ) was investigated as a means to make this distinction . all detections reported as real by k04 were found to have , while almost all ( 14 out of 16 ) of those reported as false were found to be in the range of . therefore the value of is proposed as a criterion for rejecting future detections from the lower corona region . holschneider m. , kronland - martinet r. , morlet j. , tchamitchian p. , 1989 , a real - time algorithm for signal analysis with the help of the wavelet transform , in _ wavelets : time - frequency methods and phase - space _ , 286 , springer - verlag
8000 images of the solar corona were captured during the june 2001 total solar eclipse . new software for the alignment of the images and an automated technique for detecting intensity oscillations using multi scale wavelet analysis were developed . large areas of the images covered by the moon and the upper corona were scanned for oscillations and the statistical properties of the atmospheric effects were determined . the _ trous _ wavelet transform was used for noise reduction and monte carlo analysis as a significance test of the detections . the effectiveness of those techniques is discussed in detail .
turing instability is one of the reference mechanisms for pattern formation in nature .the turing idea applies to a large gallery of phenomena that can be modelled via reaction - diffusion equations .these latter are mathematical models that describe the dynamical evolution of distinct families of constituents , mutually coupled and freely diffusing in the embedding medium .diffusion can seed the instability by perturbing the mean field homogeneous state , through an activator inhibitor mechanism , and so yielding the emergence of patched , non homogeneous in space , density distributions .the most intriguing applications of the turing paradigm are encountered in the context of morphogenesis , the branch of embryology which studies the development of patterns and forms in biology .the realm of application of the turing ideas encompasses however different fields , ranging from chemistry to biology , passing through physics , where large communities of homologous elements evolve and interact . according to the classical viewpoint, however , the diffusion coefficient of the inhibitor species has to be larger than that of the activator , for the patterns to eventually develop .this is a strict mathematical constraint which is not always met in e.g. contexts of biological relevance , and which limits the possibility of establishing a quantitative match between theory and empirical data .spatially extended systems made of interacting species sharing similar diffusivities can indeed display self - organized patched patterns , an observation that still calls for a sound interpretative scenario , beyond the classical turing mechanisms .one viable strategy to possibly reconcile theory and observations has been explored in and . in these studies, the authors considered the spontaneous emergence of persistent spatial patterns as mediated by the demographic endogenous noise , stemming from the intimate discreteness of the scrutinized system .the intrinsic noise translates into a systematic enlargement of the parameter region yielding the turing order , when compared to the corresponding domain predicted within the deterministic linear stability analysis .it is however unclear at present whether experimentally recorded patterns bear the imprint of the stochasticity , a possibility that deserves to be further challenged in the future . alternatively , and to bridge the gap with the experiments , the turing instability concept has been applied to generalized reaction diffusion equations . these latter account for cross diffusion terms which are hypothesized to exist on purely heuristic grounds or by invoking the phenomenological theory of linear non equilibrium thermodynamics .diagonal and off diagonal coefficients of the diffusion matrix are not linked to any microscopic representation of the examined dynamics and are hence treated as free parameters of the model . in authors quantify the impact of cross terms on the turing bifurcation , showing e.g that spatial order can materialize also if the inhibitor s diffusion ability is less pronounced than the activator s one .starting from this setting , the aims of this paper are twofold . on the one side , we shall elaborate on a microscopic theory of multispecies diffusion , fully justified from first principles .the theory here derived is specifically targeted to the two species case study and extends beyond the formulation of . 
on the other side , and with reference to the brusselator model, we will show that turing patterns can take place for any ratio of the main diffusivities . in doingso we will cast the conclusions of into a descriptive framework of broad applied and fundamental interest , where the key cross diffusion ingredients are not simply guessed a priori but rigorously obtained via a self consistent derivation anchored to the microscopic world . working in the context of a reference case study , the brusselator model, we shall also perform numerical simulations based on both the underlying stochastic picture and the idealized mean field formulation to elaborate on the robustness of the observed patterns . in the followingwe briefly discuss the derivation of the model , focusing on the specific case where two species are supposed to diffuse , sharing the same spatial reservoir .consider a generic microscopic system bound to occupy a given volume of a space .assume the volume to be partitioned into a large number of small hypercubic patches , each of linear size .each mesoscopic cell , labelled by , is characterized by a finite carrying capacity : it can host up to particles , namely of type , of type , and vacancies , hereafter denoted by . in general, the species will also interact , as dictated by specific reaction terms .let us start by solely focusing on the diffusion part , silencing any direct interaction among elementary constituents .as we shall remark , there exists an indirect degree of coupling that results from the competition for the available spatial resources . in practice, the mobility of the particles is balked if the neighbouring patches have no vacancies .particles may jump into a nearest neighbour patch , only if there is a vacancy to be eventually filled .this mechanism translates into the following chemical equation where and label nearest neighbour patches . here , and identify the particles that belong to cell . labels instead the empties that are hosted in patch . the parameters and stand for the associated reaction rates .similar reactions control the migration from cell towards cell .in addition , and extending beyond the scheme proposed in , we imagine the following reactions to hold : which in practice account for the possibility that elements ( resp . ) and ( resp . ) swap their actual positions .the state of the system is then specified by the number of and particles in each patch , the number of vacancies following from a straightforward normalization condition .introduce the vector , where .the quantity represents the rate of transition from state , to another state , compatible with the former .the transition rates associated with the migration between nearest neighbour , see eqs .( [ mig ] ) , take the form where we have made explicit in the components that are affected by the reactions . 
as discussed in , the factor reflects the natural requirement of a finite capacity , and will eventually yield a macroscopic modification of fick s law of diffusion . moreover , the chemical equations ( [ mig1 ] ) result in the following transition rates : the process here imagined is markovian , and the probability to observe the system in state at time is ruled by the master equation ( [ master ] ) , where the allowed transitions depend on the state of the system via the above relations . starting from this microscopic , hence inherently stochastic , picture one can derive a self consistent deterministic formulation , which holds exactly in the continuum limit . mathematically , one needs to obtain the dynamical equations that govern the time evolution of the ensemble averages and . to this end , multiply the master eq . ( [ master ] ) by , with , and sum over all . after an algebraic manipulation , which necessitates shifting some of the sums by , one eventually gets eq . ( [ pre_macro ] ) , where the notation means that we are summing over all patches which are nearest neighbours of patch . the averages in eq . ( [ pre_macro ] ) are performed explicitly by recalling the expressions for the transition rates given in eqs . ( [ trs ] ) and ( [ trs1 ] ) . one then replaces the averages of products by the products of averages , an operation that proves exact in the continuum limit . by introducing the continuum concentrations and , rescaling time and taking the size of the patches to zero , the discrete sums turning into the continuum laplacian operator when the rates are scaled appropriately , one finally gets \begin{aligned } \frac{\partial \phi_{a}}{\partial t } & = & d_{11 } \nabla^2 \phi_{a}+d_{12 } \left [ \phi_{a } \nabla^2 \phi_{b } - \phi_{b } \nabla^2 \phi_{a } \right ] , \nonumber \\ \frac{\partial \phi_{b}}{\partial t } & = & d_{22 } \nabla^2 \phi_{b}+d_{21 } \left [ \phi_{b } \nabla^2 \phi_{a } - \phi_{a } \nabla^2 \phi_{b } \right ] , \label{pdes}\end{aligned} where the diffusion coefficients are combinations of the microscopic hopping and swapping rates . the above system of partial differential equations for the concentrations and is a slightly modified version of the one derived in , the latter being formally recovered when the exchange rates of the swap reactions ( [ mig1 ] ) are set to zero . in the generalized context here considered , the cross diffusion coefficients and are different , specifically smaller , than the corresponding main diffusivities and . we emphasize again that the crossed , nonlinear contributions stem directly from the imposed finite carrying capacity and , as such , have a specific , fully justified , microscopic origin . the diffusive fluxes that drive the changes in the concentrations and can be written as in eqs . ( [ modified_ficks ] ) . it is interesting to notice that relations ( [ modified_ficks ] ) enable us to make contact with the field of linear non equilibrium thermodynamics ( lnet ) , a branch of statistical physics which defines the general framework for the macroscopic description of e.g. transport processes . one of the central features of lnet is the relation between the forces , which cause the state of the system to change , and the fluxes , which are the result of these changes .
within the formalism of lnet , the fluxes rule the diffusion of the two species and are linearly related to the forces , i.e. the gradients of the respective concentrations . the quantities that establish the formal link between forces and fluxes are the celebrated onsager coefficients , usually postulated on purely heuristic grounds . interestingly , eqs . ( [ modified_ficks ] ) provide a self consistent derivation of the onsager coefficients that enter the generalized fick s scenario here depicted . eqs . ( [ pdes ] ) can be written in the compact form ( [ compactpde ] ) , with a concentration dependent diffusion matrix . a stringent constraint from thermodynamics is that all eigenvalues of the diffusion matrix are real and positive , which in turn corresponds to requiring a positive trace and a positive determinant . a straightforward calculation confirms that both conditions are met : the concentrations are by definition positive and smaller than one , and the required inequalities follow , a result that points to the consistency of the proposed formulation . having derived a plausible macroscopic description of the two component diffusion process , we can now move on by allowing the involved species to interact , and consequently include the corresponding reaction terms in the mathematical model . as an important remark , we notice that these latter can also be obtained by following the above , rather general , approach that bridges the micro and macro realms . first , one needs to resolve the interactions among individual constituents by translating the microscopic processes involved into chemical equations ; these include cooperation and competition effects , as well as the indirect interferences stemming from the finite carrying capacity imposed in each mesoscopic patch . then , one recovers the deterministic equations for the global concentrations by operating in the continuum limit . in general , eq . ( [ compactpde ] ) is then modified into eq . ( [ compactpde2 ] ) , which includes the reaction terms . as we have anticipated , the interest of this generalized formulation resides in the fact that it allows for turing - like patterns in a region of parameter space that is forbidden when conventional reaction diffusion systems are considered . the novelty of the proposed formulation lies in the presence of specific cross diffusion terms , which follow from a sound physical requirement and add to the classical laplacians , the signature of fickean diffusion . let be the steady state solution of the homogeneous ( aspatial ) system . the fixed point is linearly stable if the jacobian matrix of the reaction terms , evaluated at the homogeneous fixed point , has positive determinant and negative trace . back to the complete model , a spatial perturbation superposed on the homogeneous fixed point can become unstable if specific conditions are met . such conditions , inspired by the seminal work of turing , are hereafter derived via a linear stability analysis . define the perturbation around the fixed point and proceed with a linearization of eq . ( [ compactpde2 ] ) ; going to fourier space one gets eq . ( [ lineartur ] ) .
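in explicit form , and under the assumption that the diffusion contribution is obtained by linearizing eqs . ( [ pdes ] ) around the homogeneous fixed point , the problem for each fourier mode can be written as

\frac{d \mathbf{w}_k}{dt } = \left ( \mathbf{j } - k^2 \mathbf{d } \right ) \mathbf{w}_k , \qquad \mathbf{d } = \begin{pmatrix } d_{11 } - d_{12 } \hat{\phi}_b & d_{12 } \hat{\phi}_a \\ d_{21 } \hat{\phi}_b & d_{22 } - d_{21 } \hat{\phi}_a \end{pmatrix } ,

where \mathbf{j } is the jacobian of the reaction terms and \hat{\phi}_a , \hat{\phi}_b denote the fixed point concentrations .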
by characterizing the eigenvalues of the matrix , one can determine whether a perturbation to the homogeneous solution can yield patterns formation .in particular , if one of the eigenvalues admits a positive real part for some values of , then a spatially modulated instability develops .the growth of the perturbation as seeded by the linear instability will saturate due to the non linearities and eventually results in a characteristic pattern associated to the unstable mode .steady patterns of the turing type require in addition that the imaginary part of the eigenvalues associated to the unstable mode are zero .in formulae , the turing instability sets in if there exists a such that and .these latter conditions are to be imposed , jointly with the request of a stable homogeneous fixed point ( , ) , to identify the parameters values that drive the instability .alternatively , one can obtain a set of explicit conditions following the procedure outlined below , and adapted from .the eigenfunctions of the laplacian operator are : and we write the solution to eq .( [ lineartur ] ) in the form : by substituting the ansatz into eq .yields : \mathbf w_k = 0.\ ] ] the above system admits a solution if the matrix is singular , i.e. : the solutions of ( [ det ] ) can be interpreted as dispersion relations .if at least one of the two solutions displays a positive real part , the mode is unstable , and the dynamics drives the system towards a non homogeneous configuration in response to the initial perturbation .introduce the auxiliary quantity defined as : \\ -\hat{\phi}_b \left [ d_{12 } \frac{\partial f_b}{\partial { \phi}_b } + d_{21 } \frac{\partial f_a}{\partial { \phi}_b } \right ] \label{cond_gen}\end{gathered}\ ] ] then a straightforward calculation results in the following compact conditions for the instability to occur : together with and . for demonstrative purposes we now specialize on a particular case study and trace out in the parameters plane , the domain that corresponds to the turing instability .our choice is to work with the brusselator model reflects the presence of the finite carrying capacity , as discussed in .similar conclusions hold however if the diluted limit is performed , _ just _ in the reaction terms , hence replacing with . ] which implies setting and .species plays now the role of the activator , while stands for the inhibitor . results of the analysis are reported in left panels of fig .[ figure1 ] , where the region of interest is singled out in the plane ( ) , for different choices of .turing patterns are predicted to occur for , at odd with what happens in the conventional scenario where standard fick s diffusion is assumed to hold ( see below ) .the right panels report the results of direct simulations and confirm the presence of macroscopically organized patterns in a region of the parameters space that is made classically inaccessible by the aforementioned , stringent condition the simulations refers to the choice .these observations are general and similar conclusions can be drawn assuming other reactions schemes of the inhibitor / activator type , different from the brusselator model . [ cols="^,^ " , ]summing up , turing patterns can develop for virtually _ any _ ratio of the main diffusivities in a multispecies setting .this striking effect originates from the generalized diffusion theory that is here assumed to hold and that builds on the scheme discussed in . 
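the dispersion relation ( [ det ] ) can be scanned numerically. in the sketch below the reaction part is not the modified brusselator of the text (whose explicit terms are not reproduced in this extraction) but a generic activator-inhibitor jacobian with a stable homogeneous state, chosen purely for illustration; the cross-diffusion matrix is the reconstruction used in the previous snippet, evaluated at an assumed homogeneous state. the point is the qualitative one made above: with equal main diffusivities, plain fickian diffusion gives no growing mode, while the cross terms open a band of unstable, non-oscillatory modes.

```python
import numpy as np

# illustrative stand-ins (not the paper's model)
J = np.array([[0.80, 1.00],
              [-0.82, -0.90]])        # tr J < 0, det J > 0: stable without diffusion
pa, pb = 0.08, 0.9                    # assumed homogeneous concentrations, pa + pb < 1
d11 = d22 = 1.0                       # equal main diffusivities
d12, d21 = 1.0, 0.1                   # cross coefficients, not exceeding d11, d22

D_cross = np.array([[d11 - d12 * pb, d12 * pa],
                    [d21 * pb,       d22 - d21 * pa]])
D_fick = np.diag([d11, d22])          # plain fickian diffusion for comparison

def max_growth(D, kmax=10.0, nk=2000):
    """largest re(lambda) of J - k^2 D over a grid of wavenumbers."""
    best, kbest, im_best = -np.inf, 0.0, 0.0
    for k in np.linspace(0.0, kmax, nk):
        lam = np.linalg.eigvals(J - k**2 * D)
        i = int(np.argmax(lam.real))
        if lam.real[i] > best:
            best, kbest, im_best = lam.real[i], k, abs(lam.imag[i])
    return best, kbest, im_best

for name, D in [("fickian, equal diffusivities", D_fick),
                ("with cross-diffusion terms  ", D_cross)]:
    g, k, im = max_growth(D)
    print(f"{name}: max Re(lambda) = {g:+.3f} at k = {k:.2f}, |Im| there = {im:.3f}")
```

a positive maximum with vanishing imaginary part in the second case signals a steady, spatially modulated (turing-like) instability that the purely fickian case with equal diffusivities does not show.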
because of the competition for the available resources , a modified ( deterministic ) diffusive behaviouris recovered : cross diffusive terms appear which links multiple diffusing communities and which add to the standard laplacian terms , relic of fick s law . the fact that turing like patterns are possible for e.g. equal diffusivities of the species involved . ] , as follows a sound dynamical mechanism , constitutes an intriguing observation that hold promise to eventually reconcile theory and experimental evidences .the investigated setting applies in particular to multispecies systems that evolve in a crowded environment , as it happens for instance inside the cells where different families of proteins and other biomolecular actors are populating a densely packed medium .it is interesting to notice that the stochastic fluctuations , endogenous to the scrutinized system in its discrete version , eventually destroy the patterns , that are instead deemed to be stable according to the idealized deterministic viewpoint .the lifetime of the metastable patched patterns increases however with the size of the system , in striking analogy with what has been observed for the so called quasi stationary states , out of equilibrium regimes observed in systems subject to long range interactions . for large enough , the homogeneization as seeded by fluctuations is progressively delayed and eventually prevented in the continuum limit .we wish to thank alan mckane and tommaso biancalani for useful discussion . the work is supported by ente cassa di risparmio di firenze and the program prin2009 . turing am ( 1952 ) the chemical basis of morphogenesis .phils trans r soc london ser b 237:37 - 72 .buceta j , lindenberg k ( 2002 ) switching - induced turing instability .phys rev e 66:046202 .murray jd , mathematical biology , second edition , springer .maynard smith j ( 1974 ) models in ecology , cambridge university press , cambridge .levin sa , segel la ( 1976 ) hypothesis for origin of planktonic patchiness .nature 259:659 .mimura m , murray jd ( 1978 ) on a diffusive prey - predator model which exhibits patchiness .j theor biol 75:249 - 262 .baurmann m , gross t , feudel u ( 2007 ) instabilities in spatially extended predator prey systems : spatio - temporal patternsin the neighborhood of turing hopf bifurcations .j theor biol 245:220 - 229 . 
wilson w g , harrison sp , hastings a , mccann k ( 1999 ) exploring stable pattern formation in models of tussock moth populations .j anim ecol 68:94 - 107 .shiferaw y , karma a ( 2006 ) turing instability mediated by voltage and calcium diffusion in paced cardiac cells .pnas 103:5670 - 5675 .ammelt e , schweng d , purwins hg ( 1993 ) spatio - temporal pattern formation in a lateral high - frequency glow discharge system .physics letters a 179:348 - 354 .butler t , goldenfeld n ( 2009 ) robust ecological pattern formation induced by demographic noise phys rev e 80:030902(r ) .biancalani t , fanelli d , di patti f ( 2010 ) stochastic turing patterns in the brusselator model .phys rev e 81:046215 .de groot sr , mazur p ( 1984 ) , non - equilibrium thermodynamics , dover , new york .kumar n , horsthemke w ( 2011 ) effects of cross diffusion on turing bifurcations in two - species reaction - transport systems .phys rev e 83:036105 .fanelli d , mckane a ( 2010 ) diffusion in a crowded environment .phys rev e 82:021113 .chung jm , peacock - lpeza e ( 2007 ) bifurcation diagrams and turing patterns in a chemical self - replicating reaction - diffusion system with cross diffusion .j chem phys 127:174903 .iida m , mimura m , ninomiya h ( 2006 ) diffusion , cross - diffusion and competitive interaction .j math biol 53:617641 .klika v , baker re , headon d , gaffney ea ( 2011 ) the influence of receptor - mediated interactions on reaction - diffusion mechanisms of cellular self - organisation . bull math biol .doi 10.1007/s11538 - 011 - 9699 - 4. de kepper p , castets v , dulos e , boissonade j ( 1991 ) turing - type chemical patterns in the chlorite - iodide - malonic acid reaction .physica d 49:161 - 169 .lengyel i , epstein ir ( 1991 ) modeling of turing structure in the chlorite - iodide - malonic acid - starch reaction system .science 251:650652 .vanag vk , epstein ir ( 2001 ) pattern formation in a tunable medium : the belousov - zhabotinsky reaction in an aerosol ot microemulsion .phys rev lett 87:228301 .strier de , ponce dawson s ( 2007 ) turing patterns inside cells .plos one 2:e1053 .baker re , gaffney ea , maini pk ( 2008 ) partial differential equations for self - organization in cellular and developmental biology .nonlinearity 21:r251 - 11r290 .gillespie dt ( 1976 ) . a general method for numerically simulating the stochastic time evolution of coupled chemical reactions .j. comp . phys .22:403 - 434 .antoniazzi a , fanelli d , ruffo s , yamaguchi y ( 2007 ) non equilibrium tricritical point in a system with long - range interactions .phys . rev .99 040601 .campa a , dauxois t , ruffo s. ( 2009 ) statistical mechanics and dynamics of solvable models with long - range interactions , physics reports 480 , 57 - 159 .rogers t , mckane a , ( 2012 ) jamming and pattern formation in models of segregation , phys .e 85 , 041136 .
the turing instability paradigm is revisited in the context of a multispecies diffusion scheme derived from a self-consistent microscopic formulation. the analysis is developed with reference to the case of two species, which share the same spatial reservoir and experience a degree of mutual interference due to the competition for the available resources. the turing instability can set in for all ratios of the main diffusivities, also when the (isolated) activator diffuses faster than the (isolated) inhibitor. this conclusion, at odds with the conventional view, is here exemplified for the brusselator model and ultimately stems from having assumed a generalized model of multispecies diffusion, fully anchored to first principles, which also holds under crowded conditions. keywords: turing instability, stochastic processes, reaction-diffusion systems, cross-diffusion systems
there are numerous types of theoretical data which , if integrated in a vo , will without doubt enhance its scientific capabilities .although it has been stressed the vo itself is not intended to be a remote observatory , some branches in the theory part of a vo could very well emulate such behavior .one can imagine that after an initial selection from a set of models or a match to an observation , fine - tuning be done by re - running the models , given enough computer time and access to software ( for an existing example see e.g. pound et al .2000 ) .it is perhaps instructive to view the theory part of a vo from two different points of view : that of the theorist and that of the observer .what will a theorist find in a vo ?he will find a large number of models that can be `` observed '' .observing such models can be done in several ways .first , one can make simulated observations of simulation data , and then compare observations with these models . given that many models add the independent time parameter, simulations also add the complexity of exploring 4-dimensional histories and finding a best match in the time domain .the 3d spatial information will mostly likely be on a grid , or a discrete set of points .a new and largely unused capability of theory data in a vo will be to compare models with models , much like observations are compared .this should also result in improved models , as differences and similarities between models can quickly be highlighted .theorists will also find a variety of standard initial conditions or benchmark data in a vo , which will make it easier to test new algorithms and compare them to previously generated data .in addition , one could also argue that besides saving the data , saving the code that generated the data will be valuable .finally , adding theoretical data to a vo will undoubtedly also spur new data mining and cs techniques .what will an observer find about theory data in a vo ?first , models can be selected and compared to observations , processing those models as though they were observed with a particular instrument .second , theory data can also be used to calibrate observations .examples are : comparing hipparchos proper motion studies with a similar analysis applied to simulations , and using stellar evolution tracks to determine cluster ages from an hrd .the added complexity of theoretical data will need new searching and matching techniques , and thus bring different type of data mining and computer science to the playing field .in order to develop a better understanding of theory data , we have started various types of theory data , mostly simulations in which time is the independent variable .some datasets are simple benchmarks , taking initial conditions for well - known problems in astrophysics , going back to the first published benchmark of the iau 25-body problem ( lecar 1968 ) . during the iau 208 conference in tokyo ( teuben 2002 ) a survey was undertaken amongst practitioners of a well defined subset of theory data : particle simulations .these ranged from planetary to cosmological simulations , and included grid - based as well as particle - based calculations .one noteworthy find was that a surprisingly large fraction of the theorists would rather not like to see their data published in a vo , since computers get faster each year , algorithms get better and data ages quickly .unlike observations , theoretical data often suffer from assumptions and thus comparisons can have less meaning than naively thought . 
on a technical note, simulation data actually do not differ much from observational data. most theoretical data sets fall into two types: grid based (an "image", each datum being of the same type) or particle based (a "table" with columns and rows). an image can also be seen as a special case of a table. in recent years, added complexities are nested grids, such as in amr, and the tdyn tables in starlab's kira code (portegies zwart et al. 2000), where only relevant particles are updated. the miriad uv-data format is an example where such complexities have also been introduced to observational data. defining the header and metadata for theoretical data will be at least as challenging as for observational data. the recently completed grape-6 (hut and makino 1999, makino 2002) can now produce massive datasets of terabytes for a single run. in order to handle these data, and to share them with 'guest observers', we have started to set up a data archive (see also the manybody.org web site). in the near future we plan to start federating our archive with other theory archives and with the budding virtual observatories.
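the two data types mentioned above, a particle "table" and a gridded "image", are easy to convert between, and the conversion is also the first step in making a simulated observation of a simulation. the sketch below is purely illustrative: the column layout and header keywords are invented for the example and do not refer to any existing archive format. it stores a toy particle snapshot as a table with a small metadata header, grids it into a 2d surface-density image, and smooths it with a gaussian "beam" to mimic an instrumental response.

```python
import numpy as np

rng = np.random.default_rng(42)

# a toy particle "table": positions and masses, plus an invented metadata header
header = {"n_particles": 10000, "time_myr": 0.0, "length_unit": "pc", "mass_unit": "msun"}
pos = rng.normal(scale=1.0, size=(header["n_particles"], 3))
mass = np.full(header["n_particles"], 1.0)

# grid the particles into an "image": projected surface density on a 64x64 map
nbins, extent = 64, 4.0
image, xedges, yedges = np.histogram2d(
    pos[:, 0], pos[:, 1], bins=nbins,
    range=[[-extent, extent], [-extent, extent]], weights=mass)

# mock observation: convolve with a gaussian beam (fft-based, periodic edges)
sigma_pix = 2.0
freq = np.fft.fftfreq(nbins)
fx, fy = np.meshgrid(freq, freq, indexing="ij")
beam_ft = np.exp(-2.0 * np.pi**2 * sigma_pix**2 * (fx**2 + fy**2))
observed = np.fft.ifft2(np.fft.fft2(image) * beam_ft).real

print("fraction of particle mass captured on the grid:", image.sum() / mass.sum())
print("total flux preserved by beam smoothing        :",
      np.isclose(observed.sum(), image.sum()))
```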
during the last couple of years , observers have started to make plans for a virtual observatory , as a federation of existing data bases , connected through levels of software that enable rapid searches , correlations , and various forms of data mining . we propose to extend the notion of a virtual observatory by adding archives of simulations , together with interactive query and visualization capabilities , as well as ways to simulate observations of simulations in order to compare them with observations . for this purpose , we have already organized two small workshops , earlier in 2001 , in tucson and aspen . we have also provided concrete examples of theory data , designed to be federated with a virtual observatory . these data stem from a project to construct an archive for our large - scale simulations using the grape-6 ( a 32-teraflops special purpose computer for stellar dynamics ) . we are constructing interfaces by which remote observers can observe these simulations . in addition , these data will enable detailed comparisons between different simulations .
word sense disambiguation is a crucial task in many nlp applications , such as machine translation , parsing and text retrieval .given the growing utilization of machine readable texts , word sense disambiguation techniques have been variously used in corpus - based approaches .unlike rule - based approaches , corpus - based approaches release us from the task of generalizing observed phenomena in order to disambiguate word senses .our system is based on such an approach , or more precisely it is based on an example - based approach . since this approach requires a certain number of examples of disambiguated verbs , we have to carry out this task manually , that is , we disambiguate verbs appearing in a corpus prior to their use by the system .a preliminary experiment on ten japanese verbs showed that the system needed on average about one hundred examples for each verb in order to achieve 82% of accuracy in disambiguating verb senses . in order to build an operational system ,the following problems have to be taken into account : 1 .since there are about one thousand basic verbs in japanese , a considerable overhead is associated with manual word sense disambiguation .2 . given human resource limitations ,it is not reasonable to manually analyze large corpora as they can provide virtually infinite input .3 . given the fact that example - based natural language systems , including our system , search the example - database ( database , hereafter ) for the most similar examples with regard to the input , the computational cost becomes prohibitive if one works with a very large database size .all these problems suggest a different approach , namely to _ select _ a small number of optimally informative examples from a given corpora .hereafter we will call these examples `` samples . ''our method , based on the utility maximization principle , decides on which examples should be included in the database .this decision procedure is usually called _selective sampling_. selective sampling directly addresses the first two problems mentioned above .the overall control flow of systems based on selective sampling can be depicted as in figure [ fig : concept ] , where `` system '' refers to dedicated nlp applications .the sampling process basically cycles between the execution and the training phases . during the execution phase ,the system generates an interpretation for each example , in terms of parts - of - speech , text categories or word senses . during the training phase, the system selects samples for training from the previously produced outputs . during this phase, a human expert provides the correct interpretation of the samples so that the system can then be trained for the execution of the remaining data .several researchers have proposed such an approach .lewis et al . proposed an example sampling method for statistics - based text classification . in this method, the system always selects samples which are not certain with respect to the correctness of the answer .dagan et al .proposed a committee - based sampling method , which is currently applied to hmm training for part - of - speech tagging .this method selects samples based on the training utility factor of the examples , i.e. the informativity of the data with respect to future training .however , as all these methods are implemented for statistics - based models , there is a need to explore how to formalize and map these concepts into the example - based approach . 
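the control flow sketched in figure [ fig : concept ] can be written down compactly. the snippet below is a generic skeleton of such a selective sampling loop, not the implementation used in this paper: the selection criterion is a simple nearest-neighbour margin on synthetic two-dimensional data, standing in for the training utility measure developed later.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy pool of unlabelled examples: 2-d feature vectors drawn from two "senses"
X_pool = np.vstack([rng.normal([0.0, 0.0], 1.0, (150, 2)),
                    rng.normal([2.5, 2.5], 1.0, (150, 2))])
y_pool = np.array([0] * 150 + [1] * 150)      # oracle labels (the human expert)

labelled_X = [X_pool[0], X_pool[-1]]          # one seed example per sense
labelled_y = [y_pool[0], y_pool[-1]]
unlabelled = set(range(1, len(X_pool) - 1))

def nn_score(x, sense):
    """similarity of x to the closest labelled example of a given sense."""
    d = [np.linalg.norm(x - xi) for xi, yi in zip(labelled_X, labelled_y) if yi == sense]
    return -min(d)

for _ in range(25):                           # training-phase iterations
    # execution phase: interpret every unlabelled example; the margin between the
    # two best sense scores is a crude stand-in for the training utility
    margins = {i: abs(nn_score(X_pool[i], 0) - nn_score(X_pool[i], 1)) for i in unlabelled}
    pick = min(margins, key=margins.get)      # least certain = most informative
    labelled_X.append(X_pool[pick])           # the human expert supplies the sense
    labelled_y.append(y_pool[pick])
    unlabelled.remove(pick)

sel = np.array(labelled_X[2:])                # the examples chosen by the loop
mid = np.array([1.25, 1.25])                  # halfway between the two sense centres
print("selected examples:", len(sel))
print("mean distance of selected / all examples to the boundary region: "
      f"{np.linalg.norm(sel - mid, axis=1).mean():.2f} / "
      f"{np.linalg.norm(X_pool - mid, axis=1).mean():.2f}")
```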
with respect to problem 3 , a possible solution would be the generalization of redundant examples .however , such an approach implies a significant overhead for the manual training of each example prior to the generalization .this shortcoming is precisely what our approach allows to avoid : reducing both the overhead as well as the size of the database .section [ sec : vader ] briefly describes our method for a verb sense disambiguation system .the next section [ sec : sampling ] elaborates on the example sampling method , while section [ sec : eval ] reports on the results of our experiment . before concluding in section [ sec : conclusion ] , discussion is added in section [ sec : discussion ] .[ cols= " < ,< , < " , ] [ tab : corpus ] we at first estimated the system s performance by its precision , that is the ratio of the number of correct outputs , compared to the number of inputs . in this experiment , we set in equation ( [ eq : certainty ] ) , and in equation ( [ eq : utility_temp ] ) .the influence of ccd , i.e. in equation ( [ eq : ccd ] ) , was extremely large so that the system virtually relied solely on the sim of the case with the greatest ccd .figure [ fig : precision ] shows the relation between the size of the training data and the precision of the system . in figure [ fig : precision ] , when the x - axis is zero , the system has used only the seeds given by ipal .it should be noted that with the final step , where all examples in the training set have been provided to the database , the precision of both methods is equal . looking at figure [ fig : precision ] one can see that the precision of random sampling was surpassed by our training utility sampling method .it solves the first two problems mentioned in section [ sec : intro ] .one can also see that the size of the database can be reduced without degrading the system s precision , and as such it can solve the third problem mentioned in section [ sec : intro ] .we further evaluated the system s performance in the following way .integrated with other nlp systems , the task of our verb sense disambiguation system is not only to output the most plausible verb sense , but also the interpretation certainty of its output , so that other systems can vary the degree of reliance on our system s output .the following are properties which are required for our system : * the system should output as many correct answers as possible , * the system should output correct answers with great interpretation certainty , * the system should output incorrect answers with diminished interpretation certainty .motivated by these properties , we formulated a new performance estimation measure , pm , as shown in equation ( [ eq : performance ] ) .a greater accuracy of performance of the system will lead to a greater pm value . in equation ( [ eq : performance ] ) , is the maximum value of the interpretation certainty , which can be derived by substituting the maximum and the minimum interpretation score for and , respectively , in equation ( [ eq : certainty ] ) .following table [ tab : kuro ] , we assign 11 and 0 to be the maximum and the minimum of the interpretation score , and therefore = 11 , disregarding the value of in equation ( [ eq : certainty ] ) . is the total number of the inputs and is a coefficient defined as in equation ( [ eq : delta ] ) . in equation ( [ eq : delta ] ) , is the parametric constant to control the degree of the penalty for a system error . 
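the comparison of figure [ fig : precision ] can be mimicked on synthetic data. the snippet below is again only illustrative: a toy two-sense data set, a nearest-neighbour classifier, and example selection either at random or by smallest sense margin; neither the data nor the selection criterion are those used in the paper, so the printed numbers only illustrate the shape of such learning curves.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_per_sense):
    X = np.vstack([rng.normal([0.0, 0.0], 1.2, (n_per_sense, 2)),
                   rng.normal([2.0, 2.0], 1.2, (n_per_sense, 2))])
    return X, np.array([0] * n_per_sense + [1] * n_per_sense)

X_pool, y_pool = make_data(300)     # examples available for manual disambiguation
X_test, y_test = make_data(300)     # held-out inputs for measuring precision

def sense_distances(X, X_lab, y_lab):
    """distance from each row of X to the nearest labelled example of each sense."""
    d = np.linalg.norm(X[:, None, :] - X_lab[None, :, :], axis=-1)
    d0 = np.where(y_lab == 0, d, np.inf).min(axis=1)
    d1 = np.where(y_lab == 1, d, np.inf).min(axis=1)
    return d0, d1

def run(strategy, budget=60):
    idx = [0, 300]                  # one seed example per sense
    precision = []
    for _ in range(budget):
        X_lab, y_lab = X_pool[idx], y_pool[idx]
        d0, d1 = sense_distances(X_test, X_lab, y_lab)
        precision.append(((d1 < d0).astype(int) == y_test).mean())
        rest = np.setdiff1d(np.arange(len(X_pool)), idx)
        if strategy == "random":
            idx.append(int(rng.choice(rest)))
        else:                       # smallest margin = least certain = most useful
            r0, r1 = sense_distances(X_pool[rest], X_lab, y_lab)
            idx.append(int(rest[np.abs(r0 - r1).argmin()]))
    return precision

for s in ("random", "margin-based"):
    p = run(s)
    print(f"{s:12s} precision after 10 / 30 / 60 added examples: "
          f"{p[9]:.2f} / {p[29]:.2f} / {p[-1]:.2f}")
```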
for our experiment , we set , meaning that pm was in the range to 1 .figure [ fig : performance ] shows the relation between the size of the training data and the value of pm . in this experiment, it can be seen that the performance of random sampling was again surpassed by our training utility sampling method , and the size of the database can be reduced without degrading the system s performance .in this section , we will discuss several remaining problems .first , since in equation ( [ eq : utility ] ) , the system calculates the similarity between and each example in , computation of becomes time consuming . to avoid this problem , a method used in efficient database search techniques , in which the system can search some neighbour examples of with optimal time complexity , can be potentially used .second , there is a problem as to when to stop the training : that is , as mentioned in section [ sec : intro ] , it is not reasonable to manually analyze large corpora as they can provide virtually infinite input .one plausible solution would be to select a point when the increment of the total interpretation certainty of remaining examples in is not expected to exceed a certain threshold .finally , we should also take the semantic ambiguity of case fillers ( noun ) into account .let us consider figure [ fig : uncertain ] , where the basic notation is the same as in figure [ fig : certainty ] , and one possible problem caused by case filler ambiguity is illustrated .let `` x1 '' and `` x2 '' denote different senses of a case filler `` x. '' following the basis of equation ( [ eq : certainty ] ) , the interpretation certainty of `` x '' is small in both figure [ fig : uncertain - a ] and [ fig : uncertain - b ] .however , in the situation as in figure [ fig : uncertain - b ] , since ( a ) the task of distinction between the _ verb _ senses 1 and 2 is easier , and ( b ) instances where the sense ambiguity of case fillers corresponds to distinct verb senses will be rare , training using either `` x1 '' or `` x2 '' will be less effective than as in figure [ fig : uncertain - a ] .it should also be noted that since _ bunruigoihyo _ is a relatively small - sized thesaurus and does not enumerate many word senses , this problem is not critical in our case .however , given other existing thesauri like the edr electronic dictionary or wordnet , these two situations should be strictly differentiated .[ fig : uncertain - a ] [ fig : uncertain - b ]in this paper we proposed an example sampling method for example - based verb sense disambiguation .we also reported on the system s performance by way of experiments .the experiments showed that our method , which is based on the notion of training utility , has reduced the overhead for the training of the system , as well as the size of the database . 
as pointed out in section [ sec : intro ], the generalization of examples is another method for reducing the size of the database. whether coupling these two methods would increase overall effectiveness is an empirical matter requiring further exploration. future work will include more sophisticated methods for verb sense disambiguation and methods of acquiring seeds, whose acquisition is currently based on an existing dictionary. we will also build an experimental database for natural language processing using our example sampling method. the authors would like to thank dr. manabu okumura (jaist, japan), mr. timothy baldwin (titech, japan), and dr. michael zock and dr. dan tufis (limsi, france) for their comments on an earlier version of this paper.
this paper proposes an efficient example selection method for example-based word sense disambiguation systems. to construct a practical-sized database, a considerable overhead for manual sense disambiguation is required. our method is characterized by its reliance on the notion of training utility: the degree to which each example is informative for future example selection when used for the training of the system. the system progressively collects examples by selecting those with the greatest utility. the paper reports on the effectiveness of our method through experiments on about one thousand sentences. compared to experiments with random example selection, our method reduced the overhead without degrading the performance of the system.
type i collagen fibrils , cable - like assemblies of long biological molecules , the so - called triple helices , are the major constituents of connective tissues .electron microscopy and atomic force microscopy provide images of fibrils which show that : + - they are not smooth , as most of the bundles of fibres built by biopolymers , but are striated all along their length with a constant period nm , + - the radii of their circular cross sections are in general distributed in a range going from to nm , with a few exceptions of about or nm . these characteristics are common to all type i collagen fibrils whatever their origin , conditions of extraction , preparation and observation , in vitro as well as in vivo .this suggests that the lateral size of a fibril , once its growth has been triggered by external biological factors , is mostly controlled by the evolution of its internal structure . in other words ,the free energy of a fibril as function of its radius should contain , in addition to the sum of the cohesive and interfacial terms , whose minima correspond to molecular dispersion or mass precipitation only , a third intrinsic term with to avoid mass precipitation and limit the growth .the first candidate to be thought of is of course the double - twist induced by the chirality of the triple helices as its propagation in our euclidean space generates elastic stresses which can play such a role .this has been shown for smooth bundles of fibres ordered along a hexagonal lattice , but collagen fibrils are striated and their triple helices , although densely packed , do not present such an order .the striations shown by the fibrils result from specific interactions between triple helices leading to a regular shift , the hodge petruska ( hp ) staggering , with the alternate overlap and gap regions shown in figure [ f1 ] .the segment of triple helices are densely organized in the overlap regions , but not in the gap regions where one segment out of five is replaced by a vacancy .the distribution of those vacancies in the plane of figure [ f1 ] and out of it determines a layered periodic structure made visible by the striations .such a periodic layering is not compatible with a double - twist which would impose a variation of the layer thickness .this problem was addressed in the case of smectic phases of chiral mesogenic molecules and the solution proposed was that of a double twist texture with the local smectic order preserved in coaxial domains separated by cylindrical screw dislocation walls .however , the layers of type i collagen fibrils are different from those of liquid crystalline smectic phases . owing to the fact that they are built by triple helices much longer than the layer thickness , these layers are not liquid , can not glide on each other and their number along a fibril is imposed by that on its axis .we examine here how a system of long triple helices could conciliate double - twist and layering in the best manner possible analyzing the geometrical distortions imposed by the coexistence of these two terms . in order to consideronly the perturbation brought by the layering , without interfering with those related to the propagation of the double - twist in our euclidean space and molecular extension , we first build a template using the hopf fibration of the hypersphere whose fibres of constant length are organized with a uniform double - twist ( see appendix a ) . 
we also describe the transverse dense organization of the triple helices in this template with the algorithm of phyllotaxis which ensures the best packing efficiency in a situation of circular symmetry ( see appendix b ) . we finally show that the interplay between the development of an axial layering in presence of a constant double - twist and the edge dislocations naturally present in a phyllotactic pattern can limit the growth of the template built in and that this is maintained when this curved space is projected onto the flat space .the fibres of the hopf fibration in a hypersphere of radius , where is the pitch of the double twist , are great circles of length .several periodic configurations of layers can be superposed on this fibration , either the simple stacking of flat layers normal to the axis of the fibration or helicoidal layers tilted relative to this axis .are indeed determined by surfaces whose distance varies when increases , but this occurs largely beyond the values considered here . ] the traces of such layers determine strips on the rectangle representative of torus supporting the fibres at an angle from the axis , as drawn on figure [ f2 ] .the strips must be drawn in order to respect the continuity of the layers when identifying two by two the sides of the rectangle to build the torus . moreover , being connected by the triple helices crossing them , their number of intersections with each fibre , or diagonal , must stay constant equal to that measured along the axis with layers normal to it . supporting fibres at an angle from the torus axis with the traces of layers normal to this axis ( a ) and those of helicoidal layers with pitches of one ( b ) or two ( c ) strips , the tilt angle and on this figure . ]these periodic configurations are labeled according to the number of strips defining the pitch of the helix drawn on torus . if the number of strips intersecting the diagonal is the same for all of them , their numbers of intersections with the horizontal and vertical sides of the rectangle vary as and respectively .the tilt of the fibres relative to the layer normal can then be written as where and its variations are shown in figure [ f3 ] for a double - twist pitch nm deduced from the observations described in a next section .of the fibres with respect to the layer normal as increases in configurations for corresponding to a double - twist pitch of about nm . ] for the tilt of the fibres increases linearly , for it decreases to zero for .those curves show that too large an increase of the tilt as increases in one configuration can be avoided changing for configuration .these variations in one configuration , or when moving from one configuration to the following , should be favored or not by the local organization of the triple helices as they imply a shear of the hp staggering along their common direction as shown in figure [ f4 ] .the proposition of the hp staggering was advanced considering that lateral chemical bonds between triple helices are able to build the regular shift shown on figure [ f1 ] . 
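as a point of reference for figure [ f3 ], the snippet below evaluates the generic double-twist relation tan(alpha) = 2 pi r / P for the tilt of a fibre at distance r from the axis, with r expressed in units of the pitch P so that no specific value has to be assumed; the configuration-dependent tilt relative to the layer normal, whose explicit expression is not reproduced above, is not reconstructed here.

```python
import numpy as np

# generic uniform double twist: a fibre at distance r from the axis is tilted by
# alpha(r) with tan(alpha) = 2*pi*r / P (P = double-twist pitch). used here only
# to illustrate the near-linear growth of the tilt at small r.
for r_over_P in (0.02, 0.05, 0.10, 0.15):
    alpha = np.degrees(np.arctan(2.0 * np.pi * r_over_P))
    print(f"r/P = {r_over_P:.2f}  ->  fibre tilt alpha = {alpha:5.1f} degrees")
```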
owing to the fact that triple helices have a rotation symmetry of order 3 , similar drawings can be built in planes at and from that of figure [ f1 ] so that the gap and overlap regions be distributed coherently along the fibril axis to build the layering at the origin of the striations , the hp staggering would develop bidimensional sheets of large extension , it was indeed first proposed that each period of the hp staggering closes onto itself forming a cylindrical microfibril containing five triple helices and that such microfibrils assemble to build a fibril , this process is now questioned . ] .the interactions stabilizing this organization certainly restrain the relative displacements of triple helices along their lengths required if the layered structuring in the hopf fibration evolves as described above .the energy cost associated with shears could be lowered , eventually suppressed , if the propagation of lateral bonds between triple helices was interrupted so that they can slide along each other .the phyllotactic model recently proposed to describe the dense packing of triple helices in fibrils with circular cross section provides such an opportunity . or shears required when the triple helices are tilted with respect to the layer normal ( a ) or when a change of configuration occurs ( b ) , the black lines represent the direction of the triple helices . ]the distribution of points representing the hopf fibration on its spherical basis is that of the phyllotactic pattern built on this surface by the algorithm of phyllotaxis .the representative points of closely bonded triple helices in a hp staggering are to be aligned along the parastiches of this pattern , the lines of shortest distances between neighbor points , and so are the interactions building this hp staggering. the cohesion of the assembly should therefore be strong within hexagonal grains where the symmetries of the triple helices and their environment are coherent , each point being at the intersection of three parastiches in a hexagonal voronoi cell with six close neighbors . in the vicinity of grain boundaries , where the hexagons are strongly distorted towards a shape quite close to that of a square orare transformed into heptagons or pentagons , the molecular and local symmetries are no longer coherent .the parastiches , and the propagation of the interactions aligned along them are perturbed along grain boundaries , as shown in figure [ f5 ] , and the cohesion of the assembly should be weakened .triple helices would therefore be free to slide along each other , making changes of configurations possible , on torii whose generator circles are grain boundaries . 
as those changes are also expected to take place in between two cancelation points of the tilt ,the positions of the grain boundaries are compared with those of the cancelation points in figure [ f6 ] .the core of the pattern , up to grain boundary 21 , shows a high density of defects and therefore has a low cohesion .a simple tilt can grow easily without need for a change of configuration .when increases beyond this core , the defects concentrate in well individualized grain boundaries delimiting grains without defects , hence with a higher cohesion .this constrains the growth of the tilt and calls for a change of configuration on the first grain boundary met .however , while the separation between two consecutive cancelation points of the tilt decreases that of grain boundaries increases and changes of configuration become less and less possible as increases . compared with that of the grain boundaries ( gb ) and with the radius of the torus supporting the fibres .the positions of the grain boundaries , identified by their number of dislocations , are obtained from a phyllotactic pattern built on the spherical basis of a hopf fibration with the double - twist pitch nm and a distance between site nm . ] for instance , grain boundaries and would favor the changes of configuration to and to respectively , but grain boundary is too far to favor that from to and even that from to . in that zone , where configuration changes beyond at the most are hindered , the tilt should be forced to increase almost linearly with , so that the energy associated with it , , would vary as , or .this would provide the term which , being added to those of volume and surface varying as and mentioned in the introduction , is needed to limit the growth of the fibrils . from figure [ f6 ] , this limitation would occur around a radius of about nm for the template built in a hypersphere containing a hopf fibration with a double - twist pitch nm .a value of which corresponds to most of the radii observed in our space .the stereographic projection is the simplest way to carry out the transfer from a curved space to a flat one .this conformal projection does not affect the topology , a torus in is projected as a torus in . 
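the growth-limitation argument can be made concrete with a toy free energy per unit length. in the sketch below the cohesive term scales as -R^2, the interfacial term as +R, and the frustration term is taken, for illustration, to scale as +R^4 (a tilt growing linearly with R presumably gives an elastic energy density growing as R^2, hence a contribution growing as R^4 once integrated over the cross section); all coefficients are arbitrary, so only the existence of a finite minimizing radius is meaningful, not its value.

```python
import numpy as np

# toy free energy per unit length of a fibril of radius R (arbitrary units):
#   cohesion    ~ -a * R**2   (favours unlimited lateral growth)
#   interface   ~ +b * R      (penalises small aggregates)
#   frustration ~ +c * R**4   (illustrative exponent, see the lead-in above)
a, b, c = 1.0, 0.2, 2.0e-4

R = np.linspace(1e-3, 120.0, 12000)
F = -a * R**2 + b * R + c * R**4
R_star = R[np.argmin(F)]
print(f"radius minimising the toy free energy: R* = {R_star:.1f}")
print(f"analytic estimate sqrt(a / (2 c))    : {np.sqrt(a / (2.0 * c)):.1f}"
      "  (interface term neglected)")
```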
choosing the pole of projection on the axis and a projection space containing the axis in onecoordinate plane , as in figure [ f9 ] of appendix a , this axis stays unchanged and the radius of the generator circle of the projection behaves as which stays close to in for the small values of considered here .this enables a direct comparison with the fibrils described below .this remarkable morphology shown in figure [ f7 ] was obtained by precipitation of calf - skin tropocollagen in vitro and studied by electron microscopy .striations as well as longitudinal traces making a complete turn around the director circle of the torus are quite discernable on the micrographs .the second shows the existence of a double - twist with the topology of a hopf fibration .the period of the striations is of nm , as for any fibril , and the perimeter of the director circle , which is therefore the pitch of the double - twist , varies from to nm , hence our choices of nm for the radius of and the number of layers created by the hp staggering in the preceding sections .the radii of the cross sections of those toroidal fibrils , when they can not be suspected to have been disrupted along their perimeter during drying , lies in between and nm in good agreement with the above expectation .toroidal fibrils can also be formed by other biopolymers such as polypeptides having diameters close to that of collagen triple helices .an organization of fibres having the topology of the hopf fibration is clearly visible on some micrographs shown in this reference , but always without any striation , hence without layering . comparing toroidal fibrils with equivalent director circles , the radii of the generator circles of these built by polypeptides are more than two times larger than that those built by collagen , as if , owing to the absence of layering , the growth of the first was not limited so early than that of the second . in the absence of a simple method to transform a torus in into an infinite cylinder in ,we just transfer the organization of fibres and layers built in the toroidal template along a straight cylinder whose director circle has the radius of the generator circle of the torus . as straight fibrils in tendonshave been the objects of many x - ray scattering structural studies , this open the possibility to confront the results of the model with those observations .first , the lateral size obtained along this approach is such that the fibril cross section is mostly occupied by the domain of the phyllotactic pattern proposed in in which the long range ordering of triple helices is compatible with that deduced from x - rays scattering studies .second , x - ray diffraction patterns are also characterized by scatterings with fan - like shapes along the equatorial and meridian directions .an example is shown in figure [ f8 ] . in most of the studies ,the fans around the equator , to be associated with the transverse organization of triple helices , are rather diffuse with opening angles from to . 
and those around the meridian , to be associated with the longitudinal striations , is better defined with smaller opening angles from to .the opening angles being different , these fan - like shapes can not have their origins in the disorientation of the samples .this suggests that the fibres and the layers are oriented differently , as proposed by our model where the angles and may differ .for instance , on the periphery of a fibril with a radius nm , the triple helices should be oriented at rad or from the fibril axis whereas the layers of configuration should be oriented at rad or . also , while the orientation of the fibres increases continuously with the distance from the axis , that of the layers vary by jump each time the configuration changes . however , as the tendons contain fibrils of different sizes , a precise relation between angles and radii can not be deduced from these x - rays studies .the molecular interactions determining the assembly of triple helices in type i collagen fibrils are of rather complex nature and exert their actions in constrained situations . for instance the circular symmetry imposed by the interfacial tension prevents the propagation of a transverse crystalline order and the double - twist configuration associated with the molecular chirality of the triple helices is not compatible with the periodic axial layering issued from their hp staggering .the structure of the fibrils is most likely the result of quite subtle competitions between different terms whose exact knowledge is rather poor at the moment .this situation precludes any quantitative search for compromises and we limited our examination of this structural problem to its geometrical foundation .we introduced a periodic axial layering into a template in which triple helices of constant length develop a uniform double - twist and are densely packed according to the spirals of a phyllotactic pattern . 
as the radius of template increases , the periodic layering can not be preserved without moving from planar to helicoidal configurations around the axis of the template and this requires shearing the triple helices along their common direction .such relative displacements can not be obtained without going against the lateral bindings of the triple helices along the shearing surface .fortunately , edge dislocations naturally present in the spirals of a phyllotactic pattern , along which triple helices are not connected to some of their neighbors , provide such opportunities .those dislocations are concentrated along circular grain boundaries which are more and more distant as the radius of the template increases so that the periodic layering can not be preserved for a lateral size larger than that shown by real fibrils .such an intrinsic stress , as well as that generated by the propagation of the double twist , would therefore contribute to the control of the growth without calling for external factors of control .in spite of this agreement , grey areas remain which call for appropriate experimental studies .they concern the internal structure of fibrils with well characterized radii , including the propagation of the hp staggering , and the relation between twist and radius imposed by the template .in addition to the experimental studies already quoted in the article , a few others might open such directions : + - electron microscopy of tendon normal sections showing a spiral lateral organization of the triple helices , + - atomic force microscopy of fibrils deposited on a substrate showing tilted striations and twisted triple helices at the periphery of fibrils with large radii , + - electron microscopy putting in light a growth from pointed paraboloidal tips eventually susceptible to be related to the organization of the hp staggering , + - mechanical studies such as those described in , or on individual fibrils with optical twitters , which would give access to the elastic constants needed to develop a thermodynamical approach . beyond its eventual interest for the conception of artificial tissues ,such a program is also justified as a contribution to the analysis of the respective roles of genetics and physical chemistry in building the morphologies needed for biological materials to fill their functions .a stereographic projection of the hypersphere with one family of hopf fibres onto the euclidean space is shown in figure [ f9 ] . with toroidal coordinates ( a ) , of the family of hopf fibres supported by those torii ( b ) and the double - twist of the local surrounding along every fibre of the family ( c ) . ] the fibres , great circles of , can be drawn on nested parallel torii characterized each by an angle , those with and are reduced to great circles and are the axes of the fibration .the fibres are enlaced , each one making one turn around the others .torus and its fibres are at a distance from the axis where is the radius of , such a torus can be built in identifying two by two the opposite sides of the rectangle shown in figure [ f10 ] . and ( a ) is folded into a torus ( b ) in .a diagonal of the rectangle and one of its parallel lines become two enlaced great circles of length on the torus , fibres at an angle from its axis . 
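the construction of appendix a can be reproduced explicitly. the snippet below uses the standard parametrization of the hopf fibres to generate two fibres on the same torus of the unit 3-sphere, checks that they are great circles of equal length which stay at a constant distance from each other (clifford parallelism), and stereographically projects them to flat space; the radius of the hypersphere is set to one for the illustration.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2001)

def hopf_fibre(omega, phi1, phi2):
    """a hopf fibre lying on the torus labelled by omega in the unit 3-sphere."""
    return np.stack([np.cos(omega) * np.cos(t + phi1),
                     np.cos(omega) * np.sin(t + phi1),
                     np.sin(omega) * np.cos(t + phi2),
                     np.sin(omega) * np.sin(t + phi2)], axis=1)

def arc_length(curve):
    return np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()

def stereographic(curve):
    """projection to flat 3-space from the pole (0, 0, 0, 1)."""
    x, y, z, w = curve.T
    return np.stack([x, y, z], axis=1) / (1.0 - w)[:, None]

f1 = hopf_fibre(omega=0.4, phi1=0.0, phi2=0.0)
f2 = hopf_fibre(omega=0.4, phi1=1.0, phi2=2.0)

print("fibres lie on the unit 3-sphere:", np.allclose(np.linalg.norm(f1, axis=1), 1.0))
print("fibre lengths (both 2*pi)      :", round(arc_length(f1), 4), round(arc_length(f2), 4))

# clifford parallelism: the geodesic distance from any point of f1 to f2 is constant
d = np.arccos(np.clip(f1 @ f2.T, -1.0, 1.0)).min(axis=1)
print("distance f1 -> f2: mean = %.4f, spread = %.1e" % (d.mean(), d.max() - d.min()))

p1, p2 = stereographic(f1), stereographic(f2)   # two enlaced closed loops, ready to plot
```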
] following this identification , the fibres of torus , all at the distance from the axis , are all oriented at the same angle with respect to this axis and make a complete turn around it keeping a constant distance between themselves .if another set of and axes is chosen among the great circles of at a distance from each other , the same situation is reproduced so that any two fibres keep a constant distance between themselves , the fibres are also called clifford parallels .this is a perfect double - twist configuration with a pitch .the fibres being parallel , their organization in the hypersphere can be simply represented by a distribution of points on a sphere , the basis of the hopf fibration , in a way similar to the representation of a set of parallel straight lines in by points on a plane .the shape of the cross section of a dense fiber bundle is expected to reflect the symmetry of its molecular packing . however, this statement is belied by type i collagen fibrils which show a circular cross section while structural studies suggest that their molecules can be assembled with some long range lateral order .we recently examined how the iterative process of phyllotaxis , a non conventional crystallographic solution to packing efficiency in situations of high radial symmetry , could establish a link between those two apparently conflicting points .a phyllotactic organization of points indexed by is built by an algorithm such that the position of point is given by its polar coordinates and that is which is the equation of a fermat spiral , the generative spiral .the area of the circle of radius which contains points is so that the area per point has the value , indeed it oscillates close to this value for small s then converges towards it .the most homogeneous and isotropic environment , or the best packing efficiency in radial symmetry , is obtained with where is the irrational golden ratio .a sector of a phyllotactic pattern for points on a plane with their voronoi cells is shown in figure [ f11 ] .in such a pattern , pentagons and heptagons are topological defects distributed among the hexagons .they appear concentrated in narrow circular rings with constant width separating large rings of hexagons whose width increases as one moves from the core towards the periphery . in the narrow rings , pentagons and heptagonsare associated in dipoles separated by hexagons whose shape is close to that of a square with two corners cut .the rings of dipoles are indeed grain boundaries separating hexagonal grains and the dipoles are dislocations introducing the new parastichies needed to maintain the density as constant as possible .the radii of those rings of dipoles tend to follow the fibonnacci series , from and , which makes the organization self - similar , invariant by a change of scale .the evolutions of the distances between first neighbor points , as measured along the three parastichies , are shown in figure [ f12 ] .parameter adjusted to have a mean distance close to nm . the three colours correspond to the three parastichies , the upper and lower crossings on the same verticals correspond to grain boundaries and the intermediate ones to the cores of hexagonal grains . 
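the planar algorithm of appendix b is straightforward to reproduce. the snippet below places points on a fermat spiral with the golden-angle increment and uses a voronoi construction to count how many interior cells are pentagons, hexagons or heptagons; the pentagon-heptagon dipoles described above appear as the non-hexagonal cells. scipy is assumed to be available, and the number of points and the cut on the outer boundary are arbitrary choices.

```python
import numpy as np
from scipy.spatial import Voronoi

n = 3000
golden_angle = 2.0 * np.pi * (1.0 - 2.0 / (1.0 + np.sqrt(5.0)))   # about 137.5 degrees
l = np.arange(1, n + 1)
r = np.sqrt(l)                        # fermat spiral: constant area per point
theta = l * golden_angle
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

vor = Voronoi(pts)
sides = {5: 0, 6: 0, 7: 0}
interior = 0
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if not region or -1 in region:                  # open cell on the outer boundary
        continue
    if np.linalg.norm(pts[i]) > 0.8 * r.max():      # also skip the outermost ring
        continue
    interior += 1
    if len(region) in sides:
        sides[len(region)] += 1

print("interior cells                     :", interior)
print("pentagons / hexagons / heptagons   :", sides[5], sides[6], sides[7])
print("fraction of defect (5- or 7-) cells: %.3f" % ((sides[5] + sides[7]) / interior))
```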
]the algorithm of phyllotaxis has also been developed on the sphere with the aim of applying it for the spherical basis of the hopf fibration .patterns of concentric grains separated by circular grain boundaries similar to that described on the plane are obtained and the perimeters of the grain boundaries are independent of the curvature .it results from this last point that the distances of the grain boundaries to the pole measured on the sphere vary with its curvature .g. m. grason , _ phys .e _ 79 , 041919 ( 2009 ) and _ phys .lett . _ * 105 * , 045502 ( 2010 ) .j. charvolin and j .- f .sadoc , _ biophysical reviews and letters _ , * 8 * , 33 - 49 ( 2013 ) .r. d. kamien , _ j. phys. ii france _ * 7 * , 743 - 750 ( 1997 ). j. charvolin et j .- f .sadoc , `` tores et torsades '' , collection savoirs actuels , _ edp sciences et cnrs editions _ ( 2011 ) or j. charvolin and j .- f. sadoc , _ eur .j. e _ * 25 * , 335 - 341 ( 2008 ) .a. cooper , _ biochem .j. _ * 112 * , 515 - 519 ( 1969 ) .j. j. b. p. blais and p. h. geil , _j. ultrastr .res . _ * 22 * , 303 - 311 ( 1968 ) . t. j. wess , a. p. hammersley , l. wess and a. miller , __ * 248 * , 487 - 493 ( 1995 ) .d. j. mcbride jr , v. coe , j. r. shapiro and b. brodsky , _ j. mol .biol . _ * 270 * , 275 - 284 ( 1997 ) .t. j. wess and j. p. orgel , _ thermochimica acta _ * 365 * , 119 - 128 ( 2000 ) .j. h. laing , j. p. orgel , j. dubochet , a. al - amoudi , t. j. wess , g. j. cameron and c. laurie , _fibre diffraction review _ * 11 * , 119 - 122 ( 2003 ) .j. doucet , f. briki , a. gourrier , c. pichon , s. bensamoun and j .- f .sadoc , _ jour . of struc .biol . _ * 173 * , 197 - 201 ( 2011 ) .d. j. s. hulmes , j .- c .jesior , a. miller , c. berthet - colominas and c. wolff , _ proc .usa _ 78 , 3567 - 3571 ( 1981 ) .m. p. e. wenger , l. bozec , m. a. horton and p. mesquida , _ biophys .j. _ * 93 * , 1255 - 1263 ( 2007 ) .d. j. prockop and a. fertala , _ jour . of struc . biol . _* 122 * , 111 - 118 ( 1998 ) .l. bozec , g. van der heijden and m. horton , _ biophys . j. _ * 92 * , 70 - 75 ( 2007 ) .f . sadoc and j. charvolin , _ j. phys . a : math .theor . _ * 42 * 465209 ( 2009 ) . j .-sadoc , n. rivier and j. charvolin , _ acta cryst . a _ * 68 * , 470 - 483 ( 2012 ) .i. n. ridley , _ mathematical biosciences _ * 58 * , 129 - 139 ( 1982 ) . j .-sadoc , j. charvolin and n. rivier , _gen.__**46 * * 295202 ( 2013 ) .
type i collagen fibrils have circular cross sections with radii mostly distributed in between and nm and are characterized by an axial banding pattern with a period of nm . the constituent long molecules of those fibrils , the so - called triple helices , are densely packed but their nature is such that their assembly must conciliate two conflicting requirements : a double - twist around the axis of the fibril induced by their chirality and a periodic layered organization , corresponding to the axial banding , built by specific lateral interactions . we examine here how such a conflict could contribute to the control of the radius of a fibril . we develop our analysis with the help of two geometrical archetypes : the hopf fibration and the algorithm of phyllotaxis . the first one provides an ideal template for a twisted bundle of fibres and the second ensures the best homogeneity and local isotropy possible for a twisted dense packing with circular symmetry . this approach shows that , as the radius of a fibril with constant double - twist increases , the periodic layered organization can not be preserved without moving from planar to helicoidal configurations . such changes of configurations are indeed made possible by the edge dislocations naturally present in the phyllotactic pattern where their distribution is such that the lateral growth of a fibril should stay limited in the observed range . because of our limited knowledge about the elastic constants involved , this purely geometrical development stays at a quite conjectural level . submitted to _ biophysical reviews and letters _
cosmic inflation is a great idea to solve some cosmological problems and to predict the fine fluctuations of cosmic microwave background ( cmb ) .hitherto the surviving and most economical model of inflation involves a single scalar field slowly rolling down its effective potential , with a canonical kinetic term and minimally coupled to the einstein gravity. we will call it the simplest single - field inflation , although there is still freedom to design its exact potential .the single - field inflation passed the latest observational test successfully , even with the simplest quadratic potential .nevertheless there are perpetual attempts to modify the simplest single - field inflation .some of them are motivated by incorporating inflation model into certain theoretical frameworks , such as the standard model of particle physics or string theory .some others put their stake on signatures that can not appear in the simplest single - field model , such as a large deviation from the gaussian distribution in the cmb temperature fluctuations . among these modifications ,the two - field slow - roll inflation is the most conservative one , at least in my personal point of view .it introduces another scalar field rather than a non - conventional lagrangian such as non - canonical kinetic terms or modifications of gravity .it also retains the slow - roll condition , which makes the model simple and consistent with the observed cmb power spectrum . if both conventional lagrangian and non - conventional lagrangian are adaptable to the observational data , then the model with conventional lagrangian would be more acceptable , unless there are better and solid theoretical motivations for non - conventional lagrangian . on the observational side ,two new features arise in two - field model .first , the model is able to leave a residual entropic perturbation between the fluctuations of dark matter and cmb .second , in a simple model with quadratic potential , numerical computations found that the non - gaussianity can be temporarily large at the turn of inflation trajectory in field space .longer - lived large non - gaussianities were discovered recently by in many other two - field models . compared with the simplest one - field inflation ,the field space becomes two - dimensional in a two - field model .when the inflation trajectory is curved in field space , the entropic perturbation will be coupled to the adiabatic perturbation .so there are more uncertainties in calculation of cosmological observables , such as power spectra of cmb and their indices. it would be more complicated to honestly compute the bispectra and non - linear parameters , which reflect the non - gaussianity of the primordial fluctuations .fortunately , based on the extended -formalism , vernizzi and wands invented an analytic method to estimate such non - gaussianities .they demonstrated the power of this method in a two - field model with additive separable potentials .this method was later applied by choi _et al_. to a model with multiplicative separable potentials .encouraged by the method of vernizzi and wands , we tried to improve it for the two - field slow - roll model with generic potentials but failed .finally , we only designed a larger class of models whose non - gaussianity can be estimated by this method .it is a class of models whose potential take the form with or . 
here , and are arbitrary functions of the indicated variables as long as the slow - roll condition is satisfied .scalar fields and are inflatons .the outline of this paper is as follows . in our convention of notations, we will prepare some well - known but necessary knowledge in section [ sect - preparation ] concisely . in section [ sect - hunt ] , we will present the exact form of our models , whose non - linear parameters will be worked out in sections [ sect - modeli ] and [ sect - modelii ] .some specific examples are investigated in section [ sect - examples ] .we summarize the main results of this paper in the final section .this is a note concerning references .some of our techniques stem from these references or slightly generalize theirs .sometimes we employ the techniques with few explanation if the mathematical development is smooth . to better understand them ,the readers are strongly recommended to review the relevant parts of .we are interested in inflation models described by the following action .\ ] ] because of the appearance of , the field has a non - standard kinetic term . following the notation of slow - roll parameters defined in the slow - roll condition can be expressed as , , with . as an aside , we mention that model is equivalent to the generalized gravity when .but then we find , which violates the the slow - roll condition .this is a pitfall in treating generalized gravity as a two - field model .this pitfall can be circumvented by the scheme in . under the slow - roll condition ,the background equations of motion are very simple using them one may directly demonstrate observationally , the most promising probe of primordial non - gaussianities comes from the bispectrum of cmb fluctuations , which is characterized by the non - linear parameter . if , it would be detectable by ongoing or planned satellite experiments .it has been shown in that the non - linear parameter in two - field inflation models can be separated into a momentum dependent term and a momentum independent term it is also proved in that the first term is always suppressed by the tensor - to - scalar ratio , leading to .hence this term is negligible in observation . for action, the second term may be large and deserves a closer look . here is the -folding number from the initial flat hypersurface to the final comoving hypersurface . to evaluate, we will work out the derivatives of with respect to and in the next section , focusing on a class of analytically solvable models .making use of equations , the -folding number can be cast as hence is an arbitrary function of and in principle , because along any classical trajectory under the slow - roll condition .however , for a given , we have to choose a suitable form of so that the integrations defined by in can be performed .later on we will fix to meet the ansatz for simplicity .but for the moment let us leave it as an arbitrary function of and . it is straightforward to obtain the first order partial derivatives akin to , we define an integral of motion along the trajectory of inflation here the explicit form of is determined by scalar potential . we will give the expression of for some types of potential in this section .if we fix the limits of integration to run from to , then due to the background equations , along classical trajectories under the slow - roll approximation .so the constant parameterizes the motion off classical trajectories . 
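as a purely numerical companion to the slow-roll background equations and to the definition of the e-folding number used above, the sketch below integrates the slow-roll flow for a two-field model and returns the number of e-folds from a given initial point to the surface where the total slow-roll parameter reaches unity. the particular potential chosen here, and the neglect of the field-dependent kinetic prefactor appearing in the action, are simplifying assumptions made only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

M_P = 1.0                        # reduced Planck units

# illustrative sum potential W = U(phi) + V(chi); the models of this paper
# allow more general forms, which are not reproduced in this sketch
def W(phi, chi):     return 0.5 * phi**2 + 0.1 * chi**4
def W_phi(phi, chi): return phi
def W_chi(phi, chi): return 0.4 * chi**3

def eps(phi, chi):
    """total slow-roll parameter: M_P^2 (W_phi^2 + W_chi^2) / (2 W^2)"""
    w = W(phi, chi)
    return 0.5 * M_P**2 * (W_phi(phi, chi)**2 + W_chi(phi, chi)**2) / w**2

def rhs(N, y):
    """slow-roll flow in e-folds: dphi/dN = -M_P^2 W_phi / W, same for chi"""
    phi, chi = y
    w = W(phi, chi)
    return [-M_P**2 * W_phi(phi, chi) / w,
            -M_P**2 * W_chi(phi, chi) / w]

def end_of_inflation(N, y):
    return eps(*y) - 1.0
end_of_inflation.terminal = True

def efolds(phi_star, chi_star, N_max=200.0):
    """e-folding number N(phi_*, chi_*) up to the surface epsilon = 1"""
    sol = solve_ivp(rhs, (0.0, N_max), [phi_star, chi_star],
                    events=end_of_inflation, rtol=1e-10, atol=1e-12)
    return sol.t[-1]

print("N(14, 6) =", efolds(14.0, 6.0))
```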
in order to know , , , in, we should calculate the first order derivatives of on the initial flat hypersurface , differentiating with respect to , it gives on large scales , the comoving hypersurface coincides with the uniform density hypersurface .this implies under the slow - roll condition whose differentiation with respect to is combined with on the final comoving surface , it could give the solution for and .this is in general difficult analytically . to overcome the difficulty, we introduce an ansatz : although we are free to design the function , the above condition is not always satisfiable .we have hunted for analytical models meeting this condition , and found it is achievable if with or .here , and are arbitrary functions of the indicated variables as long as the slow - roll condition is satisfied . in this paper , we will pay attention to this situation .but it is never excluded that there might be other situations in which and are solvable from and , even if ansatz is violated .ansatz simplifies our discussion significantly . once it holds , equations and lead to while is reduced as as a result , the partial derivatives of take the form in these equations , we have adopted the notations in the above , the expression of and its derivatives involve nuisance integrals . to further simplify our study, we utilize one more ansatz in favor of this ansatz , we have and so do its derivatives .as was mentioned , ansatz can be satisfied by special forms of potential .now ansatz further constrains the form of and .let us discuss it in details case by case . for this class of models, according to , we set while condition is met by or hereafter , as free parameters in our models , , , , , and are arbitrary real constants .the normalization of is fixed for simplicity .this is always realizable by rescaling the field .taking , model recovers the well - studied sum potential , to which we will return in subsection [ subsect - sumpot ] . in subsection[ subsect - nspotii ] , we will study a specific example of non - separable potential that corresponds to in . as will be discussed in subsection [ subsect - ieqii ], there is an equivalence relation between case i in this subsection and case ii in the next subsection .models in class i can be transformed to those in class ii , and _ vice versa_. we will translate model to a nicer form and explore it .for this class of models , we take then condition is satisfied .condition can be met by or we observed that , and can be obtained from , and perfectly by the following replacement : in fact , there is a general equivalence relation between case i and case ii , on which will be elaborated in subsection [ subsect - ieqii ] .equation dictates implicitly as a differential equation . to obtain the explicit form of , one should solve the equation .this could be done analytically in some corners of the parameter space .for instance , setting , equation gives however , if , it leads to a larger class of model leaving as an arbitrary function of .model or is separable and can be seen as the well - studied product potential .more discussion on models with product potential will be given in subsection [ subsect - prodpot ] .in the case that and , we find another model in subsection [ subsect - nspotii ] , we will study an example of non - separable potential which corresponds to in . 
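the first-order derivatives of the e-folding number entering the delta-N expansion can also be obtained without any ansatz, by finite-differencing the numerical N(phi_*, chi_*) of the previous sketch; this provides an independent cross-check of the analytic expressions derived above. taking the epsilon = 1 surface as the final surface is an approximation to the comoving (uniform-density) surface used in the text.

```python
def dN_first(phi_star, chi_star, h=1e-4):
    """N_{,phi*} and N_{,chi*} by central differences, using the
    efolds() routine defined in the previous sketch"""
    N_p = (efolds(phi_star + h, chi_star) - efolds(phi_star - h, chi_star)) / (2 * h)
    N_c = (efolds(phi_star, chi_star + h) - efolds(phi_star, chi_star - h)) / (2 * h)
    return N_p, N_c

N_p, N_c = dN_first(14.0, 6.0)
# in the delta-N formalism the curvature power spectrum carries the
# geometric factor N_a N^a = N_{,phi*}^2 + N_{,chi*}^2
print("N_,phi* =", N_p, "  N_,chi* =", N_c, "  N_a N^a =", N_p**2 + N_c**2)
```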
since is an arbitrary real constant , equation can generate many other forms of potential .for example , when and , we get a model we have classified our models into two categories , corresponding to subsections [ subsect - casei ] and [ subsect - caseii ] . in case i, the potential is a function of sum . in caseii , the potential is a function of product .after the non - dimensionalization , case i can be translated to case ii by the transformation the last relation in is a corollary of the former ones because . on the other hand , via transformation , an arbitrary potential of case i can be transformed to that of case ii .so the two `` cases''are just two different formalisms for studying the same models .they are equivalent to each other .we are free to study a model in either formalism contingent on the convenience .for instance , using the formulae in this section , a model with potential and prefactor can be studied in two different formalisms : * formalism i : , with , , . *formalism ii : , with , , .but apparently , for this model the calculation will be easier in formalism ii .because the dependence of and on and is unaltered , the quantization of perturbations is not affected by the choice of formalism . for the same reason ,the exact dependence of on and is the same in both formalisms .this model is given by , which is equivalent to model .corresponding to this model , the number of -foldings and the integral constant along the inflation trajectory are we have defined the slow - roll parameters in . in the present case , they are of the form now equations and become while the function defined by takes the form then we get the partial derivatives of with respect to and , in terms of .\end{aligned}\ ] ] with the above result at hand , it is straightforward to calculate ,\\ \nonumber & & n_{,\varphi_*\chi_*}=-\frac{2w^{*2}\mathcal{a}}{\alpha m_p^4u^*_{,\varphi}v^*_{,\chi}},\\ & & n_{,\chi_*\chi_*}=\frac{1}{\alpha m_p^2}\left[\left(1-\frac{\eta^*_{\chi\chi}}{2\epsilon^*_{\chi}}\right)\alpha v+u+\frac{\alpha^2}{\epsilon^*_{\chi}}\mathcal{a}\right],\end{aligned}\ ] ] where for convenience we used notations for these notations , the relation holds . in the next section ,the definitions of and are different , but the same relation also holds . as a result ,using formula we get the main part of non - linear parameter in this model \\ & & + \frac{v^2}{\epsilon^*_{\chi}}\left[\left(1-\frac{\eta^*_{\chi\chi}}{2\epsilon^*_{\chi}}\right)\alpha v+u\right]+\left(\frac{u}{\epsilon^*_{\varphi}}+\frac{v}{\epsilon^*_{\chi}}\right)^2\alpha^2\mathcal{a}\biggr\}.\end{aligned}\ ] ] the non - linear parameter depends on the exponent in a complicated manner . for the purpose of rough estimation , we assume both and are of order unity .this assumption is reasonable if , and are of the same order .it is also consistent with the relation .furthermore , motivated by the slow - roll condition and the observational constraint on spectral indices , we assume the slow - roll parameters are of order . in saying this we mean all of the slow - roll parameters are of the same order , which is a strong but still allowable assumption . 
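the momentum-independent part of the non-linear parameter can likewise be estimated numerically from the generic delta-N relation (6/5) f_NL = N_a N_b N_ab / (N_c N^c)^2, with the first and second derivatives of the e-folding number computed by finite differences. this is a model-independent cross-check of the closed-form expression above, again under the simplifying assumptions of the earlier sketches (canonical kinetic terms, epsilon = 1 end surface, efolds() in scope).

```python
import numpy as np

def fNL_deltaN(phi_star, chi_star, h=1e-3):
    """momentum-independent part of f_NL from the delta-N expansion,
    indices a, b running over (phi_*, chi_*)"""
    f = lambda p, c: efolds(p, c)            # from the first sketch
    # first derivatives (central differences)
    Na = np.array([
        (f(phi_star + h, chi_star) - f(phi_star - h, chi_star)) / (2 * h),
        (f(phi_star, chi_star + h) - f(phi_star, chi_star - h)) / (2 * h)])
    # second derivatives
    Npp = (f(phi_star + h, chi_star) - 2 * f(phi_star, chi_star)
           + f(phi_star - h, chi_star)) / h**2
    Ncc = (f(phi_star, chi_star + h) - 2 * f(phi_star, chi_star)
           + f(phi_star, chi_star - h)) / h**2
    Npc = (f(phi_star + h, chi_star + h) - f(phi_star + h, chi_star - h)
           - f(phi_star - h, chi_star + h) + f(phi_star - h, chi_star - h)) / (4 * h**2)
    Nab = np.array([[Npp, Npc], [Npc, Ncc]])
    return (5.0 / 6.0) * (Na @ Nab @ Na) / (Na @ Na) ** 2

print("f_NL (momentum-independent part) =", fNL_deltaN(14.0, 6.0))
```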
after making these assumptions , we can estimate the magnitude of in three regions according to the value of .firstly , in the limit , we have .so the third term in curly brackets of is of order , while the other two terms are of order .consequently , we can estimate .it seems that a small value of could give rise to a large non - linear parameter .specifically , under our assumptions above , if , then the non - linear parameter . however , this limit violates our assumptions . on the one hand , we have assumed . on the other hand ,equations tell us , which apparently violates our assumption in the limit .so we can not use the oversimplified assumptions to estimate the non - linear parameter in this limit . secondly , for , we would have . then the last term in braces of is of order .the other terms can be of order .after cancelation with the prefactor , it leads to the estimation . that is to say , in this limit ,the non - linear parameter is independent of in the leading order and suppressed by the slow - roll parameters .the third region is . in this region ,the non - linear parameter is still suppressed , .our conclusion is somewhat unexciting .this model could not generate large non - gaussianities under our simplistic assumptions .however , one should be warned that our estimation above relies on two assumptions : and .although these assumptions are reasonable , they may be avoided in very special circumstances . to further look for a large non - gaussianity with our formula, one should give up these assumptions and carefully scan the whole parameter space in a consistent way .generally that is an ambitious task if not impossible .but for a specific model of this type , we will perform such a scanning in subsection [ subsect - nspotii ] .subsequently , after obtaining the equations and }{(u_{,\varphi}^2v^2+u^{\nu+2}v_{,\chi}^2)^2},\\ % & & z_{,\chi}=\frac{q\nu u_{,\varphi}^2v^{\nu+1}v_{,\chi}}{u_{,\varphi}^2v^2+u^{\nu+2}v_{,\chi}^2}+\frac{2(p+qw^{\nu})wu_{,\varphi}^2v_{,\chi}(uv_{,\chi}^2-wv_{,\chi\chi})}{(u_{,\varphi}^2v^2+u^{\nu+2}v_{,\chi}^2)^2}.\end{aligned}\ ] ] we find by a little computation here notation is different from the one in the previous section , .\end{aligned}\ ] ] in terms of and the relation , once again straightforward calculation gives ,\\ \nonumber & & n_{,\varphi_*\chi_*}=-\frac{2(p+qw^{*\nu})^2w^*\mathcal{a}}{m_p^4u^{*\nu}u^*_{,\varphi}v^*_{,\chi}},\\ & & n_{,\chi_*\chi_*}=\frac{1}{m_p^2u^{*\nu}}\left[\left(1-\frac{\eta^*_{\chi\chi}}{2\epsilon^*_{\chi}}\right)v+q\nu w^{*\nu}u+\frac{\mathcal{a}}{\epsilon^*_{\chi}}\right].\end{aligned}\ ] ] therefore , the non - linear parameter in this model is \\ & & + \frac{v^2}{\epsilon^*_{\chi}}\left[\left(1-\frac{\eta^*_{\chi\chi}}{2\epsilon^*_{\chi}}\right)v+q\nu w^{*\nu}u\right]+\left(\frac{u}{\epsilon^*_{\varphi}}-\frac{v}{\epsilon^*_{\chi}}\right)^2\mathcal{a}\biggr\}.\end{aligned}\ ] ] similar to the previous section , we can estimate by assuming and . under these assumptions , the only possibility to generate a large non - linear parameter is in the limit .unfortunately , careful analysis ruled out this possibility . because the assumption implies , we find the non - linear parameter is not enhanced by but is suppressed by the slow - roll parameters , . 
the same suppression applies if lies in other regions .so we conclude that it is hopeless to generate large non - gaussianities in this model unless one goes beyond the assumptions we made .a careful scan of parameter space will be done in subsection [ subsect - nspotiii ] for a specific model .in sections above , we have generalized the method of and applied it to a larger class of models .these models are summarized by equations and , whose non - linear parameters are given by and generally . to check our general formulae , we will reduce and to previously known limit in subsections [ subsect - sumpot ] and [ subsect - prodpot ] .the reduced expressions are consistent with the results of . in subsections [ subsect - nspoti ] , [ subsect - nspotii ] and [ subsect - nspotiii ], we will apply our formulae to non - separable examples and scan the full parameter spaces .we should stress that all results in this paper are reliable only in the slow - roll region , that means at the least , , with .a method free of slow - roll condition for some special models has been explored in reference .this potential is obtained from by setting .the condition is necessary to guarantee . after taking ,the result in section [ sect - modeli ] matches with that in obviously . like equation , we leave as an arbitrary function of , as long as the slow - roll parameters are small .this is a special limit of section [ sect - modelii ] . using relations we get the reduced form of non - linear parameter ,\end{aligned}\ ] ] where we have made use of the fact that as well as the following notations .\end{aligned}\ ] ] one may compare this formula with .note that their definitions of , and are slightly different from ours by some factors .taking these factors into account , the result here is in accordance with .we spend an independent subsection on this model not because of its non - gaussianity , but because it has an elegant relation between the -folding number and the angle variable of fields . for this model ,the number of -foldings from time during the inflation stage to the end of inflation is note that can be regarded as sum of squares .its time derivative gives the hubble parameter .so we can follow the standard treatment to parameterize the scalars in polar coordinates rewriting the equations of motion in terms of the polar coordinates , we obtain a differential relation between and for the present model , with . it can be solved out to give at the end of inflation , if the scalars arrive at the bottom of potential , one may simply set .relation is a trivial but useful generalization of polarski and starobinsky s relation .recall that polarski and starobinsky s relation has been widely used for the inflation model with two massive scalar fields , which corresponds to exponent in the model of this subsection .the simple demonstration above generalized the relation to arbitrary . as an application, we evaluate on the initial flat hypersurface and then on the final comoving hypersurface , getting the ratio which reduces to this result can be also achieved from directly .our purpose in this and the next subsections is to examine non - gaussianities by parameter scanning .two common assumptions will be used : the -folding number is fixed to be and the inflation is supposed to conclude at the point . 
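for a sum of two monomials with a common exponent n, the slow-roll integral gives N approximately equal to (phi_*^2 + chi_*^2)/(2 n M_p^2), which reduces to polarski and starobinsky's relation for n = 2. we read the generalized relation mentioned above in this form; since the original formula is not reproduced here, this identification is an assumption. the sketch below checks it numerically and, in particular, its independence of the two coefficients in the potential.

```python
import numpy as np
from scipy.integrate import solve_ivp

M_P = 1.0

def efolds_monomial(phi0, chi0, n=4, a=1.0, b=0.5, N_max=400.0):
    """slow-roll e-folds for W = a*phi^n + b*chi^n, integrated until epsilon = 1"""
    W   = lambda p, c: a * p**n + b * c**n
    Wp  = lambda p, c: n * a * p**(n - 1)
    Wc  = lambda p, c: n * b * c**(n - 1)
    eps = lambda p, c: 0.5 * M_P**2 * (Wp(p, c)**2 + Wc(p, c)**2) / W(p, c)**2
    rhs = lambda N, y: [-M_P**2 * Wp(*y) / W(*y), -M_P**2 * Wc(*y) / W(*y)]
    end = lambda N, y: eps(*y) - 1.0
    end.terminal = True
    sol = solve_ivp(rhs, (0.0, N_max), [phi0, chi0], events=end,
                    rtol=1e-10, atol=1e-12)
    return sol.t[-1]

n, phi0, chi0 = 4, 12.0, 9.0
for a, b in [(1.0, 0.5), (3.0, 0.2)]:     # the relation should not depend on a, b
    N_num = efolds_monomial(phi0, chi0, n=n, a=a, b=b)
    N_rel = (phi0**2 + chi0**2) / (2 * n * M_P**2)   # conjectured generalized relation
    print(f"a={a}, b={b}:  N_numerical = {N_num:.3f}   (phi^2+chi^2)/(2n) = {N_rel:.3f}")
```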
using the latter assumption and the general formulae in section [ sect - modeli ] , we find all of the relevant quantities can be expressed by , and : ,\end{aligned}\ ] ] ,\end{aligned}\ ] ] ^ 2},\end{aligned}\ ] ] here we defined like the previous subsection .if , it can be proved that . without loss of generality , we will consider the parameter region .as has been mentioned , from or , one can get relation .this relation is equivalent to if , it gives and thus . in the above expressions ,there are five parameters : , , , and .the number can be reduced by the assumptions we made at the beginning of this section .firstly , and can be traded to each other with the relation . secondly , since we have assumed , equations and can be used to eliminate two degrees of freedom further .now we see only two parameters are independent , and we choose them to be and in the analysis below .the number counting in this way agrees with the fact that is a first order system under the slow - roll approximation . as a useful trick, we introduce a dimensionless notation , then equations and can be reformed as and usually the second equation has no analytical expression for the root , but one may still find the root numerically . in the region , both and increase monotonically from zero to infinity , so this equation with respect to has exactly one positive real root if the right hand side is finite . in terms of , and , this equation is of the form fixing , the recipe of our numerical simulation is as follows : 1 . given the values of and in parameter space , , numerically find the root of equation , where .2 . compute , , and according to and equations .3 . evaluate with the formula ^ 2+\frac{\epsilon^c_{\chi}}{x}[rx-(r-1)\epsilon^c_{\varphi}]^2\right\}^{-2}\\ \nonumber & & \times\biggl\{\frac{\epsilon^c_{\varphi}}{2x^r}[x^r+(r-1)\epsilon^c_{\chi}]^2\left[1-\frac{(r-1)\epsilon^c_{\chi}}{x^r}\right]\\ \nonumber & & + \frac{\epsilon^c_{\chi}}{2x}[rx-(r-1)\epsilon^c_{\varphi}]^2\left[1+\frac{(r-1)\epsilon^c_{\varphi}}{rx}\right]\\ & & -\epsilon^c_{\varphi}\epsilon^c_{\chi}\left[1-\frac{(\epsilon^c_{\varphi}+r\epsilon^c_{\chi})^2}{r}\right]\left[\frac{x^r+(r-1)\epsilon^c_{\chi}}{x^r}+\frac{rx-(r-1)\epsilon^c_{\varphi}}{x}\right]^2\biggr\}.\end{aligned}\ ] ] 4 .repeat the above steps to scan the entire parameter space of and . due to the violation of slow - roll condition, the vicinity of should be skipped to avoid numerical singularities ( see spikes in figure [ fig - nspotii ] ) . as functions of and , under the assumptions and . is defined as , the ratio of two parameters in the potential of this model.**,title="fig:",scaledwidth=45.0% ] + as functions of and , under the assumptions and . is defined as , the ratio of two parameters in the potential of this model.**,title="fig:",scaledwidth=45.0% ] + in a practical simulation , we scan the region , on a uniform grid with points .some simulation results are illustrated in figure [ fig - nspotii ] .when drawing the figure , we have imposed the slow - roll condition , , , . in the limit , they are in agreement with the analytical results , .one may also check the results in other limits analytically , such as or .theoretically , should correspond to an inflation model driven by one field . but our method does not apply to that limit , because it would violate the slow - roll condition for . 
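the four-step recipe above can be organized as in the following scaffold. since the model-specific relations of this subsection are not reproduced here, root_equation, slow_roll_at_star and fnl_model are hypothetical placeholders that would have to be replaced by the corresponding expressions; what the sketch illustrates is only the structure of the scan: a bracketed search for the unique positive root, the slow-roll mask, and the grid loop.

```python
import numpy as np
from scipy.optimize import brentq

def root_equation(x, eps_phi_c, eps_chi_c, r):
    """PLACEHOLDER for step 1: the model-specific equation whose unique
    positive root x is sought; replace with the expression of the text."""
    return x + r * x**r - (eps_phi_c + eps_chi_c)       # hypothetical stand-in

def slow_roll_at_star(x, eps_phi_c, eps_chi_c, r):
    """PLACEHOLDER for step 2: map (x, eps^c) to the starred slow-roll parameters."""
    return {"eps_phi": eps_phi_c * x, "eps_chi": eps_chi_c * x}   # hypothetical

def fnl_model(sr, r):
    """PLACEHOLDER for step 3: the closed-form non-linear parameter of this model."""
    return sr["eps_phi"] + sr["eps_chi"]                 # hypothetical stand-in

def scan(r, eps_grid, slow_roll_bound=0.05):
    fnl = np.full((eps_grid.size, eps_grid.size), np.nan)
    for i, ep in enumerate(eps_grid):
        for j, ec in enumerate(eps_grid):
            if max(ep, ec) > slow_roll_bound:            # enforce the slow-roll window
                continue
            try:
                # the equation has exactly one positive root; bracket it widely
                x = brentq(root_equation, 1e-12, 1e6, args=(ep, ec, r))
            except ValueError:                           # near-singular strip: skip
                continue
            fnl[i, j] = fnl_model(slow_roll_at_star(x, ep, ec, r), r)
    return fnl

eps_grid = np.linspace(1e-4, 0.05, 120)   # illustrative grid; the paper's ranges are not reproduced
table = scan(r=2.0, eps_grid=eps_grid)
print("max |f_NL| over the scanned window:", np.nanmax(np.abs(table)))
```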
from figure [ fig - nspotii],we can see the non - linear parameter is suppressed by slow - roll parameters .especially , in the neighborhood of , the spikes of are located at the same positions as the spikes of .such a coincidence continues to exist even if one relaxes the slow - roll condition .but there is no spike in similar graphs for , and .actually , these spikes are mainly attributed to the enhancement of and by in the small limit .after the parameter scanning and the numerical simulation , our lesson is that this model can not generate a large non - gaussianity unless the slow - roll condition breaks down .this is a special model of with , , . as in the previous subsection , we assume and . then from section [ sect - modelii ] we get the relations .\ ] ] for the present model , equation gives that is if we introduce the notations , then combining it with equation and the condition , we can express , and in terms of , and , ,\\ \epsilon^*_{\chi}&=&r\epsilon^c_{\chi}\exp\left[\frac{(r-1)\epsilon^c_{\varphi}}{\epsilon^c_{\chi}}\right],\end{aligned}\ ] ] ^ 2}.\ ] ] on the basis of equation , we deduce that should be positive and suppressed by slow - roll parameters .in particular , thus we focus on the region .as indicated by the above analysis , if we are interested only in the non - linear parameter and slow - roll parameters , this model has two free parameters after using our assumptions and equations of motion .they will be chosen as and in our simulation , just like in the previous subsection .but we should warn that , compared with the previous subsection , the notation has a distinct meaning in the current subsection .as functions of and , under the assumptions and . is defined as , and it is plotted in logarithmic scale.**,title="fig:",scaledwidth=45.0% ] + as functions of and , under the assumptions and . is defined as , and it is plotted in logarithmic scale.**,title="fig:",scaledwidth=45.0% ] + as functions of and near the corner , , under the assumptions and . is defined as , and it is plotted in linear scale . in the middle and the lower graphs ,the regions with and respectively are cut off.**,title="fig:",scaledwidth=45.0% ] + as functions of and near the corner , , under the assumptions and . is defined as , and it is plotted in linear scale . in the middle and the lower graphs , the regions with and are cut off.**,title="fig:",scaledwidth=45.0% ] + as functions of and near the corner , , under the assumptions and . is defined as , and it is plotted in linear scale . in the middle and the lower graphs ,the regions with and respectively are cut off.**,title="fig:",scaledwidth=45.0% ] + the parameter scanning is illustrated by figures [ fig - nspotiilog ] and [ fig - nspotiilin ] . in figure[ fig - nspotiilog ] , parameter decreases exponentially from 1 to . in this process ,the non - linear parameter grows roughly proportional to while the slow - roll condition is violated gradually .this phenomenon agrees with equations and , both of whose amplitude are enhanced by the factor when is small . 
in figure[ fig - nspotiilog],we find a sharp spike for the non - linear parameter in the corner , .figure [ fig - nspotiilin ] is drawn to zoom in this corner , with scaled linearly .as shown by this figure , the spike dwells in a position violating the slow - roll condition .therefore , the non - linear parameter in this model must be small once the slow - roll condition , , ( ) is imposed .in this paper , we investigated a class of two - field slow - roll inflation models whose non - linear parameter is analytically calculable . in our convention of notations , we collected some well - known but necessary knowledge in section [ sect - preparation ] . slightly generalizing the method of , we showed in section [ sect - hunt ] how their method could be utilized in a larger class of models satisfying two ansatzes , namely and . in subsections[ subsect - casei ] and [ subsect - caseii ] we proposed models meeting these ansatzes .we put our models in the form of with in subsection [ subsect - casei ] and with in subsection [ subsect - caseii ] . at first glance , these are two different classes of models .but in fact they are two dual forms of the same class of models , just as proved in subsection [ subsect - ieqii ] . in a succinct form , our models can be summarized by equations and , whose non - linear parameters were worked out in sections [ sect - modeli ] and [ sect - modelii ] respectively , see equations and . under simplistic assumptions , we found no large non - gaussianity in these models . as a double check, we reduced the expression for non - linear parameter to the additive potential in subsection [ subsect - sumpot ] , and to multiplicative potential in subsection [ subsect - prodpot ] .the resulting non - linear parameters match with , confirming our calculations . in subsection[ subsect - nspoti ] , for a special class of models , we generalized polarski and starobinsky s relation .for more specific models , we scanned the parameter space to evaluate the non - linear parameter , as shown by figures in subsections [ subsect - nspotii ] and [ subsect - nspotiii ] . in the scanning, we assumed the -folding number and the inflation terminates at .for the models we studied in subsections [ subsect - nspotii ] and [ subsect - nspotiii ] , the non - linear parameter always takes a small positive value under the slow - roll approximation .99 a. h. guth , phys .d * 23 * , 347 ( 1981 ) .a. d. linde , phys .b * 108 * , 389 ( 1982 ) .a. albrecht and p. j. steinhardt , phys .lett . * 48 * , 1220 ( 1982 ) .e. komatsu _ et al ._ , arxiv:1001.4538 [ astro-ph.co ] . f. l. bezrukov and m. shaposhnikov , phys . lett .b * 659 * , 703 ( 2008 ) [ arxiv:0710.3755 [ hep - th ] ] .a. de simone , m. p. hertzberg and f. wilczek , phys . lett .b * 678 * , 1 ( 2009 ) [ arxiv:0812.4946 [ hep - ph ] ] .s. kachru , r. kallosh , a. d. linde , j. m. maldacena , l. p. mcallister and s. p. trivedi , jcap * 0310 * , 013 ( 2003 ) [ arxiv : hep - th/0308055 ] .p. chingangbam and q. g. huang , jcap * 0904 * , 031 ( 2009 ) [ arxiv:0902.2619 [ astro-ph.co ] ] .q. g. huang , jcap * 0905 * , 005 ( 2009 ) [ arxiv:0903.1542 [ hep - th ] ] .x. gao and b. hu , jcap * 0908 * , 012 ( 2009 ) [ arxiv:0903.1920 [ astro-ph.co ] ] .y. f. cai and h. y. xia , phys .b * 677 * , 226 ( 2009 ) [ arxiv:0904.0062 [ hep - th ] ] . q. g. huang , jcap * 0906 * , 035 ( 2009 ) [ arxiv:0904.2649 [ hep - th ] ] .x. gao and f. xu , jcap * 0907 * , 042 ( 2009 ) [ arxiv:0905.0405 [ hep - th ] ] .x. chen , b. hu , m. x. huang , g. shiu and y. 
wang , jcap * 0908 * , 008 ( 2009 ) [ arxiv:0905.3494 [ astro-ph.co ] ] .t. matsuda , class .* 26 * , 145016 ( 2009 ) [ arxiv:0906.0643 [ hep - th ] ] .x. gao , m. li and c. lin , jcap * 0911 * , 007 ( 2009 ) [ arxiv:0906.1345 [ astro-ph.co ] ] .x. gao , jcap * 1002 * , 019 ( 2010 ) [ arxiv:0908.4035 [ hep - th ] ] .k. enqvist and t. takahashi , jcap * 0912 * , 001 ( 2009 ) [ arxiv:0909.5362 [ astro-ph.co ] ] .x. chen and y. wang , jcap * 1004 * , 027 ( 2010 ) [ arxiv:0911.3380 [ hep - th ] ] .j. o. gong , c. lin and y. wang , jcap * 1003 * , 004 ( 2010 ) [ arxiv:0912.2796 [ astro-ph.co ] ] .j. garcia - bellido and d. wands , phys .d * 53 * , 5437 ( 1996 ) [ arxiv : astro - ph/9511029 ] .c. t. byrnes and d. wands , phys .d * 74 * , 043529 ( 2006 ) [ arxiv : astro - ph/0605679 ] .g. i. rigopoulos , e. p. s. shellard and b. j. w. van tent , phys .d * 76 * , 083512 ( 2007 ) [ arxiv : astro - ph/0511041 ] .f. vernizzi and d. wands , jcap * 0605 * , 019 ( 2006 ) [ arxiv : astro - ph/0603799 ] .c. t. byrnes , k. y. choi and l. m. h. hall , jcap * 0810 * , 008 ( 2008 ) [ arxiv:0807.1101 [ astro - ph ] ] .c. t. byrnes , k. y. choi and l. m. h. hall , jcap * 0902 * , 017 ( 2009 ) [ arxiv:0812.0807 [ astro - ph ] ] .c. t. byrnes and g. tasinato , jcap * 0908 * , 016 ( 2009 ) [ arxiv:0906.0767 [ astro-ph.co ] ] .c. t. byrnes and k. y. choi , adv .astron .* 2010 * , 724525 ( 2010 ) [ arxiv:1002.3110 [ astro-ph.co ] ] .f. bernardeau and j. p. uzan , phys .d * 66 * , 103506 ( 2002 ) [ arxiv : hep - ph/0207295 ] . f. bernardeau and j. p. uzan , phys .d * 67 * , 121301 ( 2003 ) [ arxiv : astro - ph/0209330 ] .h. r. s. cogollo , y. rodriguez and c. a. valenzuela - toledo , jcap * 0808 * , 029 ( 2008 ) [ arxiv:0806.1546 [ astro - ph ] ] . y. rodriguez and c. a. valenzuela - toledo , phys .d * 81 * , 023531 ( 2010 ) [ arxiv:0811.4092 [ astro - ph ] ] .d. h. lyth and y. rodriguez , phys .lett . * 95 * , 121302 ( 2005 ) [ arxiv : astro - ph/0504045 ] .k. y. choi , l. m. h. hall and c. van de bruck , jcap * 0702 * , 029 ( 2007 ) [ arxiv : astro - ph/0701247 ] .f. di marco and f. finelli , phys .d * 71 * , 123502 ( 2005 ) [ arxiv : astro - ph/0505198 ] .j. c. hwang and h. noh , phys .d * 71 * , 063536 ( 2005 ) [ arxiv : gr - qc/0412126 ] .x. ji and t. wang , phys .d * 79 * , 103525 ( 2009 ) [ arxiv:0903.0379 [ hep - th ] ] .d. seery and j. e. lidsey , jcap * 0509 * , 011 ( 2005 ) [ arxiv : astro - ph/0506056 ] .d. polarski and a. a. starobinsky , nucl .b * 385 * , 623 ( 1992 ) .d. langlois , phys .d * 59 * , 123512 ( 1999 ) [ arxiv : astro - ph/9906080 ] .
two-field slow-roll inflation is the most conservative modification of a single-field model. the main motivations to study it are its entropic mode and its non-gaussianity. several years ago, for a two-field model with additive separable potentials, vernizzi and wands invented an analytic method to estimate its non-gaussianities. later on, choi _et al_. applied this method to a model with multiplicative separable potentials. in this note, we design a larger class of models whose non-gaussianity can be estimated by the same method. under some simplistic assumptions, these models are unlikely to generate a large non-gaussianity. we examine some specific models of this class by scanning the full parameter space, but no large non-gaussianity appears in the slow-roll region. these models and scanning techniques could be useful for future model building if observational evidence for two-field inflation appears.
the theory of cps is a framework for design and analysis of distributed algorithms for coordination of the groups of dynamic agents . in many control problems ,agents in the group need to agree upon certain quantity , whose interpretation depends on the problem at hand . the theory of cps studies the convergence to a common value ( consensus ) in its general and , therefore , abstract form .it has been a subject of intense research due to diverse applications in applied science and engineering .the latter include coordination of groups of unmanned vehicles ; synchronization of power , sensor and communication networks ; and principles underlying collective behavior in social networks and biological systems , to name a few . from the mathematical point of view , analysis of continuous time cps is a stability problem for systems of linear differential equations possibly with additional features such as stochastic perturbations or time delays .there are many effective techniques for studying stability of linear systems .the challenge of applying these methods to the analysis of cps is twofold .first , one is interested in characterizing stability under a minimal number of practically relevant assumptions on the structure of the matrix of coefficients , which may depend on time .second , it is important to identify the relation between the structure of the graph of interactions in the network to the dynamical performance of cps .a successful solution of the second problem requires a compilation of dynamical systems and graph theoretic techniques .this naturally leads to spectral methods , which play important roles in both mathematical disciplines , and are especially useful for problems on the interface between dynamics and the graph theory . a general idea for using spectral methods for analyzing cpsis that , on the one hand , stability of the continuous time cp is encoded in the eigenvalues ( evs ) of the matrix of coefficients ; on the other hand , evs of the same matrix capture structural properties of the graph of the cp .the spectral graph theory offers many fine results relating the structural properties of graphs to the evs of the adjacency matrix and the graph laplacian .this provides a link between the network topology and the dynamical properties of cps . in this paper , under fairly general assumptions on cps , we study two problems : convergence of cps and their stability in the presence of stochastic perturbations .the former is the problem of asymptotic stability of the consensus subspace , a one - dimensional invariant ( center ) subspace .the latter is a special albeit representative form of stability of the consensus subspace with respect to constantly acting perturbations .the rate of convergence to the consensus subspace sets the timescale of the consensus formation ( or synchronization ) from arbitrary initial conditions or upon instantaneous perturbation .therefore , the convergence rate is important in applications where the timing of the system s responses matters ( e.g. , in decision making algorithms , neuronal networks , etc ) .stochastic stability , on the other hand , characterizes robustness of the consensus to noise .this form of stability is important when the consensus needs to be maintained in noisy environment over large periods of time ( e.g. 
, communication networks , control of unmanned vehicles , etc ) .we believe that our quantitative description of these two forms of stability elucidates two important aspects of the performance of cps .the questions investigated in this paper have been studied before under various hypotheses on cps : constant weights , time - dependent interactions , and cps with time - delays .optimization problems arising in the context of cp design were studied in .there is a body of related work on discrete time cps .robustness of cps to noise was studied in . in this paper , we offer a unified approach to studying convergence and stochastic stability of cps .our method applies to networks with directed information flow ; both cooperative and noncooperative interactions ; networks under weak stochastic forcing ; and those whose topology and strength of connections may vary in time .we derive sufficient conditions guaranteeing convergence of time - dependent cps and present estimates characterizing their stochastic stability . for cps on undirected graphs , we show that the rate of convergence and stability to random perturbations are captured by the generalized algebraic connectivity and the total effective resistance of the underlying graphs .previously , these results were available only for cps on graphs with positive weights . to further elucidate the role that network topology plays in shaping the dynamical properties of cps , we further develop our results for cps on simple networks ( see text for the definition of a simple network ) .our analysis of simple networks reveals the role of the geometric properties of the cycle subspace associated with the graph of the network ( such as the first betti number of the graph ; the length and the mutual position of the independent cycles ) to the stability of cps to random perturbations . in addition, we explore several implications of the results of the spectral graph theory to cp design .first , we show that expanders , sparse highly connected graphs , generate cps with the rate of convergence bounded from zero uniformly when the size of the network tends to infinity .in particular , cps based on expanders are effective for coordinating large networks .second , we point out that cps with random connections have nearly optimal convergence rate . in contrast, the convergence of cps on regular lattice - like graphs slows down rapidly as the size of the network grows .we illustrate these observations with numerical examples and refer to the relevant graph - theoretic results .the mathematical analysis of cps in this paper uses the method , which we recently developed for studying synchronization in systems of coupled nonlinear oscillators and reliability of neuronal networks .we further develop this method in several ways .first , we relate the key properties of the algebraic transformation of the coupling operator used in for studying synchronization to general properties of a certain class of pseudo - similarity transformations .second , we strengthen the graph theoretic interpretation of the stability analysis . we believe that our method will be useful for design and analysis of cps and for studying synchronization in a large class of models .the outline of the paper is as follows . in section [ algebra ] , we study the properties of a pseudo - similarity transformation , which is used in the analysis of cps in the remainder of the paper .section [ convergence ] is devoted to the convergence analysis of cps . 
after formulating the problem and introducing necessary terminology in [ formulation ] , we study convergence of cps with constant and time - dependent coefficients in [ stationary ] and [ nonstationary ] respectively .section [ robustness ] presents estimates characterizing stochastic stability of stationary and time - dependent cps .these results are applied to study cps protocols on undirected weighted graph in section [ undirected ] . in section [ connectivity ] , we discuss the relation between the connectivity of the graph and dynamical performance of cps .the results of this paper are summarized in section [ conclude ] .the analysis of cps in the sections that follow relies on certain properties of a pseudo - similarity transformation , which we study first .matrix is pseudo - similar to via if sd = ds. equation ( [ com ] ) is equivalent to the following property sq(d)=q(d)s , for any polynomial . to study the existence and the properties of pseudo - similar matrices , we recall the definition of the moore - penrose pseudo - inverse of a rectangular matrix ( cf . ) . is called a pseudo - inverse of if throughout this section , we use the following assumption .let and such that condition ( [ ranks ] ) implies that s^+=s^(ss^)^-1 , and , therefore , s^+s = p_r(s^)=p_(s)^ , ss^+=i_n - p . here , and denote the projection matrix onto the column space of and the identity matrix .the combination of ( [ ranks ] ) and ( [ kers ] ) guarantees the existence and uniqueness of the pseudo - similar matrix for via .let and satisfy assumption [ defines ] .then d = sds^+ is a unique pseudo - similar matrix to via . by the first identity in ( [ two - prop ] ) , therefore , equation ( [ com ] ) is solvable with respect to . by multiplying both sides of ( [ com ] ) by from the right and using the second property in ( [ two - prop ] ) ,we obtain ( [ pseudo ] ) .+ \{td}= s\{td}s^+,t. equation ( [ exp ] ) follows from the second identity in ( [ two - prop ] ) and the series representation of .+ the next lemma relates the spectral properties of and .suppose and satisfy assumption [ defines ] and is the pseudo - similar matrix to via . a : : if is a nonzero ev of then is an ev of of the same algebraic and geometric multiplicity. moreover , carries out a bijection from the generalized of onto that of preserving the jordan block structure .b : : is an ev of if and only if the algebraic multiplicity of as an ev of exceeds . in this case , the algebraic multiplicity of as as an ev of is diminished by . maps the generalized of onto that of .c : : maps a jordan basis of onto that of .a : : restricted to the direct sum of generalized eigenspaces of corresponding to nonzero eigenvalues is injective .+ let be a nonzero ev of .since for any ( cf .( [ poly ] ) ) , is a generalized of of index if and only if is a generalized of of index . therefore , bijectively maps the generalized of onto that of .the associated jordan block structures are the same .b : : if the generalized of is larger than then is nontrivial .choose a jordan basis for restricted to its generalized the image of this basis under consists of the vectors forming a jordan basis of restricted to its generalized and zero vectors . 
under the action of , each cyclic subspace of looses a unit in dimension if and only if .c : : the statement in * c * follows by applying the argument in * b * to a jordan basis of restricted to the generalized eigenspace corresponding to a nonzero eigenvalue .next , we apply lemmas [ main - property ] and [ spectra ] to the situation , which will be used in the analysis of cps below .denote and .let and be such that d=\{m^nn : me=0 } s=. by lemmas [ main - property ] and [ spectra ] , we have 1 . !d^(n-1)(n-1 ) : sd = ds 2 .d = sds^+ .3 . denote the evs of counting multiplicity by _ 1=0 , _ 2 , _ 3 , , _n , such that is an eigenvector corresponding to . then _ 2 , _ 3 , , _ nare the evs of . for maps bijectively the generalized of to those of . is an ev of if and only if the algebraic multiplicity of as an ev of is greater than .the following matrix satisfies ( [ d - and - s ] ) and can be used as an intertwining matrix in ( [ exists ] ) s= ( cccccc -1 & 1 & 0 & & 0 & 0 + 0 & -1 & 1 & & 0 & 0 + & & & & & + 0 & 0 & 0 & & -1 & 1 ) ^(n-1)n .in [ formulation ] , we introduce a continuous time cp , a differential equation model that will be studied in the remainder of this paper . convergence of cps with constant and time - dependent coefficients is analyzed in [ stationary ] and [ nonstationary ] , respectively . by a continuous time cp with constant coefficientswe call the following system of ordinary differential equations ( odes ) : x^(i)=_j=1^n a_ij ( x^(j)-x^(i ) ) , i:=\{1,2, ,n}. unknown functions ] using the terminology from the electrical networks theory , we call a conductance matrix. next , we associate with ( [ cp ] ) a directed graph , where the vertex set ] belongs to if by the network we call where function assigns conductance to each edge . if conductance matrix is symmetric , can be viewed as an undirected graph . if , in addition , , is called simplethe convergence analysis of cps with constant and time - dependent coefficients relies on standard results of the theory of differential equations ( see , e.g. , ) .it is included for completeness and to introduce the method that will be used later for studying stochastic stability of cps .we rewrite ( [ cp ] ) in matrix form x = dx . the matrix of coefficientsd = a-(|a_1,|a_2, ,|a_n),|a_i=_j=1^n a_ij is called a coupling matrix .let be a matrix with one dimensional null space , ( see example [ main - example ] for a possible choice of ) .the analysis in this section does not depend on the choice of .suppose has been fixed and define s=(ss^)^-12s .note that has orthogonal rows ss^=i_n-1s^s= s^+ s = p_^ , where stands for the orthogonal projection onto . by definition , and satisfy conditions of corollary [ 1dker ] . therefore , there exists a unique matrix d = sds^+=(ss^)^-12sds^(ss^)^-12=sds^ , whose properties are listed in corollary [ 1dker ] . in addition , using normalized matrix ( cf .( [ rescales ] ) ) affords the following property .let and be as defined in ( [ rescales ] ) .then , the pseudo - similar matrix to via , is normal ( symmetric ) if is normal ( symmetric ) . if is symmetric , then so is by ( [ newdhat ] ) .suppose is normal. then there exists an orthogonal matrix and diagonal matrix such that by lemma [ spectra ] , with and denote the columns of by , /\{1\} ] are measurable locally bounded real - valued functions . under these assumptions ,we formulate two general sufficient conditions for convergence of time - dependent cps . 
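before turning to the time-dependent case, the constructions above can be illustrated numerically: the sketch below builds the coupling matrix of a constant-coefficient cp from a (possibly signed) symmetric weight matrix, forms the row-orthonormalized intertwining matrix and the pseudo-similar matrix, and checks that the spectrum of the latter coincides with that of the coupling matrix once one zero eigenvalue is removed. the random weights are an arbitrary choice made only for illustration.

```python
import numpy as np
from scipy.linalg import eigh, eigvals

rng = np.random.default_rng(0)
n = 6

# a (possibly signed) symmetric weight matrix with zero diagonal
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T); np.fill_diagonal(A, 0.0)

# coupling matrix of the consensus protocol: D = A - diag(row sums of A)
D = A - np.diag(A.sum(axis=1))
assert np.allclose(D @ np.ones(n), 0.0)      # the consensus subspace span{1} is invariant

# intertwining matrix S with kernel span{1}: the difference matrix of the example above
S = np.zeros((n - 1, n))
S[np.arange(n - 1), np.arange(n - 1)] = -1.0
S[np.arange(n - 1), np.arange(1, n)] = 1.0

# row-orthonormalized version (S S^T)^{-1/2} S
w, U = eigh(S @ S.T)
S_tilde = U @ np.diag(w ** -0.5) @ U.T @ S
assert np.allclose(S_tilde @ S_tilde.T, np.eye(n - 1))

# pseudo-similar matrix D_hat = S_tilde D S_tilde^T
D_hat = S_tilde @ D @ S_tilde.T
assert np.allclose(S_tilde @ D, D_hat @ S_tilde)   # the intertwining relation

# the spectrum of D_hat equals that of D with one (simple) zero eigenvalue removed
print("evs of D    :", np.round(np.sort_complex(eigvals(D)), 4))
print("evs of D_hat:", np.round(np.sort_complex(eigvals(D_hat)), 4))
```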
by a time - dependent cp, we call the following ode x = d(t)x , d(t)=(d_ij(t))^nn , where the coupling matrix is defined as before d(t)=a(t)-(|a_1(t),|a_2(t ) , , |a_n(t ) ) , |a_i(t)=_j=1^n a_ij(t ) . under our assumptions on , the solutions of ( [ dynamic ] )( interpreted as solutions of the corresponding integral equation ) are well - defined ( cf . ) . by construction , coupling matrix satisfies the following condition ( see ( [ coupling ] ) ) for convenience , we reserve a special notation for the class of admissible matrices of coefficients : d(t)_1:=\ { m(t)^nn : m_ij(t)l_loc^(^+ ) & ( m(t)t0 ) } .note that for any there exists a unique pseudo - similar matrix to via , , provided satisfies assumption [ defines ] .below , we present two classes of convergent time - dependent cps . the first class is motivated by the convergence analysis of cps with constant coefficients . matrix valued function is called uniformly dissipative with parameter if there exists such that y^d(t ) y -y^y y^n-1 & t0 , where is the pseudo - similar matrix to via . the class of uniformly dissipative matrices is denoted by . the convergence of uniformly dissipative cps is given in the following theorem .let .cp ( [ dynamic ] ) is convergent with the rate of convergence at least .it is sufficient to show that is an asymptotically stable solution of the reduced system for y = d(t ) . for solution of the reduced system , we have thus , in conclusion , we prove convergence for a more general class of cps . the coupling matrix is called asymptotically dissipative if d=\ { m(t)_1 : _ t t^-1_0^t _|y|=1 y^m(u ) y du < 0 } .if then cp ( [ dynamic ] ) is convergent .let be a solution of the reduced system ( [ reduced - again ] ). then by gronwall s inequality , since , thus , this section , we study stochastic stability of cps . specifically , we consider x = d(t)x+u(t)w , x(t)^n , where is a white noise process in , ( cf . ( [ admissible ] ) ) , and , the consensus subspace , , forms an invariant center subspace of the deterministic system obtained from ( [ perturbed ] ) by setting . since the transverse stability of the consensus subspace is equivalent to the stability of the equilibrium of the corresponding reduced equation , along with ( [ perturbed ] ), we consider the corresponding equation for : y = d(t)y+su(t)w , where is defined in ( [ rescales ] ) .the solution of ( [ red ] ) with deterministic initial condition is a gaussian random process .the mean vector and the covariance matrix functions of stochastic process m(t):= y(t ) v(t):= , satisfy linear equations ( cf . ) m= d m v= d v + vd^+^2 su(t)u(t)^s^. the trivial solution of the reduced equation ( [ reduced ] ) is not a solution of the perturbed equation ( [ red ] ) . nonetheless , if the origin is an asymptotically stable equilibrium of the deterministic reduced equation obtained from ( [ red ] ) by setting , the trajectories of ( [ perturbed ] ) exhibit stable behavior .in particular , if ( [ dynamic ] ) is a convergent cp , for small , the trajectories of the perturbed system ( [ perturbed ] ) remain in vicinity of the consensus subspace on finite time intervals of time with high probability .we use the following form of stability to describe this situation formally .cp ( [ dynamic ] ) is stable to random perturbations ( cf .( [ perturbed ] ) ) if for any initial condition and _t p_^x(t)=0 |p_^x(t)|^2=o(^2 ) , t. let be a uniformly dissipative matrix with parameter ( cf . 
definition [ dissipative ] ) .then cp ( [ dynamic ] ) is stable to random perturbations . in particular , the solution of the initial value problem for ( [ perturbed ] ) with deterministic initial condition satisfies the following estimate |p_^x(t)|^2 _uu(u)u^(u ) , t>0 , where stands for the operator matrix norm induced by the euclidean norm .suppose the strength of interactions between the agents in the network can be controlled by an additional parameter x= g d(t)x + u(t)w . here ,the larger values of correspond to stronger coupling , i.e. , to faster information exchange in the network . by applying estimate ( [ main ] ) to ( [ strength ] ), we have |p_^x(t)|^2 _ uu(u)u^(u ) , t>0 .note that the variance of can be effectively controlled by .in particular , the accuracy of the consensus can be enhanced to any desired degree by increasing the rate of information exchange between the agents in the network . for the applications of this observations to neuronal networks , we refer the reader to . since and , it sufficient to prove ( [ main ] ) with replaced by the solution of the reduced problem ( [ red ] ) , .let denote the principal matrix solution of the homogeneous equation ( [ red ] ) with .the solution of the initial value problem for ( [ red ] ) is a gaussian random process whose expected value and covariance matrix are given by since , we have y^d(t)y - y^y y^n-1 , t0 .this has the following implication . for all , is positive definite and , therefore , is well defined . here and throughout this paper , stands for the symmetric part of square matrix . using this observation , we rewrite the integrand in ( [ covy ] ) as follows where by taking into account ( [ integrand ] ) , from ( [ covy ] ) we have y_t _ u\{f(u ) } \ { i-(t)^(t ) } _ uf(u ) .further , since ( cf .( [ orthogonality ] ) ) . estimate ( [ main ] ) follows from ( [ ey ] ) , ( [ almost ] ) , and ( [ there ] ) .+ theorem [ robust ] describes a class of stochastically stable time - dependent cps . because much of the previous work focused on cps with constant coefficients ,we study them separately . to this end , we consider x = dx + w , and the corresponding reduced system for y = dy + sw . in ( [ const - perturbed ] ) , we set to simplify notation .suppose that and has the same meaning as in ( [ alpha ] ) .then for any , there exists a positive constant such that _ t p_^x(t)=0|p_^x(t)|^2 .since , for any , there exists a change of coordinates in , , such that ( cf . ) . by theorem[ robust ] , solutions of satisfy ( [ main ] ) .thus , ( [ first - estimate ] ) holds with for some possibly depending on .+ the estimate of in ( [ first - estimate ] ) characterizes the dispersion of the trajectories of the stochastically forced cp ( [ const - perturbed ] ) around the consensus subspace . can be viewed as a measure of stochastic stability of consensus subspace . in ( [ first - estimate ] ) , the upper bound for is given in terms of the leading nonzero eigenvalue of . if is normal then precise asymptotic values for and are available .the stability analysis of ( [ const - perturbed ] ) with normal is important for understanding the properties of cps on undirected graphs , which we study in the next section .suppose is normal .denote the evs of by , where is a simple ev .let be a pseudo - similar matrix to via ( cf .( [ rescales ] ) ) . 
then for any deterministic initial condition , the trajectory of ( [ const - perturbed ] ) is a gaussian random process with the following asymptotic properties by the observation in remark [ note ] , it sufficient to prove the relations ( [ asympt - mean ] ) and ( [ asympt - norm ] ) with replaced by .the solution of the reduced equation ( [ const - red ] ) with a deterministic initial condition is a gaussian process ( cf . ) y(t)=e^tdy_0+_0^t e^(t - u)dsdw(u ) . from ( [ linear ] ) specialized to solutions of ( [ const - red ] ) , we have since and is a stable normal matrix , from ( [ cov ] ) we have v(t)=_0^t e^2ud^s du ( d^s)^-1 , t , where stands for the symmetric part of . by taking into account ( [ mean - y ] ), we have estimate ( [ asympt - norm ] ) was derived in for cps with positive weights .a similar estimate was obtained in in the context of analysis of a neuronal network .in this section , we apply the results of the previous section to cps on undirected graphs .the analysis reveals the contribution of the network topology to stability of cps .in particular , we show that the dimension and the structure of the cycle subspace associated with the graph of the network are important for stability .the former quantity , the first betti number of the graph , is a topological invariant of the graph of the network .we start by reviewing certain basic algebraic constructions used in the analysis of graphs ( cf . ) . throughout this subsection , we assume that is a connected graph with vertices and edges : the vertex space , , and the edge space , , are the finite - dimensional vector spaces of real - valued functions on and , respectively .we fix an orientation on by assigning positive and negative ends for each edge in .the matrix of the coboundary operator with respect to the standard bases in and is defined by h=(h_ij)^mn , h_ij=\ { cl 1 , & v_j e_i , + -1 , & v_j e_i , + 0 , & . .the laplacian of is expressed in terms of the coboundary matrix l = h^h . by denote a spanning tree of , a connected subgraph of such that contains no cycles . without loss of generality, we assume that e(g)=\{e_1 , e_2 , , e_n-1}. a cycle of length is a cyclic subgraph of : for some distinct integers ^k ] not belonging to the spanning tree , there corresponds a unique cycle such that we orient cycles , ] as the fundamental cycles of .define matrix by construction , has the following block structure , z=(q i_c ) , where is the identity matrix .matrix contains the coefficients of expansions of ] _n-1+k=-_l=1^n-1 q_kl , q_kl\{0 , 1}. here , ] , then ; and , otherwise .thus , the spectra of and coincide , because is an orthogonal matrix .b : : ( l_c(g))_ij= _ i(l_c(g)),_j(l_c(g))= \ { + cc |o_i|-1 , & i = j , + |o_io_j| , & ij . + .c : : if the cycles are disjoint , then assuming that that , ] are assigned to all edges of .if the network is simple , .let be a spanning tree of .we continue to assume that the edges of are ordered so that ( [ span - tree ] ) holds .let , , and , , be the coboundary , cycle incidence matrix and conductance matrices corresponding to the spanning tree , respectively . using ( [ part ] ) , we recast the coupling matrix as d =- h^(c_1+q^c_2q)h .to form the reduced equation , we let then y = d y+ sw , where d=-(h h^)^12 ( c_1+q^c_2q)(h h^)^12 .both and are symmetric matrices . 
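the asymptotic dispersion around the consensus subspace predicted above can be observed directly in simulation. for a simple undirected graph the reduced dynamics is an ornstein-uhlenbeck process, and the stationary mean squared distance to the consensus subspace equals one half of the noise variance times the sum of the reciprocals of the nonzero laplacian eigenvalues; the sketch below integrates the noisy cp with the euler-maruyama scheme and compares the time average with this prediction (agreement is up to sampling error). the particular graph and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# a small simple graph: a 6-cycle with one chord
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0; L[i, j] -= 1.0; L[j, i] -= 1.0

sigma, dt, T = 0.2, 1e-3, 300.0
steps = int(T / dt)
burn = steps // 5
P = np.eye(n) - np.ones((n, n)) / n        # projection off the consensus subspace

# Euler-Maruyama scheme for dx = -L x dt + sigma dW
x, acc = np.zeros(n), 0.0
for k in range(steps):
    x = x - (L @ x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    if k >= burn:
        acc += np.sum((P @ x) ** 2)
empirical = acc / (steps - burn)

lam = np.sort(np.linalg.eigvalsh(L))[1:]   # nonzero Laplacian eigenvalues
predicted = 0.5 * sigma**2 * np.sum(1.0 / lam)
print("time average of squared distance to consensus:", round(empirical, 4))
print("sigma^2/2 * sum of reciprocal eigenvalues     :", round(predicted, 4))
```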
by lemma [ spectra ] ,the eigenspaces of and are related via .the evs of are the same as those of except for a simple zero ev corresponding to the constant eigenvector .a cp with positive weights s is always convergent , as can be easily seen from ( [ rewrite ] ) . for a general case, we have the following necessary and sufficient condition for convergence of the cp on an undirected graph .the cp ( [ constant ] ) with matrix ( [ weight - lap ] ) is convergent if and only if matrix is positive definite for some spanning tree . by ( [ rewrite ] ), is a stable matrix if and only if is positive definite . + if stochastic stability of the cp ( [ const - perturbed] and ( [ weight - lap ] ) is guaranteed by theorem [ variability ] .in particular , ( [ asympt - norm ] ) characterizes the dispersion of trajectories around the consensus subspace .theorems [ exhaustive ] and [ variability ] provide explicit formulas for the rate of convergence and the degree of stability of convergent cps on undirected graphs .specifically , let be a network corresponding to ( [ const - perturbed ] ) with ( cf .( [ recast - coup ] ) ) .let denote the evs of and define ( ) = _ 2 ( ) = _i=2^n 1_i .formulas in ( [ alpha - and - rho ] ) generalize algebraic connectivity and ( up to a scaling factor ) total effective resistance of a simple graph to weighted networks corresponding to convergent cps on undirected graphs . by replacing the evs of by those of in ( [ alpha - and - rho ] ), the definitions of and can be extended to convergent cps with normal coupling matrices . for simple networks , there are many results relating algebraic connectivity and total effective resistance and the structure of the graph ( cf .theorems [ exhaustive ] and [ variability ] link structural properties of the network and dynamical performance of the cps . in conclusion of this section, we explore some implications of ( [ rewrite ] ) for stability of ( [ const - perturbed ] ) .to make the role of the network topology in shaping stability properties of the system more transparent , in the remainder of this paper , we consider simple networks , i.e. , . in the context of stability of cps ,the lower bounds for and the upper bounds for are important .let stand for the coboundary matrix of a spanning tree of undirected graph and let be the corresponding cycle incidence matrix .then a : : b : : where stands for the algebraic connectivity of , and denotes the smallest ev of the positive definite matrix .since is a simple network , the coupling matrix taken with a negative sign is the laplacian of , .likewise , is a laplacian of .let denote the evs of .below we use the same notation to denote the evs of other positive definite matrices , e.g. , and . by lemma [ spectra ] ,the second ev of , , coincides with the smallest ev of below , we will use the following observations .a : : the sets of nonzero evs of two symmetric matrices and coincide . since the former is a full rank matrix , the spectrum of consists of nonzero evs of .in particular , _1(hh^)=(g ) .b : : the evs of and those of coincide . using the variational characterization of the evs of symmetric matrices ( cf . ) and the observations * a * and * b * above , we have hence , likewise , a symmetric argument yields lemma [ one - more ] shows respective contributions of the spectral properties of the spanning tree and those of the cycle subspace to the algebraic connectivity and effective resistance of . 
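the identity expressing the coupling matrix through the spanning-tree coboundary matrix and the cycle incidence matrix, the positive-definiteness criterion for convergence, and the quantities alpha and rho can all be checked on a small example. the sketch below does this for a five-vertex graph whose spanning tree is a path and which has one negative (non-cooperative) chord conductance; the specific conductance values are arbitrary and serve only as an illustration.

```python
import numpy as np

n = 5
# edges ordered so that the first n-1 = 4 edges form a spanning tree (a path),
# followed by two chords, each of which closes a fundamental cycle
tree_edges  = [(0, 1), (1, 2), (2, 3), (3, 4)]
chord_edges = [(0, 2), (1, 4)]
edges = tree_edges + chord_edges

def coboundary(edge_list, n):
    H = np.zeros((len(edge_list), n))
    for k, (i, j) in enumerate(edge_list):   # +1 at the head, -1 at the tail
        H[k, j], H[k, i] = 1.0, -1.0
    return H

H      = coboundary(edges, n)
H_tree = H[:len(tree_edges)]
H_chrd = H[len(tree_edges):]

# cycle incidence matrix Q: each fundamental cycle (q_k, e_k) lies in ker H^T,
# hence H_tree^T q_k = -h_chord_k and Q has entries in {0, +1, -1}
Q = -np.linalg.solve(H_tree @ H_tree.T, H_tree @ H_chrd.T).T
assert np.allclose(np.abs(Q[np.abs(Q) > 1e-9]), 1.0)

# conductances: positive on the tree, one negative chord (non-cooperative link)
C1 = np.diag([1.0, 2.0, 1.0, 1.5])
C2 = np.diag([0.8, -0.3])

M = C1 + Q.T @ C2 @ Q                       # convergence criterion: M must be positive definite
C = np.block([[C1, np.zeros((4, 2))], [np.zeros((2, 4)), C2]])
D = -H.T @ C @ H
assert np.allclose(D, -H_tree.T @ M @ H_tree)   # the identity used in the text

lam = np.sort(np.linalg.eigvalsh(-D))[1:]       # nonzero eigenvalues of -D
print("M positive definite:", bool(np.all(np.linalg.eigvalsh(M) > 0)))
print("alpha (generalized algebraic connectivity):", round(lam[0], 4))
print("rho   (sum of reciprocal eigenvalues)     :", round(np.sum(1.0 / lam), 4))
```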
in this respect , it is of interest to study the spectral properties of , in particular , its smallest ev and the trace .another motivation for studying comes from the following lemma . under assumptions of lemma[ one - more ] , solutions of cp ( [ const - perturbed ] ) satisfy _ t |hx(t)|^2 = ^22(g , g),(g , g)=(i_n-1+q^q)^-1 .the reduced equation for has the following form y = dy+h w , where d= h d h^+ . using and ( [ part ] ) , we rewrite ( [ dhatt ] ) as follows d= h d h^+ = -h h^h h^(h h^)^-1=-h h^(i_n-1+q^q ) . by applying the argument used in the proof of theorem [ variability ] to the reduced equation ( [ redd ] ), we obtain _x(t)|^2 = ^2 2 \{h h^(d)^-1 } = ^22 ( g , g ) . the combination of ( [ new - lim ] ) and ( [ dhatt ] ) yields ( [ kappa ] ) .+ the following lemma provides the graph - theoretic interpretation of .let be a connected graph and be a spanning tree of .a : : if is a tree then ( g , g)=n-1 .b : : otherwise , denote the corank of by and let be the system of fundamental cycles corresponding to .+ b.1 ; ; denote = 1n-1_k=1^c ( |o_k|-1 ). then 1 , b.2 ; ; if then 1-cn-1(1 - 1 ) 1 , where = _ k \{|o_k| + _ lk |o_ko_l|}. b.3 ; ; if ] , denote the evs of : by the arithmetic - harmonic means inequality , we have ( 1n-1^n-1_i=1_i)^-1^n-1_i=11_i1 .the double inequality in ( [ cool ] ) follows from ( [ harmonic ] ) by noting that and b.2 : : since , by the interlacing theorem ( cf .theorem 4.3.4 ) , we have 1_k(i_n-1+q^q)_k+c(i_n-1)=1 , k. for , we use the weyl s theorem to obtain 1_k(i_n-1+q^q)1+_n-1(q^q)=1+_c(qq^ ) . using ( [ dot ] ) , by the gershgorin s theorem, we further have 1+_c(qq^)_k \{|o_k| + _ lk |o_ko_l|}. the combination of ( [ weyl ] ) and ( [ gersh ] ) yields b.3 : : since each cycle contains at least two edges from the spanning tree , then the number of disjoint cycles can not exceed the integer part of . in particular , . + by ( [ dot ] ) , because the cycles are disjoint .further , the nonzero eigenvalues of and coincide .thus , _k((i_n-1+q^q)^-1)=\ { + cc 1 , & k , + | o_k+c+1-n|^-1 , & n - ckn-1 , + . by plugging ( [ sp ] ) in ( [ kappa ] ) ,we obtain ( [ disjoint ] ) .estimates of in lemma [ cycles ] , combined with the estimates in lemmas [ one - more ] and [ rephrase ] , show how stochastic stability of cps depends on the geometric properties of the cycle subspace associate with the graph , such as the first betti number ( cf .( [ c - small ] ) ) and the length and the mutual position the fundamental cycles ( cf .( [ weight ] ) , ( [ cool ] ) , ( [ interposition ] ) , ( [ disjoint ] ) ) . in particular , from ( [ kappa ] ) and the estimates in lemma [ cycles ]one can see how the changes of the graph of the network , which do not affect a spanning tree , impact stochastic stability of the cp .likewise , by combining the statements in lemma [ cycles ] with the following estimate of the total effective resistance ( cf .lemma [ one - more ] ) ( ) ( g)(g , g ) , one can see how the properties of the spanning tree and the corresponding fundamental cycles contribute to the stochastic stability of the cp . *in the previous section , we derived several quantitative estimates characterizing convergence and stochastic stability of cps . in this section , we discuss two examples illustrating how different structural features of the graph shape the dynamical properties of cps . in the first pair of examples , we consider graphs of extreme degrees : vs. . 
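before turning to these examples, a small numerical aside: the quantity kappa(g, g~) = tr((i_{n-1} + q^t q)^{-1}) and the elementary bounds around it can be evaluated directly. the sketch below is my own illustration, with the petersen graph as a stand-in and with (n-1)/(1 + o_bar) <= kappa <= n-1, o_bar = (1/(n-1)) sum_k (|o_k| - 1), as my reading of estimate (b.1) above.

```python
# Sketch: kappa(G, G~) = tr((I + Q^T Q)^{-1}) for a graph G and spanning tree G~,
# checked against (n-1)/(1 + o_bar) <= kappa <= n-1, where
# o_bar = (1/(n-1)) * sum_k (|O_k| - 1)  -- my reading of estimate (b.1) in the text.
import numpy as np
import networkx as nx

def coboundary(edge_list, n):
    H = np.zeros((len(edge_list), n))
    for i, (u, v) in enumerate(edge_list):
        H[i, v], H[i, u] = 1.0, -1.0
    return H

def kappa_and_bounds(G, T):
    n = G.number_of_nodes()
    chords = [e for e in G.edges() if not T.has_edge(*e)]
    Ht = coboundary(list(T.edges()), n)
    Hc = coboundary(chords, n)
    Q = np.linalg.lstsq(Ht.T, -Hc.T, rcond=None)[0].T        # Q H_tree = -H_chord
    kappa = np.trace(np.linalg.inv(np.eye(n - 1) + Q.T @ Q))
    o_bar = np.trace(Q.T @ Q) / (n - 1)                      # = (1/(n-1)) sum_k (|O_k| - 1)
    return kappa, (n - 1) / (1 + o_bar), n - 1

G = nx.petersen_graph()                                      # n = 10, m = 15, so c = 6
k, lower, upper = kappa_and_bounds(G, nx.minimum_spanning_tree(G))
print(f"kappa = {k:.3f}, lower bound = {lower:.3f}, upper bound = {upper}")
```

for a tree the matrix q is empty and the same computation returns kappa = n-1, consistent with statement (a) of the lemma above.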
in the second example , we take two networks of equal degree but with disparate connectivity patterns : random vs. symmetric .these examples show that both the degree of the network and its connectivity are important .consider two simple networks supported by a path , , and by a complete graph , ( fig . [ f.0 ] a and b ) .the coupling matrices of the corresponding cps are given by d_p= ( cccccc -1 & 1 & 0 & & 0 & 0 + 1 & -2 & 1 & & 0 & 0 + & & & & & + 0 & 0 & 0 & & 1 & -1 ) d_c= ( cccc -n+1 & 1 & & 1 + 1 & -n+1 & & 1 + & & & + 1 & 1 & & -n+1 ) .the nonzero evs of and are given by _^ 2(i2n)_i+1(k_n)=n , i=1,2 , , n-1 .thus , ( p_n)=4 ^ 2(2n ) , ( k_n)=n , ( k_n)=1-n^-1 . to compute compute , we use the formula for the total effective resistance of a tree ( cf .( 5 ) , ) ( p_n)=n^-1_i=1^n-1 i(n - i)=16 ( n^2 - 1 ) .equation ( [ compare ] ) shows that for , the convergence rate of the cp based on the complete graph is much larger than that based on the path : ( k_n)=n o(n^-2)=(p_n ) .one may be inclined to attribute the disparity in the convergence rates to the fact that the degrees of the underlying graphs ( and , therefore , the total number of edges ) differ substantially . to see to what extent the difference of the total number of the edges or , in electrical terms ,the amount of wire used in the corresponding electrical circuits , can account for the mismatch in the rates of convergence , we scale the coupling matrices in example [ pair ] by the degrees of the corresponding graphs : the algebraic connectivities of the rescaled networks are still far apart : _2(d_c)=1+o(n^-1)o(n^-2)=_2(d_p ) .this shows that the different values of in ( [ complete - vs - path ] ) reflect the distinct patterns of connectivity of these networks .explicit formulas for the evs of the graph laplacian similar to those that we used for the complete graph and the path are available for a few other canonical coupling architectures such as a cycle , a star , an lattice ( see , e.g. , 4.4 ) .explicit examples of graphs with known evs can be used for developing intuition for how the structural properties of the graphs translate to the dynamical properties of the corresponding cps .equation ( [ complete - vs - path ] ) shows that the rate of convergence of cps based on local nearest - neighbor interactions decreases rapidly when the network size grows .therefore , this network architecture is very inefficient for coordination of large groups of agents .the following estimate shows that very slow convergence of the cps based on a path for is not specific to this particular network topology , but is typical for networks with regular connectivity . for graph of degree on vertices , the following inequality holds ^ 2.\ ] ]this means that if the diameter of grows faster than ( as in the case of a path or any lattice ) , the algebraic connectivity and , therefore , the convergence rate of the cp goes to zero as .therefore , regular network topologies such as lattices result in poor performance of cps .in contrast , below we show that a random network with high probability has a much better ( in fact , nearly optimal ) rate of convergence .the algebraic connectivity of the ( rescaled ) complete graph does not decrease as the size of the graph goes to infinity ( cf .( [ scale_path ] ) ) .there is another canonical network architecture , a star ( see fig . 
[ f.0]c ) , whose algebraic connectivity remains unaffected by increasing the size of the network : however , both the complete graph and the star have disadvantages from the cp design viewpoint .cps based on the complete graph are expensive , because they require interconnections .the star uses only edges , but the performance of the entire network critically depends on a single node , a hub , that connects to all other nodes .in addition , update of the information state of the hub requires simultaneous knowledge of the states of all other agents in the network .therefore , neither the complete graph nor the star can be used for distributed consensus algorithms . *a * ideally , one would like to have a family of sparse graphs that behaves like that of complete graphs in the sense that the algebraic connectivity remains bounded from zero uniformly : moreover , the greater the value of the better the convergence of the corresponding cps .such graphs are called ( spectral ) expanders .expanders can be used for producing cps with a guaranteed rate of convergence regardless of the size of the network .there are several known explicit constructions of expanders including celebrated ramanujan graphs ( see for an excellent review of the theory and applications of expanders ) .in addition , random graphs are very good expanders . to explain this important property of random graphs ,let us consider a family of graphs on vertices of fixed degree , i.e. , is an . the following theorem due to alon and boppana yields an ( asymptotic in ) upper bound on . for any , and , ( g_n)g(d)+o_n(1 ) , g(d):=d-2>0 . therefore ,for large , can not exceed more than by a small margin .the following theorem of friedman shows that for a random -graph , is nearly optimal with high probability . for every , \ { ( g_n)g(d)-}=1-o_n(1 ) , where is a family of random -graphs .theorem [ random_graph ] implies that cps based on random graphs exhibit fast convergence even when the number of dynamic agents grows unboundedly .note that for , an -graph is sparse . nonetheless , the cp based on a random -graph possesses the convergence speed that is practically as good as that of the normalized complete graph ( cf .( [ scale_path ] ) ) .therefore , random graphs provide a simple practical way of design of cps that are efficient for coordinating large networks . * a * + * c * in this example , we compare performance of two cps with regular and random connectivity .( a ) : : the former is a cycle on vertices , .each vertex of is connected to ( is even ) of its nearest neighbors from each side ( see fig . [( b ) : : the latter is a bipartite graph on vertices , .the edges are generated using the following algorithm : + 1 .let \to[m] ] ,add edge .repeat step 1 . times . in fig .[ f.2 ] , we present numerical simulations for cps based on graphs in example [ path_and_bipart ] for and .the rates of convergence for these cps are summarized in the following table .table 5.4 [ cols="^,^,^,^,^",options="header " , ] the cp based on regular graphs has a very small rate of convergence already for . as tends to zero .in contrast , random graphs yield rates of convergence with very mild dependence on the size of the network . 
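the contrast between regular and random connectivity summarized in the table is easy to reproduce numerically. the sketch below (not the code behind the paper's figures) compares the algebraic connectivity of a degree-d ring lattice with that of a random d-regular graph and with d - 2*sqrt(d-1), which is how i read the alon-boppana ceiling g(d) quoted above; by friedman's theorem the random graphs should approach this ceiling with high probability.

```python
# Sketch: algebraic connectivity of a degree-d ring lattice versus a random
# d-regular graph, compared with d - 2*sqrt(d-1) (my reading of the Alon--Boppana
# ceiling g(d) in the theorems above). Not the simulation code used in the paper.
import numpy as np
import networkx as nx

def algebraic_connectivity(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(L))[1]

d = 4
ceiling = d - 2 * np.sqrt(d - 1)
print(f"d = {d}, Alon-Boppana ceiling g(d) ~ {ceiling:.3f}")
for n in [50, 200, 800]:
    ring = nx.watts_strogatz_graph(n, d, p=0.0, seed=0)   # ring lattice of degree d
    rand = nx.random_regular_graph(d, n, seed=0)
    print(f"n = {n:4d}:  alpha(ring) = {algebraic_connectivity(ring):.4f}   "
          f"alpha(random d-regular) = {algebraic_connectivity(rand):.4f}")
```

as n grows, the lattice value decays toward zero while the random-regular value stays essentially flat near the ceiling, consistent with the rates reported in the table.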
for the values of used in this experiment , ( ) are close to the optimal limiting rate .the difference of the convergence rates is clearly seen in the numerical simulations of the corresponding cps ( see fig .[ f.2 ] a , b ) .the trajectories generated by cps with random connections converge to the consensus subspace faster .we also conducted numerical experiments with randomly perturbed equation ( [ perturbed ] ) to compare the robustness to noise of the random and regular cps .the cp on the random graph is more stable to random perturbations than the one on the regular graph ( see fig . [ f.2]c , d ) .in this paper , we presented a unified approach to studying convergence and stochastic stability of a large class of cps including cps on weighted directed graphs ; cps with both positive and negative conductances , time - dependent cps , and those under stochastic forcing .we derived analytical estimates characterizing convergence of cps and their stability to random perturbations .our analysis shows how spectral and structural properties of the graph of the network contribute to stability of the corresponding cp .in particular , it suggests that the geometry of the cycle subspace associated with the graph of the cp plays an important role in shaping its stability .further , we highlighted the advantages of using expanders and , in particular , random graphs , for cp design .the results of this paper elucidate the link between the structural properties of the graphs and dynamical performance of cps .the theory of cps is closely related to the theory of synchronization . with minimal modifications ,the results of the present study carry over to a variety coupled one - dimensional dynamical systems ranging from the models of power networks to neuronal networks and drift - diffusion models of decision making . moreover ,the method of this paper naturally fits in into a general scheme of analysis of synchronization in coupled systems of multidimensional nonlinear oscillators .an interested reader is referred to for related techniques and applications . 0.2 cm * acknowledgements . *the author thanks dmitry kaliuzhnyi - verbovetskyi for useful discussions and the referees for careful reading of the manuscript and many useful suggestions .anatolii grinshpan provided comments on an earlier version of this paper , including a more concise proof of lemma [ spectra ] .part of this work was done during sabbatical leave at program of applied and computational mathematics of princeton university .this work was supported in part by the nsf award dms 1109367 .g. margulis , explicit group - theoretic constructions of combinatorial schemes and their applications in the construction of expanders and concentrators .( russian ) problemy peredachi informatsii 24 ( 1988 ) , no .1 , 5160 ; ( english translation in problems inform .transmission 24 ( 1988 ) , no .1 , 3946 ) .i. poulakakis , l. scardovi , and n. leonard , coupled stochastic differential equations and collective decision making in the alternative forced choice task , _ proceedings of the american control conference _ , 2010 .
a unified approach to studying convergence and stochastic stability of continuous-time consensus protocols (cps) is presented in this work. our method applies to networks with directed information flow; both cooperative and noncooperative interactions; networks under weak stochastic forcing; and those whose topology and strength of connections may vary in time. the graph-theoretic interpretation of the analytical results is emphasized. we show how spectral properties, such as algebraic connectivity and total effective resistance, as well as geometric properties, such as the dimension and the structure of the cycle subspace of the underlying graph, shape stability of the corresponding cps. in addition, we explore certain implications of spectral graph theory for cp design. in particular, we point out that expanders, sparse highly connected graphs, generate cps whose performance remains uniformly high when the size of the network grows unboundedly. similarly, we highlight the benefits of using random rather than regular network topologies for cp design. we illustrate these observations with numerical examples and refer to the relevant graph-theoretic results.
* key words. * consensus protocol, dynamical network, synchronization, robustness to noise, algebraic connectivity, effective resistance, expander, random graph
* ams subject classifications. * 34d06, 93e15, 94c15
the sample covariance matrix is fundamental to multivariate statistics . when the population size is not large and for a sufficient number of samples , the sample covariance matrix is a good approximate of the population covariance matrix .however when the population size is large and comparable with the sample size , as is in many contemporary data , it is known that the sample covariance matrix is no longer a good approximation to the covariance matrix .the marchenko - pastur theorem states that with the sample size , the population size , as such that , the eigenvalues , , of the sample covariance matrix of normalized i.i.d .gaussian samples satisfy for any real almost surely where and and when .when , there is an additional dirac measure at of mass .moreover , there are no stray eigenvalues in the sense that the top and bottom eigenvalues converge to the edges of the support of : almost surely and almost surely ( when ) .one can extract from this some information of the population covariance matrix even though the sample covariance matrix is not a good approximate .for example , if there are non - zero eigenvalues of the sample covariance matrix well separated from the rest of the eigenvalues , one finds , assuming the gaussian entries , that the samples are not i.i.d .. there are indeed many cases in which a few eigenvalues of the sample covariance matrix are separated from the rest of the eigenvalues , the latter being packed together as in the support of the marchenko - pastur function .the examples include speech recognition , mathematical finance , , , wireless communication , physics of mixture , and data analysis and statistical learning . as a possible explanation for such features , johnstone proposed the ` spiked population model ' where all but finitely many eigenvalues of the population covariance matrix are the same ,say equal to .the question is how the eigenvalues of the sample covariance matrix would depend on the non - unit population eigenvalues as .it is known that the marchenko - pastur result still holds for the spiked model .but and are not guaranteed and some of the eigenvalues are not necessarily in the support of . for example , consider the case when the population covariance matrix has one non - unit eigenvalue , denoted by . if is close to , one would expect that as the dimension becomes large , the population covariance matrix would be close to a large identity matrix , and hence would have little effect on the eigenvalues of the sample covariance matrix . on the other hand , if is much bigger than , then even if becomes large , might still pull up the eigenvalues of the sample covariance matrix .how big should be in order to have any effect , how many eigenvalues of the sample covariance matrix would be pulled up and exactly where would the pulled - up eigenvalues be ?we will see in the results below that the answers are ( where ) , one eigenvalue at most , and , respectively . for _ complex gauassian _ samples , the papers study the _largest _ eigenvalue of the sample covariance matrix .the authors determine the transition behavior and the limiting distributions are also obtained .the purpose of this paper is a complete study of the spiked model for _ both real and complex samples which are not necessarily gaussian_. 
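the dichotomy described above is easy to see in simulation. the following sketch (real gaussian samples, c = m/n; not code from the paper) checks the marchenko-pastur edges in the null case and then plants a single spiked population eigenvalue ell: the largest sample eigenvalue stays near the bulk edge (1 + sqrt(c))^2 for small ell and separates to approximately ell + c*ell/(ell - 1) once ell exceeds 1 + sqrt(c), the critical value and limit stated in theorem [ thm1 ] below.

```python
# Sketch (not from the paper): Marchenko--Pastur edges for the null case and the
# phase transition for a single spiked population eigenvalue ell, with real
# Gaussian samples and c = M/N. The limits ell + c*ell/(ell-1) for ell > 1+sqrt(c)
# and (1+sqrt(c))^2 otherwise are the values stated in theorem [ thm1 ].
import numpy as np

rng = np.random.default_rng(1)
M, N = 400, 1600                       # population size M, sample size N
c = M / N                              # c = 1/4, critical value 1 + sqrt(c) = 1.5
edge = (1 + np.sqrt(c)) ** 2

def largest_sample_eigenvalue(ell):
    """Largest eigenvalue of S = (1/N) T^{1/2} X X^* T^{1/2} with one spike ell."""
    sqrt_T = np.ones(M)
    sqrt_T[0] = np.sqrt(ell)           # diagonal T^{1/2}: one eigenvalue ell, the rest 1
    X = sqrt_T[:, None] * rng.standard_normal((M, N))
    S = X @ X.T / N
    return np.linalg.eigvalsh(S)[-1]

print(f"null case: lambda_max = {largest_sample_eigenvalue(1.0):.3f}, "
      f"MP edge (1+sqrt(c))^2 = {edge:.3f}")
for ell in [1.2, 1.5, 2.0, 3.0]:
    lam = largest_sample_eigenvalue(ell)
    predicted = ell + c * ell / (ell - 1) if ell > 1 + np.sqrt(c) else edge
    print(f"ell = {ell:.1f}: lambda_max = {lam:.3f}, predicted limit = {predicted:.3f}")
```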
we obtain almost sure limit results .a general study of ` non - null ' covariance matrices was done in .we will show in this paper how to extract the desired results from the work of .while this paper was being prepared , the authors learned that debashis paul was also studying the spiked model independently at the same time , which has some overlap with this work .paul considers the real gaussian samples for , and obtains the almost sure limits as in and below for large sample eigenvalues .moreover , when all non - unit population eigenvalues are simple , the limiting distribution is found to be gaussian ( see subsection [ sec : discussion ] below for more detail ) . on the other hand , our paper ( i )is concerned with more general samples , not necessarily gaussian , ( ii ) includes all choices of and ( iii ) studies both large and small sample eigenvalues .we remark that a complete study of limiting distributions is still an open question .let be a fixed non - negative definite hermitian matrix .let , , be independent and identically distributed complex valued random variables satisfying and set , , .we take the sampled vectors to be the columns of , where is an hermitian square root of .hence is the population covariance matrix .of course , not all random vectors are realized as such , but this model is still very general .when are i.i.d ( real or complex ) gaussian , the model becomes the gaussian samples with population covariance matrix .outside the gaussian case we see that these vectors cover a broad range of random vectors , completely real or complex , with arbitrary population covariance matrix .let be the sample covariance matrix , where denotes conjugate transpose .denote the eigenvalues of by : for some unitary matrix , for definiteness , we order the eigenvalues as .let be fixed real numbers for some fixed , which is independent of and .let be fixed non - negative integers and set , which are also independent of and .we assume that all the eigenvalues of are except for , say , the first eigenvalues .this is the ` spiked population model ' proposed in .let the first eigenvalues be equal to with multiplicity , respectively : for some unitary matrix , we set .[ thm1 ] assume that and such that for a constant .let be the number of s such that , and let be the number of s such that .then the following holds .* for each , almost surely . * + almost surely .* + almost surely ( recall ) .* for each , almost surely . therefore , when ,in order for a population eigenvalue to contribute a non - trivial effect to the eigenvalues of the sample covariance matrix , it should sufficiently big ( larger than ) or sufficiently small ( less than ) .as an example , when , by denoting the only non - unit eigenvalue by , the largest sample eigenvalue satisfies almost surely .when , by denoting the two non - unit eigenvalues by , the largest sample eigenvalue satisfies almost surely .the results and are also independently obtained in under the assumption that the samples are gaussian .[ thm2 ] assume that and such that for a constant . let be the number of s such that . then the following holds .* for each , almost surely . * + almost surely . * + almost surely .* for all , thus , unlike the case of , small eigenvalues of do not affect the eigenvalues of when .[ thm3 ] assume that and such that let be the number of s such that . then the following holds .* for each , almost surely . 
* + almost surely .* + almost surely .as mentioned earlier , the limiting density of the eigenvalues of spiked population models is given by the marchenko - pastur theorem as in the identity population matrix case , and for the top eigenvalue in the _ complex gaussian _ case , the results theorem 1.1 and theorem 1.3 were first obtained in .the paper ( see section 6 ) contains an interesting heuristic argument for the critical value and the value for above : they come from a competition between a 1-dimensional last passage time and a 2-dimensional last passage time .it would be interesting to have such a heuristic reasoning for the general case .when is the identity matrix ( the ` null case ' ) , under the gaussian assumption , the limiting distribution for the largest eigenvalue is obtained for the complex case in and for the real case in . shows the gaussian assumption is not necessary when .the limiting distributions are the tracy - widom distributions in the random matrix theory in mathematical physics . for the spiked model with complex gaussian samples when , the limiting distributions of the largest eigenvalue are obtained in .the paper determines the limiting distribution of for complete choices of the largest population eigenvalue and its multiplicity : the distribution is ( i ) the tracy - widom distribution when , ( ii ) certain generalizations of the tracy - widom distribution ( see also ) when , and ( iii ) the gaussian distribution ( ) and its generalization ( , the gaussian unitary ensemble ) when . for real gaussian samples showed that when , and , the limiting distribution of , , is gaussian .it is an interesting open question to determine the limiting distribution for the general case of real samples .see section 1.3 of for a conjecture for the scaling .we include several plots for the case when and there are three non - unit population eigenvalues given by , and ( of multiplicity each ) . in this case , the critical values of the eigenvalues are and . hence theoretically we expect that three sample covariance eigenvalues of values and are away from the interval \simeq [ 0.08578 , 2.91422] ] with lies in an open interval outside the support of for all large .now we will state the main result of which we need for our analysis .it is easy to check that converges to some distribution function .then is the almost sure limit of the empirical spectral distribution of and is the almost sure limit of the empirical spectral distribution of .the function is not the empirical distribution of .the distribution function is defined only through .moreover , the stieltjes transform of , is invertible , with the inverse given by on the other hand , given , is determined from and .note that is well - defined not only on but also up to the real line outside and its inverse exists on .if ] . 
given an interval ] is not contained in ] is an interval satisfying condition ( f ) .since and for any , we see that \subset & \bigl(-\infty , 0 \bigr ) \cup \biggl(0 , \alpha_m + \frac{c \alpha_m}{\alpha_m-1}\biggr ) \cup ( \alpha_{m } + \frac{c \alpha_{m}}{\alpha_{m}-1 } , \alpha_{m-1 } + \frac{c \alpha_{m-1}}{\alpha_{m-1}-1 } \biggr ) \\ & \cup \dots \cup \biggl ( \alpha_{m_1 + 1 } + \frac{c \alpha_{m_1 + 1}}{\alpha_{m_1 + 1}-1 } , ( 1-\sqrt{c})^2 \biggr ) \\ & \cup \biggl((1+\sqrt{c})^2 , \alpha_{m_0 } + \frac{c \alpha_{m_0}}{\alpha_{m_0}-1 }\biggr ) \cup \dots \cup \biggl ( \alpha_2 + \frac{c \alpha_2}{\alpha_2 - 1 } , \alpha_1 + \frac{c \alpha_1}{\alpha_1 - 1 } \biggr ) \cup \biggl ( \alpha_1 + \frac{c \alpha_1}{\alpha_1 - 1 } , \infty\biggr ) .\end{split}\ ] ] on the other hand , hence \subset \operatorname{supp}(f_\infty)^c ] .first fix .take = \bigl[\alpha_j + \frac{c \alpha_j}{\alpha_j-1}+\epsilon , \alpha_{j-1 } + \frac{c \alpha_{j-1}}{\alpha_{j-1}-1}-\epsilon \bigr]\ ] ] for an arbitrary fixed .( here . ) from , we see that \subset ( z_{j , + } ^{(n ) } , z_{j-1,-}^{(n)})\ ] ] for all large , and hence condition ( f ) is satisfied using proposition [ prop : suppfp ] .set ( when , . ) for given by , but and hence the condition is satisfied .therefore is defined to satisfy the condition .proposition [ prop : suppfp ] now implies that this yields that , , which implies for , and for the second choice of ] , we obtain . as the third and forth choices of ] satisfying condition ( f ) of proposition [ prop : mainprop ] is contained in which is a subset of we first observe a monotonicity of in .let , and let . when , it is clear that therefore , if the ordered eigenvalues of are denoted by , the min - max principle implies that for all .on the other hand , take $ ] for .we first assume .then as , for sufficiently small , and hence . by applying theorem [ thm1 ] and using , we obtain the following :
we consider a spiked population model, proposed by johnstone, in which all population eigenvalues are equal to one except for a few fixed eigenvalues. the question is to determine how the sample eigenvalues depend on the non-unit population eigenvalues when both the sample size and the population size become large. this paper completely determines the almost sure limits of the sample eigenvalues for a general class of samples.
although the precise definition of handedness is often debated , it is widely accepted that roughly one in ten humans are left - handed . since prehistoric times , minor cultural and geographical variations in this percentage have been observed , but every historical population has shown the same strong bias toward right - handedness .both genetic and environmental factors seem to contribute to handedness for individuals ; nonetheless , individual handedness does not necessarily lead to species - level handedness .it is well - established that for an individual , lateralization can be advantageous : for example , it allows for specialization of brain function which may lead to enhanced cognition through parallel information processing . at the species level , however , the advantage of lateralization is not well understood .negative frequency - dependent selection alone , a primary mechanism by which polymorphisms are maintained , can only produce a balanced distribution of left- and right - handers due to the symmetry inherent in handedness .there have been various attempts to account for species - level asymmetry with the use of `` fitness functions '' .we propose a different approach to the problem .we define a function representing the mean probability that a right - handed individual ( male or female ) bears left - handed offspring in a given time period . a minimal model for the evolution of the societal fraction left - handed in terms of this arbitrary frequency - dependent transition rate given by we assume symmetry between right- and left - handers ( supplementary material section [ sec : the symmetry of handedness ] ) , so that we may write to obtain this function incorporates frequency - dependent selection effects , and can be approximated given a biological model for inheritance ( supplementary material section [ sec : phenotypic model for population dynamics ] , figure [ fig : ldotplot ] ) . in a purely competitive society, it would be natural to assume to be a monotonically decreasing function of .when left - handers are scarce , they have an advantage in physical confrontations due to their greater experience against right - handers and the right - handers lack of experience against them .as their numbers grow , that advantage weakens .however , in a purely cooperative society , physical confrontations would not exist , and all individuals would tend to the same handedness to increase cooperative efficiency and eliminate the fitness disadvantage of the minority handedness .( the modern presence of a higher accidental death rate for left - handers demonstrates that a fitness differential persists today . )thus would increase monotonically with . for a system involving both cooperative and competitive interactions, we therefore write where represents the degree of cooperation in interactions and the monotonicity properties of each component function are as given above . for physically reasonable choices of these functions ,there may exist one , three , or five fixed points in this system , depending on the value of .on . 
( b ) nonmonotonic on .insets : and its component functions , .,width=317 ] figure [ fig1 ] shows the typical positions of stable and unstable equilibria for equation , where and have been chosen to be generic sigmoid functions ( sigmoid functions arise naturally in models where separate fitness functions exist for left- and right - handers see supplementary material section [ sec : relating fitness functions to probabilistic transition rates ] ) .when the degree of cooperation is less than a critical threshold , the only stable equilibrium is : a 50/50 split between left - hand and right - hand dominant individuals .this is consistent with studies showing individual but not population - level bias in various species .when the degree of cooperation exceeds a critical threshold , two new stable equilibria appear as a result of either a subcritical or supercritical pitchfork bifurcation ( depending on the exact form of the function ) .these equilibria indicate population - level lateralization as seen in human society .the fraction right- or left - handed will depend on the exact value of the cooperation parameter .there is a qualitative difference between the two situations depicted in figure [ fig1 ] , a difference which holds for a broad class of sigmoid functions . in the case of the subcritical pitchfork ( figure [ fig1]a ) , no weak population lateralizationshould ever be observed since equilibria near 50% are unstable ; however , in the case of the supercritical pitchfork ( figure [ fig1]b ) , population lateralization near 50% will be possible , though only stable for a small range of values of .both suggest that weak population lateralization ( fractions .17ex ) should be rare in the natural world , while indicating that a high degree of cooperation may be responsible for the strong lateralization ( fractions .17ex ) observed in some social animals ( e.g. , humans , parrots ) .thus far we have attempted to describe the evolution of the fraction left - handed in populations of lateralized individuals . ideally , we would compare predicted equilibria of equation to data from animal populations exhibiting varying degrees of cooperation . for most species , however , quantifying the degree of cooperation is difficult , and data on population - level lateralization is scarce and sometimes contradictory .this lack of information about the natural world leads us to examine the proxy situation of athletics , where data on handedness and cooperation is more easily accessible . to explain the observed fraction of athletes left - handed , it is important to model the selection process because athletics , unlike evolution , should not cause changes in the population s background rate of laterality .we treat athletic skill s as a normally distributed random variable , and assume that minority handedness creates a frequency - dependent shift that modifies the randomly distributed skill .we then model an ideal selection process as choosing the most skilled players from a population of interested individuals . such a model ( derived in detail in supplementary material section [ sec : derivation of athletic selection model ] ) predicts that the professional fraction left - handed will depend on the fraction selected , and is determined implicitly by the equation where is the background rate of left - handedness , erfc is the complementary error function , is the normalized cut - off in skill level for selection , and is the normalized skill advantage for left - handers . 
here represents the fraction of the population that would be left - handed in a world consisting only of interactions through the sport under consideration .its value is determined from equation , with a choice of parameter appropriate for the sport under consideration ( is reinterpreted as the mean probability that a right - handed player is replaced by a left - hander in a given time period ) .note that must lie between and : with very high selectivity equation implies that , and with very low selectivity .baseball ( mlb ) , ( men s ) , ( men s ) , ( women s ) , ( quarterbacks , nfl ) , ( pga ) , ( lpga ) , ( right wings , nhl ) , ( left wings , nhl ) , ( other , nhl ) , tennis ( men s ) , tennis ( women s ) .dashed line represents perfect agreement between predicted and observed values .vertical error bars correspond to 95% confidence intervals ( ) ; horizontal error bars correspond to predictions using plus or minus one order of magnitude in , the primary source of uncertainty .left - handed advantage where and both and vary from sport to sport ( see table [ datatable ] and supplementary material section [ sec : data summary]),width=317 ] figure [ fig2 ] shows our application of equation to various professional sports . to reduce arbitrary free parameters , we assume that the cooperativity is close to zero for physically competitive sports and one for sports ( e.g. , golf ) that require lateralized equipment or strategy .figure [ fig1 ] then implies that the ideal equilibrium fraction left - handed will be 50% when is close to zero , and will be either 0% or 100% when . the predictions for figure [ fig2 ]were made by varying a single free parameter , the constant of proportionality for the frequency - dependent skill advantage . to avoid over - fitting, we took this to be a constant across all sports ; given sufficient data , different values of could be estimated independently for each sport .the fraction selected was estimated from the ratio of professional athletes to the number of frequent participants for each sport ( see supplementary material section [ sec : data summary ] for details ) . for the sport of baseball , the great abundance of historical statistical information allows us to validate our proposed selection mechanism .to do so , we use our model to predict the cumulative fraction left - handed as a function of rank , then compare to data . in sports where highly - rated players interact with other highly - rated players preferentially ( e.g. , boxing ), we expect the left - handed advantage to be rank - dependent ( i.e. , depending on the fraction left - handed at rank ) .however , within professional baseball leagues , all players interact with all other players at nearly the same rate , so the left - handed advantage should be independent of rank , i.e. 
, a constant .this leads us ( see supplementary material section [ sec : derivation of athletic selection model ] for derivation ) to the equation represents the left - handed fraction of all u.s .born players that finished a season ranked in the top by total hits .the left - handed advantage was computed by finding the least - squares best fit .this value differs slightly from the value used for baseball in figure [ fig2 ] ( ) suggesting that , in practice , the proportionality constant may vary from sport to sport ., width=317 ] figure [ fig3 ] shows the predictions of equation as applied to the top - ranked baseball players from 1871 to 2009 .only one free parameter was varied : the left - handed advantage .all other parameters were constrained by known data .the surprisingly good fit to this nontrivial curve can be seen as supporting evidence for the selection model .together with the accuracy of predictions in figure [ fig2 ] , this supports the conclusion that the equilibria of equation are indeed relevant to real - world lateralized systems .despite the good agreement of our predictions with real - world data , we acknowledge that there are limitations in reducing a complex adaptive system to a simple mathematical model .our model includes undetermined functions that would be difficult to measure precisely ( although we found that qualitative predictions are robust see supplementary material section [ sec : parameter sensitivity analysis for probabilistic model ] ) .however , they can be roughly approximated from available data and may be easier to estimate than fitness functions proposed in other models .sports data may not be completely analogous to data from the natural world ; hence , quantitative analysis of lateralization in social animal groups may be a fruitful line of future research . given the limited data on population - level lateral bias in the natural world , we feel that analysis of athletics provides new insight into the evolutionary origins of handedness .our model predictions match the observed distribution of handedness in baseball with just a single free parameter .when applied to 12 groups of elite athletes , the same model does a good job of estimating the fraction left - handed in each , suggesting that the proposed balance between cooperation and competition accurately predicts the ideal equilibrium distribution of handedness .our model is general enough to be applied to any species of animal , and may also have use in understanding population - level lateralized adaptations other than handedness , both physical and behavioural .the model we have presented is the first to take a dynamical systems approach to the problem of laterality .it allows for the prediction of conditions under which population - level lateral bias can be expected to emerge in the animal world and its evolution over time .we exploit the connection between natural selection and selection in professional sports by introducing a novel data set on handedness among athletes , demonstrating a clear relationship between cooperative social behaviour and population - level lateral bias .this work was funded by northwestern university and the james s. mcdonnell foundation .the authors thank r. n. gutenkunst and r. j. wiener for useful correspondence . 3 1996 frequency - dependent maintenance of left handedness in humans .b _ * 263 * , 627 - 1633 .( doi 10.1098/rspb.1996.0238 . 
)2011 more than 500,000 years of right - handedness in europe ._ laterality _ 1 - 19 .( doi 10.1080/1357650x.2010.529451 . )1977 fifty centuries of right - handedness : the historical record ._ science _ * 198 * , 631 - 632 .( doi 10.1126/science.335510 . )2005 handedness , homicide and negative frequency - dependent selection .b _ * 272 * , 25 - 28 .( doi 10.1098/rspb.2004.2926 ) 2009 why are some people left - handed ?an evolutionary perspective .b _ * 364 * , 881 - 894 .( doi 10.1098/rstb.2008.0235 ) 1981 secular variation in handedness over ninety years .neuropsychologia * 19 * , 459 - 462 .( doi 10.1016/0028 - 3932(81)90076 - 2 ) 1991 the inheritance of left - handedness . in _ciba foundation symposium _ * 162 * ( eds .g. r. bock & j. marsh ) , pp .251 - 267 .chichester , uk : john wiley & sons , ltd .( doi 10.1002/9780470514160.ch15 ) 2007 lrrtm1 on chromosome 2p12 is a maternally suppressed gene that is associated paternally with handedness and schizophrenia . _psychiatry _ * 12 * , 1129 - 1139 .( doi 10.1038/sj.mp.4002053 ) 2009 the prehistory of handedness : archaeological data and comparative ethology .evol . _ * 57 * , 411 - 419 .( doi 10.1016/j.jhevol.2009.02.012 ) 2009 intraspecific competition and coordination in the evolution of lateralization .b _ * 364 * , 861 - 866 .( doi 10.1098/rstb.2008.0227 ) 2009 laterality enhances cognition in australian parrots .b _ * 276 * , 4155 - 4162 .( doi 10.1098/rspb.2009.1397 ) 1977 the mammalian brain and the adaptive advantage of cerebral asymmetry .sci . _ * 299 * , 264 - 272 .( doi 10.1111/j.1749 - 6632.1977.tb41913.x ) 2004 , advantages of having a lateralized brain .b _ * 271 * , s420-s422 .( doi 10.1098/rsbl.2004.0200 ) 2005 survival with an asymmetrical brain : advantages and disadvantages of cerebral lateralization .brain sci ._ * 28 * , 575 - 589 .( doi 10.1017/s0140525x05000105 ) 1974 frequency - dependent selection .ecol syst ._ * 5 * , 115 - 138 .( doi 10.1146/annurev.es.05.110174.000555 ) 2005 maintenance of handedness polymorphism in humans : a frequency - dependent selection model ._ j. theor .* 235 * , 85 - 93 .( doi 10.1016/j.jtbi.2004.12.021 ) 1988 do right - handers live longer ?_ nature _ * 333 * , 213 .( doi 10.1038/333213b0 ) 1991 left - handedness : a marker for decreased survival fitness ._ psychol ._ * 109 * , 90 - 106 .( doi 10.1037/0033 - 2909.109.1.90 ) 1993 evidence for longevity differences between left handed and right handed men : an archival study of cricketers ._ j. epidemiol .community health _ * 47 * , 206 - 209 .( doi 10.1136/jech.47.3.206 ) 2002 comparative vertebrate lateralization , ( eds . l. j. rogers & r. j. andrew ) , west nyack , ny : cambridge university press .1989 footedness in parrots : three centuries of research , theory , and mere surmise .j. psychol . _* 43 * , 369 - 396 .( doi 10.1037/h0084228 ) 2011 the lahman baseball database .http://www.baseball1.com .2011 the world factbook .https://www.cia.gov/library/publications/the-world-factbook/index.html .2009 single sport reports .http://www.sgma.com .2009 - 2010 high school athletics participation survey .there have been various attempts to account for species - level laterality .billiard et al . point out that left - handedness is `` associated with several fitness costs . ''thus the population - level bias can be maintained through a balance between a frequency - dependent fitness function and a constant fitness cost .an alternate model by ghirlanda et al . 
suggests that the combination of `` antagonistic '' and `` synergistic '' interactions and their associated frequency - dependent fitness functions can create an evolutionarily stable equilibrium with an asymmetric ( and non - trivial ) distribution of lateralization . while both models have merit , they disagree on a fundamental question : are left- and right - handedness interchangeable ?in other words , is a mirror image world of 90% left - handers and 10% right - handers equally plausible ?billiard et al. suggest that the fitness costs `` such as lower height and reduced longevity ... are not likely to be frequency - dependent . ''thus these fitness costs break the symmetry and guarantee that only the observed handedness distribution is possible .however , the aforementioned fitness costs have only been observed in a biased population consisting of lateralized individuals .for example , aggleton et al . show that left - handers are more likely to die prematurely , and that this effect is at least partially due to `` increased vulnerability to both accidental death and death during warfare . ''they go on to argue that `` the most likely explanation for the increase in accidental death among the left - handed men concerns their need to cope in a world full of right - handed tools , machines , and instruments . ''clearly if left - handedness were more prevalent than right - handedness , then left - handed tools would also be more common , and , as a result , right - handers instead would experience increased risk of accidental death .thus , it is likely that these fitness costs are frequency - dependent and symmetric .in contrast to billiard et al . , ghirlanda s model assumes that left- and right - handedness are indeed interchangeable .given an initial distribution of 50% left - handers and 50% right - handers , this model predicts that both the observed distribution and its mirror image are equally likely equilibrium outcomes . on this point, our probabilistic model agrees with ghirlanda et al .given the fact that there is no reason to expect that right - handedness is inherently superior to left - handedness from a fitness perspective , we assume that the probabilistic transition rates satisfy the symmetry condition .the probabilistic model described in the main text provides a description of the population dynamics from a top - down perspective .however , it may be more intuitive to consider the dynamics from a bottom - up perspective . herewe develop a model using reproductive fitness arguments on the level of individuals and show that this produces essentially the same result .* iterative model * let us define to be the total population size , and and to be the number of left- and right - handers , respectively .we make the simplifying assumption that , i.e. , there are no ambidextrous individuals . also , define handedness fractions and so that .suppose that in this population , individuals repeatedly pair off and reproduce , adding new individuals to the population in each generation .from an evolutionary perspective , the expected number of offspring that individuals produce should be dependent on their fitness . 
with all other factors being equal ,an individual s fitness should be determined by his or her handedness and the distribution of handedness in the population .thus we define and to be expected number of offspring born to right- and left - handers .also , suppose that a pairing produces left - handed offspring with probability and right - handed offspring with probability , where and represent the dominant hands of the parents .we expect to be a frequency dependent function .thus there are 6 possible reproductive interactions ( we ignore gender effects here for simplicity including them should not qualitatively change the results ) : at each iteration , we suppose that a fraction of current members dies off .then the number of left- and right - handers in the new generation is : +\left[1-d\right]l_n \\ r_{n+1 } & = \left[\frac{b_r(1-\sigma_{rr})r_n^2+(b_r+b_l)(1-\sigma_{rl})r_nl_n+b_l(1-\sigma_{ll})l_n^2}{n_n}\right]+\left[1-d\right]r_n ~.\end{aligned}\ ] ] * continuous model * the iterative perspective is intuitive but has limited predictive capacity .one limitation is that the number of left- and right - handers are only defined at fixed intervals . to remove this obstacle, we transform the discrete model into a continuous model .we set to be the instantaneous birth rate for individuals with handedness and to be the instantaneous death rate .we set and , and let to obtain ordinary differential equations for the evolution of , and .however , these equations are not independent . in fact, we are interested only in the the evolution of , which is governed by - \beta_{\textrm{eff } } l(t)~.\ ] ] in order to analyse this ode assumptions about the functions are needed : a first order assumption is that these should be linear functions of the frequency , and by symmetry , we expect that . according to aggleton et al . , the average lifespan of right - handers is 3.31% longer than their left - handed counterparts , with much of the difference attributable to higher rates of premature death in war and accidents .this indicates that in a society consisting of roughly 90% right - handers , left - handers appear to have a lower fitness .this fitness differential should be reflected in the model s reproductive rates .the overall birth rate in the u.s ., a weighted average of and , is known : . if we assume that then linearity and symmetry allow us to derive expressions for and : from , the observed fractions of left - handed offspring are we expect these parameters to be functions of , but all data is drawn from modern societies where the fraction left - handed is .fortunately , using symmetry arguments , we can obtain additional points : in a population of uniform handedness , one might expect all offspring to inherit the same handedness as their parents .however , in practice the situation is more complex .monozygotic ( identical ) twins often possess discordant handedness .thus , handedness can not be fully determined by genotype . to account for this, most genetic models introduce a random component that partially determines handedness . 
with that motivation , we define to be the probability due to chance that parents with handedness produce left - handed offspring in a population consisting entirely of right - handers .we then obtain : it is unclear exactly what values are appropriate for since no isolated human population consisting entirely of left- or right - handers exists .if , then equilibria at appear and those at and are unstable .this is inconsistent with the observed stable fixed point .we therefore assume must satisfy . for given values , we fit a cubic polynomial to the 4 known points ( known at ) to obtain smooth approximate functions .the resulting dynamical system governed by equation has either 3 or 5 fixed points : ( ) or ( ) where and are stable .for example , if we set , we see an unstable fixed point at in addition to the expected stable fixed points at and .up to this point , we have treated as an unknown parameter . in practice , however , the fraction of the population that is left - handed can be measured .estimates for the value of this parameter depend on the precise method of measurement and definition of left - handedness , but it is generally agreed that this fraction is close to 10% .equation allows us to compute an independent prediction for using only the birth rates and the phenotype ratios of offspring given above ( note : the predicted does not depend on the choice of although the stability of the fixed point does ) .this results in a predicted percent left - handed of , consistent with the measured value . by presenting this model of phenotype evolution, we wish to emphasize the generality of the probabilistic model presented in the main text . for appropriate choices of functions , equation in the main textcan be made to agree nearly identically with model [ phenotypic_model ] above , as demonstrated in figure [ fig : ldotplot ] .[ ] [ ] [ ] [ ] [ ] [ ] [ r][r] [ r][r] [ r][r] [ b][b][2][0 ] [ ] [ ] [ 2][-90] [ % /yr ] generated using generic sigmoid in the probabilistic model from equation of the main text . dashed red line : the function [ % / yr ] implied by the phenotypic model from equation .,width=317 ]often in models of population dynamics , changes in the makeup of a population are related directly to the comparative fitness of the individuals within that population .for example , ghirlanda et al .define a fitness function for lateralized individuals in terms of two component fitness functions , an antagonistic function and a synergistic function .they then argue that equilibrium is obtained when the fitness of left- and right - handed individuals are equal . in this paper ,however , we employ a different approach .we argue that segments of a population will switch handedness over long time scales according at a rate determined by a probabilistic function .thus equilibrium is obtained when the overall transition rates between left- and right - handed individuals balance .these probabilistic transition rates should be related to the comparative fitness of the members of the population .we now determine the relationship between fitness functions , and probabilistic transition rates , . recall that represents the probabilistic transition rate from right to left .clearly , must be non - negative .also , it should reach a maximum ( minimum ) when the difference in fitness between left- and right - handers is at a maximum ( minimum ) .the simplest example of such a function is , where are constants guaranteeing that remains positive . 
by symmetry , and .thus .transition rates defined in this way satisfy the symmetry relation and , as such , are sigmoidal for many different types of fitness functions. we should note that in this model , the probability that a given individual switches from right to left within time is non - zero even when . in such a situation ,the probability that a given left - hander switches is .however , if right - handers are much more prevalent than left - handers , the total number of switches from right to left may still outweigh the number from left to right .in other words , it is possible that . this would cause the fraction left - handed to increase despite right - handers having a higher fitness . also , if for some , , then and .so if , the fraction left - handed will increase even when the fitness is equal for left- and right - handers .in our model , having equal fitnesses does not necessarily lead to equilibrium . despite the differences between our formulation and a fitness - based formulation , the predictions are similar .if we use the fitness functions similar to those proposed by ghirlanda et al . we can generate and ( which are sigmoid as expected ) . in figure[ fig : ldotplot_fitness ] we plot the resulting function for appropriate parameter values ( such as , , , , ) and observe a graph very similar to figure [ fig : ldotplot ] .[ ] [ ] [ ] [ ] [ ] [ ] [ r][r] [ r][r] [ r][r] [ b][b][2][0] [ b][b][2][-90] [ % /yr ] generated using generic sigmoid in the probabilistic model from equation of the main text . dashed red line : the function [ % /yr ] generated using implied by fitness functions proposed by ghirlanda et al .professional sports are artificial systems that involve varying degrees of competitive and cooperative activities .their participants undergo a selection process through tryouts that is in some ways analogous to natural selection . additionally , handedness data for professional athletes is widely available .thus , athletics provide an ideal opportunity to test whether our model s predictions are consistent with data from selective systems .there is a fundamental difference between selection in professional sports and natural selection . in natural selection ,the distribution of a trait within a population changes in response to selection pressure , modifying the gene pool .professional athletes , however , represent only a small segment of the much larger human population .changes in the distribution of a trait among professional athletes are unlikely to influence the gene pool in the human population ; furthermore , most professional sports have not existed for the time scales required to significantly modify the gene pool . as a result ,the population of professional athletes must draw new members from a pool that consists of about 90% right - handers .thus , direct comparison of our model to sports data is not possible : instead , we must account for this more complex selection process in order to make predictions that _ are _ applicable to real - world sports . to begin , we define to be the fixed point predicted by the probabilistic model described in the main text .this represents the ideal equilibrium distribution of left - handedness in a hypothetical world where all interaction occurs through the sport under consideration .we assume that skill is normally distributed throughout the population with mean and standard deviation . 
because left - handedness is relatively rare, this trait should provide a competitive advantage in sports involving direct physical confrontation .let represent the fraction left - handed within a sport .when deviates from , the sport is not at its ideal equilibrium state , and left - handers must experience a shift in skill .we assume that professional sports operate efficiently , that is , they select players exclusively according to skill level .then the distributions of skill among left- and right - handers satisfy where in this formulation , when the ideal equilibrium fraction of left - handedness is achieved , their is no advantage to possessing either handedness and the individuals are selected according their intrinsic skill . in most sports , individuals must undergo a tryout in order to demonstrate sufficient skill for participation . as a resultonly a fraction of the total population is allowed to participate in the sport at a given level of competition .we define the fraction selected , where is the number of individuals selected and is the size of the total population . will determine a minimum skill cutoff for participation according to the relation where represents the background rate of left - handedness ( ) .we can simplify this expression by normalizing the various parameters by the standard deviation : we set , , and to get in this formulation , the fraction left - handed among the individuals selected will be represented by the first term on the right - hand side of equation divided by the entire expression , after many tryouts , the fraction left - handed within a sport will stabilize with the .this equilibrium fraction represents the observed fraction left - handed among professional athletes , thus we call it .so , the equilibrium state is implicitly determined by ~.\ ] ] this model has the following properties : * if , then .* if the sport is not selective at all , in the limit ( ) , as this means that selection is independent of skill .* as the sport becomes infinitely selective , ( ) , .in other words , the ideal equilibrium fraction is achieved when the sport is infinitely selective .( to see this , assume that and .expand in a taylor series about and take the limit to find that .thus and , so . )using this model , we employed numerical techniques to compute the solutions for a variety of sports , and then compared these results to the observed fractions left - handed as seen in figure 2 .this model can also be extended to examine how the distribution of handedness varies within a single sport .if left - handedness is a desirable trait ( that is , it provides a skill advantage at equilibrium ) , then we expect that it should be very prevalent among the most skilled individuals due to the selection mechanism . to see this , we consider the case of baseball .it is clear that in baseball , since the sport involves primarily competitive interactions ( the observed ) .thus , left - handedness is a desirable trait for potential professionals as it will provide a particular skill advantage in batting . at the professional level ,most hitters face the same set of pitchers and compete indirectly with one - another for roster spots .they should therefore be expected to experience the same skill advantage due to handedness . 
in other words, is a constant ( in sports like boxing , however , where individuals compete more frequently with others near their own rank , the skill advantage would be a rank - dependent function , ) .ranking the hitters by skill , we observe that the fraction left - handed above rank should satisfy where satisfies using this result , we plotted the predicted fraction left - handed as a function of rank as seen in figure 3 .this model predicts a non - trivial shape for the distribution of handedness within baseball that is consistent with the observed distribution . this is a strong indication that this athletic selection model provides a good mathematical approximation for the tryout - based selective mechanism within professional sportsdata used in generation figure 2 came from a variety of sources .the total number of participants came from surveys conducted by the sporting goods manufacturers association in 2009 , except for men s and women s fencing , where participant numbers were extrapolated from data published by the national federation of state high school associations .the number of professional players came from listings of top - rated players ( the only ones for which handedness was readily available ) at the internet urls indicated in table [ datatable ] , with the exception of baseball , football , and hockey , where numbers are absolute totals .when handedness was not available in tabulated form , it was evaluated based on public photos of players in action . in table[ datatable ] , the predictions for the fraction left - handed were generated using an estimate of the ideal equilibrium for each sport .the appropriate value for depends primarily on the degree of cooperation for the sport .this parameter is difficult to estimate in sports that possess clear cooperative and competitive elements .however , in order to observe fixed points other than , must exceed a threshold that appears to be relatively high for the types of transition rates considered in this paper ( see figure [ fig1 ] ) .so , we assumed that for sports primarily involving direct confrontations : baseball ( batters vs. pitchers ) , boxing , fencing , table tennis , hockey ( defensemen and forwards ) . some sports ( or particular positions within sports ) , however , possess highly lateralized equipment , positioning or strategy .for these sports , it is ideal for all individuals to possess the same handedness ; so , the minority handedness will be selected against .for example , in football , blocking schemes are often designed to protect a quarterback s blind side . as a result , it is beneficial for all quarterbacks on the roster to possess the same handedness in order to minimize variations of the offensive sets .consequently , we assume that for quarterbacks in football , golfers , and left and right wings in hockey the value of , i.e. , .in the probabilistic model , there are two unknown functions and .while general properties of these functions such as monotonicity are known , the appropriate form for these functions is unknown and is difficult to determine from data .the generic sigmoid functions satisfy restrictions on for and capture the essential fixed point behaviour ( set the steepness of the curves ) .unfortunately , these equations introduce two new parameters that may alter the dynamics . to examine the sensitivity of the model to these parameters, we assumed was the fixed point of the system .we then computed the partial derivatives of with respect to each parameter . 
in the vicinity of the observed ratio of left-handers in human populations, and within the stated ranges of the remaining parameters, we found that the partial derivatives of the fixed point with respect to the two parameters introduced by the sigmoids are several orders of magnitude smaller than its derivative with respect to the remaining model parameter, for physically allowable parameter values. in other words, the location of the fixed point is far more sensitive to the latter than to the former. we are therefore justified in ignoring the effects of the individual choices of the two sigmoid parameters and focusing on the effect of the remaining parameter. we believe these results are robust for a variety of different sigmoid functions.
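for readers who want to reproduce these checks numerically, the sketch below puts the main pieces together under clearly stated assumptions: a generic logistic switching probability with an illustrative steepness k and cooperation weight c (the fitted forms and values of the main text are not reproduced here), a grid scan for the fixed points of the resulting rate of change of the left-handed fraction, a finite-difference estimate of how the lowest fixed point responds to c and to k, and a bisection solve of the implicit athletic-selection equilibrium with an assumed linear skill shift delta = a*(l_ideal - l) for left-handers, a 10% background rate, and standard-normal skill:

import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

# --- population model: generic sigmoidal switching rates (illustrative) ---
def switch_prob(l, k=8.0, c=0.8):
    # net pressure toward left-handedness: cooperation (weight c) favours the
    # majority hand, competition (weight 1-c) favours the minority hand
    s = c * (l - 0.5) + (1.0 - c) * (0.5 - l)
    return 1.0 / (1.0 + np.exp(-k * s))          # P(right -> left)

def ldot(l, k=8.0, c=0.8):
    # rate of change of the left-handed fraction; the symmetry
    # P_LR(l) = P_RL(1 - l) holds by construction
    return (1.0 - l) * switch_prob(l, k, c) - l * switch_prob(1.0 - l, k, c)

def lowest_fixed_point(k=8.0, c=0.8):
    grid = np.linspace(0.0, 1.0, 4001)
    f = ldot(grid, k, c)
    i = np.argmax(np.sign(f[:-1]) != np.sign(f[1:]))   # first sign change
    return brentq(ldot, grid[i], grid[i + 1], args=(k, c))

# finite-difference sensitivity of the lowest fixed point to c and to k
l_star = lowest_fixed_point()
dc = (lowest_fixed_point(c=0.81) - l_star) / 0.01
dk = (lowest_fixed_point(k=8.1) - l_star) / 0.1
print("fixed point", round(l_star, 4), " dl*/dc", round(dc, 3), " dl*/dk", round(dk, 3))

# --- athletic selection: implicit equilibrium among selected athletes -----
Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))       # upper tail of N(0, 1)

def left_fraction_selected(l, l_ideal, f, lbg=0.10, a=1.0):
    delta = a * (l_ideal - l)                    # assumed linear skill shift
    cut = brentq(lambda t: lbg * Q(t - delta) + (1 - lbg) * Q(t) - f, -10, 10)
    num = lbg * Q(cut - delta)
    return num / (num + (1 - lbg) * Q(cut))

def equilibrium_fraction(l_ideal, f):
    return brentq(lambda l: left_fraction_selected(l, l_ideal, f) - l,
                  1e-6, 1 - 1e-6)

for f in (0.5, 1e-2, 1e-4, 1e-6):                # more selective -> closer to l_ideal
    print("fraction selected", f, "->", round(equilibrium_fraction(0.5, f), 3))

with these toy choices the equilibrium fraction climbs toward l_ideal as the sport becomes more selective and falls back toward the background rate as f approaches one, which is the qualitative behaviour listed above; the quantitative values depend entirely on the assumed shift a and sigmoid parameters.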
an overwhelming majority of humans are right - handed . numerous explanations for individual handedness have been proposed , but this population - level handedness remains puzzling . here we use a minimal mathematical model to explain this population - level hand preference as an evolved balance between cooperative and competitive pressures in human evolutionary history . we use selection of elite athletes as a test - bed for our evolutionary model and account for the surprising distribution of handedness in many professional sports . our model predicts strong lateralization in social species with limited combative interaction , and elucidates the rarity of compelling evidence for `` pawedness '' in the animal world . * keywords : * laterality ; mathematical model ; evolution ; athletics ; handedness
it is fundamental to ask how an amplification of canonical variables modifies the phase - space distribution of amplified states under the physical constraint due to canonical uncertainty relations .the standard theory to address this question is the _ so - called _ amplification uncertainty principle .it describes the property of inevitable noise addition on canonical variables when the field amplitude of an unknown state is linearly transformed through a quantum channel .this traditional form of quantum amplification limits is directly derived from the property of canonical variables , and gives an important insight on a wide class of experiments in quantum optics , quantum information science , and condensed matter physics .unfortunately , the linearity of amplification maps assumed in this theory is hardly satisfied in the experiments , although this assumption corresponds to a covariance property that works as an essential theoretical tool to analyze a general property of amplification and related cloning maps .it is more realistic to consider the performance of amplifiers in a limited input space .in fact , one can find a practical limitation by focusing on a set of input states or an ensemble of input states .there has been a growing interest in implementing probabilistic amplifiers in order to overcome the standard limitation of the traditional amplification limit . in these approaches, one can obtain essentially noiselessly amplified coherent states with a certain probability by conditionally choosing the output of the process .recent theoretical studies have determined amplification limits for such cases of probabilistic quantum channels or general quantum operations .certainly , these results can reach beyond the coverage of the traditional theory .however , it seems difficult to find a precise interrelation between these theories .for example , it is not clear whether the traditional form can be reproduced as a special case of the general theory . at this stage, we may no longer expect an essential role of canonical uncertainty relations in determining a general form of amplification limits .another topical aspect on the probabilistic amplification is its connection to entanglement distillation . on the one hand ,the no - go theorem of gaussian entanglement distillation tells us that gaussian operations are unusable for distillation of gaussian entanglement . on the other hand, it has been shown that a specific design of non - deterministic linear amplifier ( nla ) can enhance entanglement , and experimental demonstrations of entanglement distillation have been reported in .thereby , such an enhancement of entanglement could signify a clear advantage of no - gaussian operations over the gaussian operations .interestingly , a substantial difference between an optimal amplification fidelity for deterministic quantum gates and that for probabilistic physical processes has been shown in ref . . 
in there ,a standard gaussian amplifier is identified as an optimal deterministic process for maximizing the fidelity , while the nla turns out to achieve the maximal fidelity for probabilistic gates in an asymptotical manner .however , these amplification fidelities have not been associated with the context of entanglement distillation .hence , it is interesting if one can find a legitimate amplification limit for gaussian operations such that the physical process beyond the limit demonstrates the advantage of non - gaussian operations .more fundamentally , we may ask whether an amplification limit for gaussian operations could be derived as a consequence of the no - go theorem . the fidelity - based amplification limit defined on an input - state ensemble called the _gaussian distributed coherent states_. this ensemble has been utilized to demonstrate a non - classical performance of continuous - variable ( cv ) quantum teleportation and quantum memories .the main idea underlying this ensemble is to consider an effectively uniform set of input states in a cv space by using a gaussian prior .we can sample coherent states with modest input power around the origin of the phase - space with a relatively flat prior while a rapid decay of the prior enables us to suppress the contribution of impractically high - energy input states .given this ensemble , an experimental success criterion for cv gates is to surpass the classical limit fidelity due to _ entanglement breaking _ ( eb ) maps .the classical fidelity was determined for unit - gain channels in ref . and for lossy / amplification channels in ref . ( see also ref .further , the framework was generalized to include whole completely - positive ( cp ) maps , i.e. , general quantum operations .recently , a different form of such classical limits has been derived using an uncertainty product of canonical variables .it gives an optimal trade - off relation between canonical noises in order to outperform eb maps for general amplification / attenuation tasks .this suggests that , instead of the fidelity , one can use an uncertainty product of canonical variables to evaluate the performance of amplifiers .however , for a general amplification process , it remains open ( i ) how much excess noise is unavoidable on canonical variables and ( ii ) whether there exists a simple trade - off relation between noises of the canonical pair .in this paper we resolve above questions by presenting an uncertainty - product form of amplification limits for general quantum operations based on the input ensemble of gaussian distributed coherent states .it is directly derived by using canonical uncertainty relations and retrieves basic property of the traditional amplification limit .we investigate attainability of our amplification limit and identify a parameter regime where gaussian channels can not achieve our bound but the nla asymptotically achieves our bound .we also point out the role of probabilistic amplifiers for entanglement distillation . using the no - go theorem for gaussian entanglement distillationwe find a condition that a probabilistic amplifier can be regarded as a local filtering operation to demonstrate entanglement distillation .this condition establishes a clear benchmark to verify an advantage of non - gaussian operations beyond gaussian operations with a feasible input set of coherent states and standard homodyne measurements .the rest of this paper is organized as follows . 
in section [ ourlimit ] ,we present our amplification limit which is regarded as an extension of the traditional amplification limit for two different directions : ( i ) it determines the limitation with an input ensemble of a bounded power ; ( ii ) it is applicable to stochastic quantum processes as well as quantum channels . in section [ attainabilityamplimit ] , we consider attainability of our amplification limit for gaussian and non - gaussian amplifiers . in section [ distilbound ] , we address the connection between our amplification limit and entanglement distillation .we conclude this paper with remarks in section [ concremark ] .in this section we present a general amplification limit for gaussian distributed coherent states which is applicable to either probabilistic or deterministic quantum process .we review the fidelity - based results of amplification limits in subsection [ filimit ] partly as an introduction of basic notations .we present our main theorem in subsection [ delimit ] .we consider transmission of coherent states drawn from a gaussian prior distribution with an inverse width we call the state ensemble the gaussian distributed coherent states . a main motivation to use the gaussian prior of eq .is to execute a uniform sampling of the input amplitude around the origin of the phase - space with keeping out the contribution of higher power input states for by properly choosing the inverse width .a uniform average over the phase - space or an ensemble of completely unknown coherent states can be formally described by taking the limit .let us refer to the following state transformation as the phase - insensitive amplification / attenuation task of a gain , we say the task is an amplification ( attenuation ) if ( ) .we may specifically call the task of the _ unit gain _ task .we define an average fidelity of the phase - insensitive task for a physical map as note that we use the following notation for the density operator of a coherent state throughout this paper : the fidelity - based amplification limit is given as follows : for any quantum operation , i.e. , a cp trace - non - increasing map , it holds that where is the probability that gives an output state for the ensemble .it is defined as as we will see in the next subsection , this probability represents a normalization factor when acts on a subsystem of a two - mode squeezed state .note that if is a quantum channel , i.e. , a cp trace - preserving map . in analogous to eq ., we may define a symmetric phase - conjugation task associated with the state transformation : thereby , we may define an average fidelity of this task as the fidelity - based phase - conjugation limit is given by note that one can generalize the fidelity - based quantum limits in eqs .( [ fresult1 ] ) and for phase - sensitive cases by introducing modified tasks as where is a squeezing unitary operation and represents the degree of squeezing .the quantum limited fidelity values of eqs . and are invariant under the addition of unitary operators since the optimal map can absorb the effect of additional unitary operators . 
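the ensemble average over the gaussian distributed coherent states can be made concrete with a short monte carlo sketch. as a stand-in for an actual amplifier (the choice of channel and figure of merit is purely illustrative), the identity map is scored against the target amplitude sqrt(g)*alpha through the coherent-state overlap exp(-|sqrt(g)*alpha - alpha|^2); for this toy score the average over the prior has the closed form lambda/(lambda + (sqrt(g) - 1)^2), which the sample mean reproduces, and which already shows how small lambda probes ever larger input amplitudes:

import numpy as np

rng = np.random.default_rng(0)

def average_score(g, lam, n=200_000):
    """monte carlo average over the gaussian-distributed coherent-state
    ensemble p_lam(alpha); the per-input score used here (identity channel
    vs. the target sqrt(g)*alpha) is only a toy stand-in."""
    sigma = np.sqrt(1.0 / (2.0 * lam))           # per-quadrature spread of alpha
    alpha = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
    score = np.exp(-np.abs(np.sqrt(g) * alpha - alpha) ** 2)
    return score.mean()

for g, lam in [(2.0, 1.0), (2.0, 0.1), (4.0, 1.0)]:
    exact = lam / (lam + (np.sqrt(g) - 1.0) ** 2)
    print(g, lam, round(average_score(g, lam), 4), round(exact, 4))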
we may consider a general phase - sensitive amplification / attenuation task in terms of phase - space quadratures so that average quadratures of the input coherent state of eq .are transformed as where the gain pair of the amplification / attenuation task is a pair of non - negative numbers , and the mean quadratures for the coherent state are defined as throughout this paper we assume the canonical commutation relation for canonical quadrature variables = i ] , the expression of eq .turns to the variance of the output quadrature & = \tr [ \hat z ^2 \mathcal e ( \rho_\alpha ) ] - ( \tr [ \hat z \mathcal e ( \rho_\alpha ) ] ) ^2 \nonumber \\ & = { \langle \delta ^2 \hat z \rangle}_{\mathcal e ( \rho _ \alpha ) } .\end{aligned}\ ] ] however , it is impractical to consider that the linearity of the transformation =\sqrt \eta_z z _\alpha \label{liniconcon } \end{aligned}\ ] ] holds in experiments for every input amplitude .we thus proceed our formulation _ without _ using this condition . instead of the point - wise constraint on , we consider an average of the quadrature deviations with the gaussian prior distribution of eq . .we seek for the physical process that minimizes the _ mean square deviations _ ( msd ) of canonical quadratures : where the lower sign of the second expression is for the case of the phase - conjugation task in eq . .the msds of eq . can be observed experimentally by measuring the first and the second moments of the quadratures for the output of the physical process . due to canonical uncertainty relations , and not be arbitrary small , simultaneously .we can find a rigorous trade - off relation between and from the following theorem . * theorem 1. * for any given , , and , any quantum operation ( or stochastic quantum channel ) satisfies \ge \frac{1}{4}\left| \frac { \sqrt{\eta_x \eta_p } } { 1 + \lambda } \mp 1 \right|^2 \label{aup2 } \end{aligned}\ ] ] where and are defined in eqs . and , respectively. moreover , the lower signs of eqs . and correspond to the case of the phase - conjugation task in eq . .* proof.*let be a density operator of a two - mode system described by = [ \hat x_b , \hat p_b ] = i ] and use the property of the coherent state and . similarly , starting from we have \nonumber \\ & = & \tr_a \int\frac{d^2 \alpha } { \pi}(\hat p_a - g_p p_\alpha ) ^2 { \left \langle \alpha ^ * \right |}j { \left | \alpha ^ * \right \rangle}_b -\frac{g_p^2}{2}. \label{forp}\end{aligned}\ ] ] next , suppose that is prepared by an action of a quantum operation as where is a two - mode squeezed state with and ] , fulfills the equality of eq .( [ aup2 ] ) for ] . the upper sign and lower signs in eq .respectively indicate the cases of the normal amplification / attenuation process and the phase - conjugation process .we may focus on the property of added noise terms : it tells us an amount of additional noise imposed by the channel because the second terms in eqs .( [ adnoise ] ) represent the variance of an input state .the aup gives a physical limit for cp trace - preserving maps satisfying eq .( [ lin ] ) : note that in ref . the aup is defined through the added noise number . in order to link eq .( [ aur0 ] ) to our amplification limit in eq .( [ aup2 ] ) , we consider the input of coherent states with the shorthand notation of eq . 
.it implies using eqs .and we can write , \nonumber \\ { \langle \delta^2 \hat y_p \rangle}&= & { \langle \hat y_p ^2 \rangle } - { \langle \hat y_p \rangle}^2 = \tr \left [ ( \hat p \mp \sqrt{g_p } p_\alpha)^2 \mathcal e ( \rho_\alpha ) \right ] .\label{lin3 } \ ] ] due to the linearity assumption , we can write any average of the variance over the coherent - state amplitude as the variances for a single coherent state .hence , it holds that concatenating eqs . , , , and we can write d^2 \alpha}_{\bar v_x ( g_x , \lambda ) } - g_x /2 , \nonumber \\ \mathcal n_p & = \underbrace{\int p_\lambda ( \alpha ) \tr \left [ ( \hat p \mp \sqrt{g_p } p_\alpha)^2 \mathcal e ( \rho_\alpha ) \right ] d^2 \alpha}_{\bar v_p ( g_p , \lambda ) } -g_p/2 , \label{adnoise2}\end{aligned}\ ] ] where the underbracing terms , and , come from eq . .substituting eqs . into eq .we can re - express the aup as \ge \frac{1}{4 } \left| \sqrt{g_x g_p } \mp 1 \right|^2 . \end{aligned}\ ] ] it would be instructive to illustrate the gain - dependence for symmetric cases as in fig .[ fig : ampfig1.eps](b ) . for the normal amplification process with and have similarly , for the phase - conjugation process , we have we thus apparently observe that the structures of eqs . and are the same as those of eqs . and , respectively . on the other hand , substituting in eq . and assuming is a cp trace - preserving map we can write our amplification limit as \ge \frac{1}{4}\left| { \sqrt{g_x g_p } } \mp 1 \right|^2 \label{aup2d}. \end{aligned}\ ] ] comparing this relation with eq .we can see that our amplification limit coincides with the aup in the limit of .it is clear from fig .[ fig : ampfig1.eps](b ) that the inequalities of eq .[ eq . ] can be violated for any finite width of the distribution whenever [ .to be specific , the linearity condition means that eq. holds for _ any _ input state .physically , the output amplitude of an amplifier saturates at a certain power of the input field , and the amplifier would be simply broken when the power of the input field is too strong . therefore , any realistic amplifier can not satisfy the linearity condition . practically , we may use the term `` linear amplifier '' when the linearity relation approximately holds for _ some _ input states .
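the two displayed bounds are easy to tabulate. the sketch below evaluates the right-hand side of theorem 1 for the symmetric case eta_x = eta_p and several inverse widths lambda, shows that it approaches the lambda-independent aup expression as lambda goes to zero, and checks that the textbook quantum-limited phase-insensitive amplifier, whose added noise per quadrature is (g - 1)/2 in units where the vacuum variance is 1/2 (a standard quantum-optics result quoted here, not something derived in this paper), saturates the symmetric aup:

import numpy as np

def theorem1_rhs(eta_x, eta_p, lam, conjugation=False):
    """(1/4) | sqrt(eta_x*eta_p)/(1+lam) -/+ 1 |^2; the plus (lower) sign
    corresponds to the phase-conjugation task."""
    s = np.sqrt(eta_x * eta_p) / (1.0 + lam)
    return 0.25 * (s + 1.0) ** 2 if conjugation else 0.25 * (s - 1.0) ** 2

def aup_rhs(g_x, g_p, conjugation=False):
    """(1/4) | sqrt(g_x*g_p) -/+ 1 |^2, the traditional (lambda -> 0) form."""
    s = np.sqrt(g_x * g_p)
    return 0.25 * (s + 1.0) ** 2 if conjugation else 0.25 * (s - 1.0) ** 2

eta = np.array([1.0, 2.0, 4.0, 10.0])
for lam in (1.0, 0.1, 0.0):
    print("lambda =", lam, np.round(theorem1_rhs(eta, eta, lam), 4))
print("aup      ", np.round(aup_rhs(eta, eta), 4))

# quantum-limited amplifier: added noise (g-1)/2 per quadrature, so the
# product N_x * N_p meets the symmetric aup with equality for every gain g
for g in (1.0, 2.0, 5.0):
    n_add = (g - 1.0) / 2.0
    print(g, np.isclose(n_add * n_add, aup_rhs(g, g)))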
traditionally , quantum amplification limit refers to the property of inevitable noise addition on canonical variables when the field amplitude of an unknown state is linearly transformed through a quantum channel . recent theoretical studies have determined amplification limits for cases of probabilistic quantum channels or general quantum operations by specifying a set of input states or a state ensemble . however , it remains open how much excess noise on canonical variables is unavoidable and whether there exists a fundamental trade - off relation between the canonical pair in a general amplification process . in this paper we present an uncertainty - product form of amplification limits for general quantum operations by assuming an input ensemble of gaussian distributed coherent states . it can be derived as a straightforward consequence of canonical uncertainty relations and retrieves basic properties of the traditional amplification limit . in addition , our amplification limit turns out to give a physical limitation on probabilistic reduction of an einstein - podolsky - rosen uncertainty . in this regard , we find a condition that probabilistic amplifiers can be regarded as local filtering operations to distill entanglement . this condition establishes a clear benchmark to verify an advantage of non - gaussian operations beyond gaussian operations with a feasible input set of coherent states and standard homodyne measurements .
in coding theory an interesting but at the same time a difficult problem is to determine the weight distribution of a given code .the weight distribution is important because it plays a significant role in determining the capabilities of error detection and correction of a code . for cyclic codesthis problem is even more important because this kind of codes possess a rich algebraic structure . on the other hand, it is known that cyclic codes with few weights have a great practical importance in coding theory and cryptography , and this is so because they are useful in the design of frequency hopping sequences and in the development of secret sharing schemes .a characterization of a class of optimal three - weight cyclic codes of dimension 3 , over any finite field , was recently presented in , and almost immediately after this , a generalization for the sufficient numerical conditions of such characterization was given in . by means of this generalizationit was found a class of optimal three - weight cyclic codes of dimension greater than or equal to 3 that includes the class of cyclic codes characterized in .the main purpose of this work is to show that the numerical conditions that were found in are also necessary . as we will see later, an interesting feature of the present work is that , in clear contrast with and , we use some new and non - conventional methods in order to achieve our goals .more specifically , we will use the remainder operator ( see next section for a formal definition of it ) as one of the key tools of this work .in fact , through this remainder operator , we not only were able to extend the characterization in , but also present a less complex proof for such extended characterization , which avoids the use of some of the sophisticated but at the same time complex theorems ( for example the davenport - hasse theorem ) , that are the key arguments of the proofs given in and .as a consequence , we were also able to present a simplified and self - contained proof of our extended characterization .as a further result , we also find the parameters for the dual code of any cyclic code in our extended characterization class . in fact , after the analysis of some examples , it seems that such dual codes always have the same parameters as the best known linear codes . in order to provide a detailed explanation of what are the main results of this work ,let and be positive integers such that is a power of a prime number , and fix .also let be a fixed primitive element of .for any integer , denote by ] cyclic code over , with the weight distribution given in table i. in addition , if , with , is the number of words of weight in the dual code of , then , and + + thus , if , then this dual is a single - error - correcting code with parameters ] will denote the minimal polynomial of .furthermore , we will denote by " the trace mapping from to .lastly , by using and we will denote , respectively , the canonical additive characters of and .a common integer operator in programming languages is the remainder , or modulus operator .this operator is commonly denoted as " , and it is interesting to note that it is rarely used in mathematics , and this is so because the remainder of a division of two integers is commonly handled by means of the usual congruence relation among integer numbers . 
however , as we will see , this remainder operator will be especially important for this work , and therefore a formal definition of it is needed .let and be two integers such that .then , ( we read it as the _ remainder of _ _ ) , will represent the unique integer such that , and . as examples of the previous definition we have and .we , now set for this section and the rest of this work , the following : * main assumption . * from now on , we are going to suppose that ( unless otherwise stated , is just any integer ) . therefore , throughout all this work , we are going to reserve the greek letters and to represent any two integers such that , , and . in order to see that such pair of integers exists ,assume that and are integers such that .then , we just need to take and .an important type of irreducible cyclic codes are the so - called one - weight irreducible cyclic codes .the following is a characterization for them ( see , for example , theorem 2 in ) : [ teotres ] let be any integer , and let , , and be positive integers so that , , and . then , is the parity - check polynomial of an ] linear code , over , exists .given the values of , and , a central problem of coding theory is to determine the actual value of .a well - known lower bound ( see and ) for is [ teocuatro ] ( griesmer bound ) with the previous notation , with the aid of the previous lower bound , we now present the following : [ lemauno ] suppose that is a ] two - weight irreducible cyclic code , over , whose parity check polynomial is .suppose that , for some integer , and some prime .thus , if and are the nonzero weights of , then .if , then , and since , for , .suppose .thus , for a positive integer , let denote the sum of the -digits of . then , since is a ] cyclic code due to part ( a ) .let be a fixed subset of so that .now , for each and , we define as the vector of length over , which is given by : thanks to delsarte s theorem ( see , for example , ) it is well known that thus the hamming weight of any codeword , will be equal to , where .that is , we have and , by using the notation of lemma [ lemasiete ] , we have but and ; therefore , after applying corollary [ coruno ] , we get consequently , the assertion about the weight distribution of comes now from the fact that the hamming weight of any codeword in is equal to , and also due to the fact that and .lastly , is an optimal cyclic code , due to lemma [ lemauno ] , and the assertion about the weights of the dual code of can now be proved by means of table i and the first four identities of pless ( see , for example , pp .259 - 260 in ) .we continue by presenting now a formal proof of theorem [ teodos ] .suppose that is a cyclic code of length , over , whose weight distribution is given in table i. through the sum of the frequencies of such table , it is easy to see that must be a cyclic code of dimension . consequently , the degree of the parity - check polynomial , of , must be equal to .now , note that for any integer we have that , therefore , thanks to lemma [ lemados ] , there must exist an integer such that .let be an irreducible divisor of , thus , if , then ( owing to lemma [ lemados ] ) , and .also , let be the irreducible cyclic code of length , over , whose parity - check polynomial is . since , the cyclic code has at most two nonzero weights , and , in accordance with table i , these nonzero weights may only be and . 
thus , owing to lemma [ lematres ] , and since , can not be a two - weight irreducible cyclic code .suppose then that is a one - weight irreducible cyclic code of length .now , by further supposing that , we obtain , thanks to theorem [ teotres ] , that the nonzero weight of is .therefore , this nonzero weight can not be equal to either or . in consequence, is the parity - check polynomial of a ] cyclic code over which , by the way , has the same parameters as the best known linear code , according to the tables of the best known linear codes maintained by markus grassl at http://www.codetables.de/. [ ejeuno ] with our notation , let and . then , owing to corollary [ cordos ] , the total number of different cyclic codes of length , over , and dimension , with weight enumerator polynomial , is .in fact , these cyclic codes are : , , , , , , , , , , , , , , , and .as we already mentioned , in coding theory the weight distribution problem of a cyclic code is an important issue .however , most of the conventional methods employed for the weight distribution computations require the use of , for example , gauss and/or jacobi sums along with very sophisticated but at the same time complex theorems ( for example the davenport - hasse theorem ) . in this work we used some new and non - conventional methods in order to extend a characterization for the weight distribution of a class of three - weight cyclic codes of dimension 3 , to a characterization for the weight distribution of a class of three - weight cyclic codes of dimension greater than or equal to 3 , that includes the first characterized class . more specifically , we used the remainder operator , which is quite common in programming languages , in order to show that the numerical conditions given in theorem 11 of are also necessary , and as a consequence of this , we were able to upgrade such theorem to an extended characterization ( theorems [ teouno ] and [ teodos ] ) that includes the characterization given in . furthermore , we would like to emphasize that by using the remainder operator , we were also able to present a simplified and self - contained proof of our extended characterization. finally , we also found the parameters for the dual code of any cyclic code in our extended characterization class , and after the analysis of some examples , it seems that such dual codes always have the same parameters as the best known linear codes .
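as a closing illustration, the objects used above can all be checked with a few lines of code: the remainder operator, the griesmer bound (quoted below in its standard form, the sum over i from 0 to k-1 of the ceiling of d/q^i, since the displayed formula did not survive extraction), and a brute-force weight distribution of a small cyclic code. the example code is the binary [7,4] hamming code with generator 1 + x + x^3, an arbitrary small stand-in rather than one of the codes characterized above:

from itertools import product
from collections import Counter

def remainder(a, b):
    """a mod b: the unique integer r with a = q*b + r and 0 <= r < |b|."""
    return a % abs(b)

def griesmer_bound(k, d, q):
    """standard griesmer lower bound on the length of an [n, k, d] code over GF(q)."""
    return sum((d + q**i - 1) // q**i for i in range(k))

def cyclic_code_weights(gen, n, q=2):
    """weight distribution of the q-ary cyclic code of length n generated by
    gen (coefficient list, lowest degree first); brute force over all messages."""
    k = n - (len(gen) - 1)
    dist = Counter()
    for msg in product(range(q), repeat=k):
        cw = [0] * n
        for i, m in enumerate(msg):              # cw(x) = m(x)*gen(x) mod (x^n - 1)
            for j, g in enumerate(gen):
                cw[(i + j) % n] = (cw[(i + j) % n] + m * g) % q
        dist[sum(c != 0 for c in cw)] += 1
    return dict(sorted(dist.items()))

print(remainder(-17, 5), remainder(17, -5))      # -> 3 2
print(griesmer_bound(k=3, d=9, q=4))             # -> 9 + 3 + 1 = 13
print(cyclic_code_weights([1, 1, 0, 1], 7))      # -> {0: 1, 3: 7, 4: 7, 7: 1}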
a characterization of a class of optimal three - weight cyclic codes of dimension 3 over any finite field was recently presented in . shortly after this , a generalization for the sufficient numerical conditions of such characterization was given in . the main purpose of this work is to show that the numerical conditions found in , are also necessary . as we will see later , an interesting feature of the present work , in clear contrast with these two preceding works , is that we use some new and non - conventional methods in order to achieve our goals . in fact , through these non - conventional methods , we not only were able to extend the characterization in , but also present a less complex proof of such extended characterization , which avoids the use of some of the sophisticated but at the same time complex theorems , that are the key arguments of the proofs given in and . furthermore , we also find the parameters for the dual code of any cyclic code in our extended characterization class . in fact , after the analysis of some examples , it seems that such dual codes always have the same parameters as the best known linear codes . _ keywords : _ cyclic codes , weight distribution , exponential sums , griesmer lower bound .
in the last few years , some very effective frameworks for image restoration have been proposed that exploit non - locality ( long - distance correlations ) in images , and/or use patches instead of pixels to robustly compare photometric similarities .the archetype algorithm in this regard is the non - local means ( nlm ) .the success of nlm triggered a huge amount of research , leading to state - of - the - art algorithms that exploit non - locality and/or the patch model in specialized ways ; e.g. , see , to name a few .we refer the interested reader to for detailed reviews. of these , the best performing method till date is perhaps the hybrid bm3d algorithm , which effectively combines the nlm framework with other classical algorithms . to setup notations , we recall the working of nlm .let be some linear indexing of the input noisy image .the standard setting is that is the corrupted version of some clean image , where is iid .the goal is to estimate ( approximate ) from the noisy measurement , possibly given a good estimate of the noise floor .in nlm , the restored image is computed using the simple formula where is some weight ( affinity ) assigned to pixels and . here is the neighborhood of pixel over which the averaging is performed . to exploit non - local correlations, is ideally set to the whole image domain . in practice , however , one restricts to a geometric neighborhood , e.g. , to a sufficiently large window of size around .the other idea in nlm is to set the weights using image patches centered around each pixel . in particular , for a given pixel , let denote the restriction of to a square window around .letting be the length of this window , this associates every pixel with a point in ( the patch space ) .the weights in standard nlm are set to be where is the euclidean distance between and as points in , and is a smoothing parameter .along with non - locality , it is the use of patches that makes nlm more robust in comparison to pixel - based neighborhood filters .recently , it was demonstrated in that the denoising performance of nlm can be improved ( often substantially for images with sharp edges ) by replacing the regression in nlm with the more robust regression .more precisely , given weights , note that is equivalent to performing the following regression ( on the patch space ) : and then setting to be the center pixel in .indeed , this reduces to once we write the regression in terms of the center pixel .the idea in was to use regression instead , namely , to compute and then set to be the center pixel in .note that is a convex optimization , and the minimizer ( the euclidean median ) is unique when .the resulting estimator was called the non - local euclidean medians ( nlem ) .a numerical scheme was proposed in for computing the euclidean median using a sequence of weighted least - squares .it was demonstrated that nlem performed consistently better than nlm on a large class of synthetic and natural images , as soon as the noise was above a certain threshold .more specifically , it was shown that the bulk of the improvement in nlem came from pixels situated close to edges .an inlier - outlier model of the patch space around an edge was proposed , and the improvement was attributed to the robustness of in the presence of outliers . in this paper , we show how a simple extension of the above idea can dramatically improve the denoising performance of nlm , and even that of nlem .this is the content of section ii . 
in particular ,a general optimization and algorithmic framework is provided that includes nlm and nlem as special cases .some numerical results on synthetic and natural images are provided in section iii to justify our claims .possible extensions of the present work are discussed in section iv .it is well - known that minimization is more robust to outliers than minimization .a simple argument is that the _ unsquared _ residuals in are better guarded against the aberrant data points compared to the squared residuals .the former tends to better suppress the large residuals that may result from outliers .this basic principle of robust statistics can be traced back to the works of von neumann , tukey , and huber , and lies at the heart of several recent work on the design of robust estimators ; e.g. , see , and the references therein .a natural question is what happens if we replace the regression in by regression ?in general , one could consider the following class of problems : the intuitive idea here is that , by taking smaller values of , we can better suppress the residuals induced by the outliers .this should make the regression even more robust to outliers , compared to what we get with .we note that a flip side of setting is that will no longer be convex ( this is essentially because is convex if and only if ) , and it is in general difficult to find the global minimizer of a non - convex functional .however , we do have a good chance of finding the global optimum if we can initialize the solver close to the global optimum .the purpose of this note is to numerically demonstrate that , for all sufficiently large , the obtained by solving ( and letting to be the center pixel in ) results in a more robust approximation of as , than what is obtained using nlm .henceforth , we will refer to as non - local patch regression ( nlpr ) , where is generally allowed to take values in the range ] .( e ) set to be the center pixel in .we noticed in that a simple heuristic often provides a remarkable improvement in the performance of nlm . in, one considers all patches drawn from the geometric neighborhood of pixel .however , notice that when a patch is close to an edge , then roughly half of its neighboring patches are on one side ( the correct side ) of the edge .following this observation , we consider only the top of the the neighboring patches that have the largest weights .that is , the selected patches correspond to the ] , this is defined to be , where .we first consider the test image of _ checker _ used in .this serves as a good model for simultaneously testing the denoising quality in smooth regions and in the vicinity of edges .we corrupt _ checker _ as per the noise model in .we then compute the denoised image using algorithm [ algo1 ] , with the exception that we skip steps ( b ) and ( c ) , that is , we use the full neighborhood .we initialize the iterations of the irls solver using .for all the experiments in this paper , we fix the parameters to be and .these are the settings originally proposed in .the results obtained using these settings are not necessarily optimal , and other settings could have been used as well . 
the point is to fix all the parameters in algorithm [ algo1 ] , except .this means that the same are used for different .we now run the above denoising experiment for , and for .the results are shown in figure [ psnr_sigma_p ] .we notice that , beyond a certain noise level , nlpr performs better when is close to zero .in fact , the psnr increases gradually from to , for a fixed . at lower noise levels , the situation reverses completely , and nlpr tends to perform better around .a possible explanation is that the true neighbors in patch space are well identified at low noise levels , and since the noise is gaussian , regression gives statistically optimal results .an analysis of the above results shows us that , as , the bulk of the improvement comes from pixels situated in the vicinity of edges .a similar observation was also made in for nlem . to understand this better , we recall the ideal - edge model used in .this is shown in figure [ em1 ] .we add noise of strength to the edge , and denoise it using nlpr .we examine the regression at a reference point situated just right to the edge ( cf .figure [ em2 ] ) .the patch space at this point is specified using and .the distribution of patches is shown in figure [ in - out ] .note that the patches are clustered around the centers and .for the reference point , the points around are the outliers , while the ones around are the inliers .we now perform regression on this distribution for and .the results obtained ( algorithm [ algo1 ] , steps ( b ) and ( c ) skipped ) from a single noise realization are shown in figure [ in - out ] .the exact values of the estimate in this case are ( ) , ( ) , and ( ) .the average estimate over noise realizations are ( ) , ( ) , and ( ) .we note that the working of the irls algorithm provides some insight into the robustness of regression .note that when ( nlm ) , the reconstruction in is linear ; the contribution of each noisy patch is controlled by the corresponding weight . on the other hand, the reconstruction is non - linear when .the contribution of each is controlled not only by the respective weights , but also by the multipliers .in particular , the limiting value of the multipliers dictate the contribution of each in the final reconstruction .figure gives the distribution of the sorted multipliers ( at convergence ) for the experiment described above . in this case, the large multipliers correspond to the inliers , and the small multipliers correspond to the outliers .notice that when , the tail part of the multipliers ( outliers ) has much smaller values ( close to zero ) compared to the leading part ( inliers ) .in a sense , the iterative algorithm gradually ` learns ' the outliers from the patch distribution as the iteration progresses , which are finally taken out of estimation ..comparison of nlm and nlpr ( ) at noise levels ( results averaged over noise realizations ) [ cols="<,^ , > , > , > , > , > , > , > , > , > , > " , ] we compare the psnrs obtained using nlpr ( ) with that of nlm for some standard natural images in table [ table1 ] . we notice that , for each of the images , nlpr consistently outperforms nlm at large noise levels .the gain in psnr is often as large as db . 
the results obtained for _ barbara _ using nlm and nlpr are compared in figure [ barbararesults ] . note that, as expected, robust regression provides a much better restoration of the sharp edges in the image than nlm. what is probably surprising is that the restoration is superior even in the textured regions. note, however, that nlm tends to perform better in the smooth regions. for example, we see more noise grains in the smooth regions in figure [ nlpr ] compared to those in figure [ nlm ]. this suggests that an ` adaptive ' optimization framework, which combines the two regressions ( the standard one in smooth regions and the robust one in the vicinity of edges ), might perform better than a fixed regression. some other possible extensions of the present work are as follows : ( i ) local convergence analysis of the present irls algorithm, and ways of improving it ; ( ii ) the possibility of using more efficient numerical algorithms for solving the patch regression ; ( iii ) finding better ways of estimating the denoised pixel from the estimated patch ( the projection method used here is probably the simplest ) ; ( iv ) the use of ` better ' weights than the ones used in standard nlm ; and ( v ) the formulation of a ` joint ' optimization framework in which the optimization is performed over both unknowns simultaneously.

m. aharon, m. elad, and a. bruckstein, `` k-svd : an algorithm for designing overcomplete dictionaries for sparse representation, '' ieee transactions on signal processing, pp. 4311-4322, 2006.

i. daubechies, r. devore, m. fornasier, and c. s. gunturk, `` iteratively reweighted least squares minimization for sparse recovery, '' communications on pure and applied mathematics, vol. 63, pp. 1-38, 2009.
it was recently demonstrated in that the denoising performance of non - local means ( nlm ) can be improved at large noise levels by replacing the mean by the robust euclidean median . numerical experiments on synthetic and natural images showed that the latter consistently performed better than nlm beyond a certain noise level , and significantly so for images with sharp edges . the euclidean mean and median can be put into a common regression ( on the patch space ) framework , in which the norm of the residuals is considered in the former , while the norm is considered in the latter . the natural question then is what happens if we consider regression ? we investigate this possibility in this paper . image denoising , non - local means , non - local euclidean medians , edges , inlier - outlier model , robustness , sparsity , non - convex optimization , iteratively reweighted least squares .
in a wide range of astrophysical problems heat is transported by both radiation and convection .examples include stellar envelopes and stellar cores , where convection may be coupled with rotation , pulsation , and diffusion . to be useful, a convection model must be both manageable and reliable .local convection models satisfy both criteria but they are restricted to regions of strong to moderately strong convection , while in most cases one needs to model moderately strong to weak convection . numerical simulations ( , and ) have been applied to 2d and 3d stellar atmospheres .their successful reproduction of spectral line profiles and solar granulation statistics has proved their reliability .however , they are restricted to layers near the stellar surface because of the huge thermal time scales characterizing stellar interiors .in addition , they currently are too expensive for applications such as non - linear stellar pulsation calculations for rr lyrae stars or for use in everyday spectrum synthesis of large wavelength ranges or large parameter sets for a m stars .hence , they do not satisfy the criterion of manageability for the whole range of problems where convection occurs .convection models based on the non - local , hydrodynamic moment equations provide a possible alternative .they describe the moments of an ensemble average of the basic fields : velocity , temperature , and density ( or pressure ) .dynamic equations for the moments are derived directly from the fully compressible navier - stokes equations ( nse , ) .the `` ensemble '' used in this averaging process consists of realizations of solutions of the nse , or of `` convective elements '' of different velocity and temperature .however , the moment equations entail higher order moments and thus require closure assumptions . to obtain a closed set of equations most convection modelshave invoked a mixing length .the models derived in canuto ( 1992 , 1993 , 1997 ) and in avoid a mixing length by providing a fully non - local set of dynamic equations for the second order moments .recently , has presented the first comparison of a variant of these convection models with numerical simulations of compressible convection for a stellar - like scenario .he used the downgradient approximation ( dga ) for the third order moments . to avoid its shortcomings , here we use a more complete model .it is based on a model introduced in .the latter was successfully applied to the convective boundary layer of the terrestrial atmosphere .we describe the physical scenario for our comparison and present results for two sample problems .we also consider the potential of such models for application to envelope convection in a and f stars .in the fully compressible nse were solved for a 3d plane parallel geometry with a constant gravity pointing to the bottom of a simulation box .periodic boundary conditions were assumed horizontally .a constant temperature was prescribed at the top and a constant input flux at the bottom of the box ( , denotes the vertical direction , i.e. top to bottom ) .top and bottom were taken to be impenetrable and stress free .a perfect gas law was assumed and radiation was treated in the diffusion approximation .stable and unstable layers were defined through , which initially was a piecewise linear function in units of the adiabatic gradient : , where . 
to define the stability properties of the layers a rayleigh number for one zone where and is specified together with a prandtl number and an initial temperature contrast .this yields the radiative conductivity .both and are kept fixed and place regions of stable and unstable stratification at different vertical locations in the simulation box .a similar approach was used by and others . the comparison to simulations with a prescribed viscosityavoids the need to use a subgrid scale model . according to our numerical experiments with the moment equations, molecular viscosity can decrease the efficiency of convection as measured by by up to 15% and smooths out the numerical solution in stably stratified regions .using the reynolds stress approach , has derived a convection model which consists of four differential equations for the basic second order moments ( turbulent kinetic energy ) , ( mean square of temperature fluctuations , i.e. thermal potential energy ) , ( is the convective flux ) , and the vertical turbulent kinetic energy .the model was rederived by ( * ? ? ?* cd98 ) using a new turbulence model based on renormalization group techniques . in cd98 notation ,the convection model reads ( , ) : \ ] ] here , is the volume compressibility ( for a perfect gas ) , is the superadiabatic gradient , , and . the sare time scales .we use ( 25a ) , ( 27b ) , and ( 28b ) of cd98 to relate the latter to the dissipation time scale .compressibility effects are represented by given by equations ( 42)(48 ) of .we neglected a few terms of that were too small to contribute to the solution of the moment equations . in equation ( [ eq_diffeps ] ) , , and is a constant given by ( 24d ) of cd98 , for which we take the kolmogorov constant , while is given by the low viscosity limit of ( 11f ) of cd98 .we optionally included molecular dissipation by restoring the largest ( i.e. second order moment ) terms containing the kinematic viscosity ( i.e. , etc . ) .they are important when is of order unity rather than zero .hence , we included them in all the examples shown below .for the same reason , we optionally included a term in ( [ eq_epsilon ] ) that accounts for molecular dissipation effects .we use , , and where while elsewhere ( as in cd98 ) .the local limit of ( [ eq_epsilon ] ) , with and , fails to yield reasonable filling factors and was thus avoided . to calculate the mean stratification we solve where .equations ( [ eq_hydrostat])([eq_temp ] ) are taken from equation ( 103 ) of ( excluding the higher order term in his equation ( 104 ) ) , and from equation ( 18c ) of cd98 ( for the latter , was substituted to to account for non - boussinesq effects , canuto , priv .communication ) . in the stationary limit , ( [ eq_temp ] )yields where the kinetic energy flux is given by and the radiative flux .non - locality is represented by the terms , , , and , which require third order moments ( toms ) .using the downgradient approach ( dga ) for the toms found qualitatively acceptable results for the convective flux and filling factors . but quantitatively the results were not satisfactory , which corroborated the shortcomings of the dga found in and in . to improve over the dga , one must solve the dynamic equations for the toms ( see ) .we investigated both the fully time dependent case which requires 6 additional differential equations and various approximations of their stationary limit which do not entail further differential equations . 
here , we consider the following `` intermediate '' model for the toms : we take the stationary limit of the dynamic equations given in , but neglect the boussinesq terms which depend on .this model is similar in robustness to the dga , but yields significantly better results .for the boundary conditions we impose at the top , while at the bottom .this choice permits stable numerical solutions which are consistent with the boundary conditions for the numerical simulations .the equation for the mean pressure is constrained by varying at the top such that for a given the resulting density stratification is consistent with mass conservation .we have used simulation data representing two configurations : model 3j , a convection zone embedded between two strongly stable layers , and model 211p , a stably stratified layer embedded between two unstable layers ( with a small stable layer at the bottom , ) . in both cases , radiation transports at least 80% of the input energy and .each unstable layer has a thickness of .model 3j has an initial adiabatic temperature contrast of 3.5 and encompasses 4.2 while model 211p has a contrast of 6.0 and encompasses about 4.8 .the numerical simulations were done for grid points and have successfully been compared to higher resolution simulations ( grid points , ) .all simulation data shown here are statistical averages over many dozen sound crossing times .the moment equations were solved by centered finite differences on a staggered mesh .solutions were calculated using 72 grid points and were successfully verified by comparing them with results obtained from higher resolutions of 128 to 512 grid points .time integration was done by the euler forward method until a stationary state was reached after 1.2 to 2.5 thermal time scales .verification of stationarity was done by testing whether all errors in time derivatives were formally less than in relative units at all grid points ( i.e. far smaller than the truncation error ) and by checking the strict energy flux and mass conservation required for stationary solutions .[ fig1 ] compares the convective flux of model 211p with two convection models : the non - local model with intermediate tom and its local counterpart ( the stationary , local limit of the non - local model , see cd98 ) .clearly , only the non - local model successfully reproduces the simulations also in the central overshooting region which connects both convection zones of model 211p .[ fig2 ] compares the filling factor , i.e. the relative area in each layer covered by upwards flowing material , of the numerical simulations of model 3j with the filling factor computed from the non - local moment equations . for the latter we usedthe prescription described in cd98 .as is an important topological quantity describing the inhomogenous nature of convection , the reasonable agreement found here is a very promising result .it illustrates the importance of improving the toms , because the dga used in only indicated a correct trend , while the more complete tom used here also provides a closer quantitative agreement .the local convection model predicts a structureless .[ fig3 ] compares the convective flux from numerical simulations for model 3j with solutions from the moment equations using the intermediate tom . clearly, the latter has improved over the dga both qualitatively and quantitatively . 
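the pseudo-time relaxation described here is generic enough to illustrate on a stand-in problem. the sketch below relaxes a single diffusion-type equation with a source term to its stationary state on 72 grid points (matching the resolution quoted above), using euler-forward steps, a stationarity test on the time derivatives, and a check that the stationary flux divergence balances the source; the conductivity, source, tolerance, and boundary values are arbitrary placeholders, and the real calculation of course involves the full set of coupled moment equations rather than this scalar model:

import numpy as np

def relax_to_stationary(nz=72, tol=1e-9, max_steps=2_000_000):
    """euler-forward relaxation of du/dt = d/dz(K du/dz) + S to a stationary
    state on a uniform grid with fixed boundary values u = 0; K, S and tol
    are stand-ins, not the actual moment-equation closure."""
    z = np.linspace(0.0, 1.0, nz)
    dz = z[1] - z[0]
    K = 1.0 + 0.5 * np.sin(np.pi * z)            # stand-in conductivity
    S = np.exp(-((z - 0.5) / 0.1) ** 2)          # stand-in source term
    u = np.zeros(nz)
    dt = 0.25 * dz ** 2 / K.max()                # explicit stability limit
    for step in range(max_steps):
        flux = -0.5 * (K[1:] + K[:-1]) * np.diff(u) / dz   # staggered fluxes
        dudt = np.zeros(nz)
        dudt[1:-1] = -np.diff(flux) / dz + S[1:-1]
        u += dt * dudt
        if np.max(np.abs(dudt)) < tol:           # stationarity criterion
            break
    # at stationarity the discrete flux divergence balances the source
    residual = np.max(np.abs(np.diff(flux) / dz - S[1:-1]))
    return z, u, step, residual

z, u, steps, residual = relax_to_stationary()
print("steps to stationarity:", steps, " max |dF/dz - S|:", residual)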
except for the overshooting region the dga barely differed from the local model which is shown here for comparison .though the present convection model provides a substantial improvement , the convective flux still falls short in the middle of the convection zone of 3j and the extent of the lower overshooting zone is underestimated .this may be due to neglecting effects of radiative losses on the time scales and as well as because several contributions to the toms are neglected in our `` intermediate model '' , or due to the boussinesq treatment of the toms in , or incompleteness of equation ( [ eq_epsilon ] ) .[ fig4 ] compares local length scales as used in mixing length theory with the length scale obtained with the non - local moment equations for the case of model 211p .obviously , there is no that brings the length scales in agreement .thus , there is no alternative but to solve at least the full equations ( [ eq_epsilon])([eq_diffeps ] ) to obtain . in conclusion ,we have found not only qualitative , but also quantitative agreement between numerical simulations and the non - local moment equations , provided one avoids the dga for the toms and employs the stationary solution of their dynamic equations as suggested in appendix b of and neglects boussinesq type factors ( involving ) .moreover , we have found the mean values for t , p , and to be accurate to within 2% in comparison with 4% found for local models with optimized mixing length parameter .convective and radiative flux are typically accurate to within 20% .the improvements are largest in regions of weak convection and in stably stratified layers .we have used the closure constants suggested in and cd98 except for , which was increased from 0.2 to 0.5 , because this improved the results for the planetary boundary layer ( canuto priv .communication ) and enhanced the numerical stability .we have not found the dga to be able to yield a similar agreement for both 3j and 211p even when tuning the closure constants individually for 3j and 211p .the situation is even worse for the local model . in will deal with this problem in detail .finally , while each 3d simulation took between several days and several weeks on a modern workstation to obtain a thermally relaxed solution , the moment equations took a couple of minutes to half an hour .this holds true already for a low numerical resolution and for an explicit time integration method .the results found here are promising for the application of the non - local convection model in particular to envelope convection zones of a and f stars , because they feature a similar range of convective efficiency , thickness in terms of , and interaction among neighbouring convection zones . for a and fstars the thermal structure is not known in advance .hence , thermal relaxation and thus the gain on speed in comparison with numerical simulations is essential .the computational savings are very attractive also for problems studied on hydrostatic time scales , whenever the full information provided by a simulation is not needed .improvements of the convection model studied are possible , nevertheless it is a more promising basis for asteroseismological studies of pulsating a and f stars ( sct , dor , roap , etc . 
) than local convection models and also a new basis for related work such as diffusion calculations .detailed results for a broader range of physical parameters , in particular for efficient convection and deeper convection zones , and thus of high importance also for other types of stars , have yet to corroborate this study .i am indebted to h.j .muthsam for permission to use his simulation code and data .i am grateful to v.m .canuto and m.s .dubovikov for discussions on turbulent convection models .the research was performed within project _`` convection in stars '' of the austrian fonds zur frderung der wissenschaftlichen forschung .atroshchenko , i. n. , and gadun , a. s. 1994 , , 291 , 635 canuto , v. m. 1992 , , 392 , 218 canuto , v. m. 1993 , , 416 , 331 canuto , v. m. 1997 , , 482 , 827 canuto , v. m. , minotti , f. , ronchi , c. , ypma , r. m. , and zeman , o. 1994 , j. atm .sci . , 51 ( no .12 ) , 1605 canuto , v. m. , and dubovikov , m. s. 1998 , , 493 , 834 ( cited as cd98 ) cattaneo , f. , brummel , n. , toomre , j. , malagoli , a. , and hurlburt , n. 1991 , , 370 , 282 chan , k. w. , and sofia , s. 1996 , , 466 , 372 feuchtinger , m. u. 1998 , , 337 , l29 freytag , b. , ludwig , h .-g . , and steffen , m. 1996 , , 313 , 497 grossman , s. a. , narayan , r. , and arnett , d. 1993 , , 407 , 284 grossman , s. a. 1996 , , 279 , 305 hurlburt , n. e. , toomre , j. , massaguer , j. m. , and zahn j.p .1994 , , 421 , 245 kim , y .- c . , and chan , k. l. 1998 , , 496 , l121 kupka , f. 1999a , theory and tests of convection in stellar structure , a. gimenez , e. f. guinan and b. montesinos , asp conf . ser . 173 , 157 kupka , f. 1999b , paper i , to be submitted to kupka , f. , muthsam h. j. 1999 , paper ii , to be submitted to muthsam , h. j. , gb , w. , kupka , f. , liebich , w. , and zchling , j. 1995 , , 293 , 127 muthsam , h. j. , gb , w. , kupka , f. , and liebich , w. 1999 , newa , in print porter , d.h . , and woodward , p. r. 1994 , , 93 , 309 singh , h. p. , roxburgh , i. w. , and chan , k. l. 1995 , , 295 , 703 sofia , s. , and chan , k. l. 1984 , , 282 , 550 steffen , m. , and ludwig , h .-1999 , theory and tests of convection in stellar structure , a. gimenez , e. f. guinan and montesinos , asp conf .173 , 217 stein , r. f. and nordlund , 1998 , , 499 , 914 xiong , d. r. 1986 , , 167 , 239 xiong , d. r. , cheng , q. l. , and deng , l. 1997 , , 108 , 529
the non - local hydrodynamic moment equations for compressible convection are compared to numerical simulations . convective and radiative flux typically deviate less than 20% from the 3d simulations , while mean thermodynamic quantities are accurate to at least 2% for the cases we have investigated . the moment equations are solved in minutes rather than days on standard workstations . we conclude that this convection model has the potential to considerably improve the modelling of convection zones in stellar envelopes and cores , in particular of a and f stars .
during the last thirty years high performance computing ( hpc ) has become an increasingly - important tool in scientific research .hpc studies enhance understanding of experimental findings , allow researchers to test theories on model systems , and even make it possible to investigate phenomena which can not be investigated via classical experiments .one class of computer experiments is of special interest : molecular dynamics ( md ) simulations .md is used to simulate materials on an atomic ( or coarser - grained ) level using various interaction models . through advances in compute capabilities and algorithms , md simulationshave gradually expanded their range of applicability from modeling tiny systems of a few hundred atoms for up to a few thousand time steps , to performing short multi - billion atom simulations or multi - billion time - step simulations of smaller systems .while this is already impressive in itself , a single cubic centimeter of matter contains on the order of atoms , and to model only one second of its time propagation , time steps ( typically a femtosecond each ) would be required . therefore , the interest in accelerating md simulations is unstinting and of great interest for many computational scientists. easily programmable graphics cards ( gpus ) represent a disruptive technology development that allows radical departure from recent years gradual improvements in md simulation speed . by harnessing the compute capability of gpus, md practitioners will be able to simulate much larger systems for much longer simulated times .gpus represent a jump in the performance - to - cost ratio of at least a factor of five .gpus also achieve more flops - per - watt than corresponding cpu hardware , making next - generation gpu - based hpc supercomputers more feasible from an operating energy cost perspective .the cuda programming language is currently the most widely used programming model for gpus .since its introduction , many scientific programmers have used cuda to write extremely fast software , thereby enabling previously - impossible investigations . among thoseare also a number of md codes which have shown speed - ups of 5 - 100x over existing cpu - based codes . in this paper , we present our own implementation of a gpu - md code called lammps , which is introduced as an extension to the widely used md code lammps . with its 26 different force fields , lammps can model atomic , polymeric , biological , metallic , granular , and coarse - grained systems up to 20 times faster than a modern quad core workstation by harnessing a modern gpu . at the same time it offers unprecedented multi - gpu support for an md code . 
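For orientation, the scale argument above can be made explicit with a standard back-of-the-envelope estimate; the figures below assume ordinary condensed matter (of order 1 g cm^-3 with atomic masses around 10 g/mol) and the femtosecond time step mentioned in the text, and are an assumption of this sketch rather than a quote:

\[ N_{\rm atoms}(1\,{\rm cm}^3) \sim \frac{1\,{\rm g\,cm^{-3}}}{10\,{\rm g\,mol^{-1}}}\,N_A \approx 10^{22}\!-\!10^{23}, \qquad N_{\rm steps}(1\,{\rm s}) \sim \frac{1\,{\rm s}}{1\,{\rm fs}} = 10^{15}. \]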
by providing very effective scaling of simulations on up to hundreds of gpus, lammps enables scientists to harness the full power of the world s most advanced supercomputers , such as the world s fastest supercomputer , the tianhe-1a at the chinese national supercomputing center in tianji .we start with a description of the design objectives of our implementation and an overview of the features of lammps .then we discuss aspects of our gpu implementations of lammps s pair force calculations .performance results are then presented for various md simulations on single gpus .this is followed by a discussion of strategies that enable gpu - based md codes to scale well on systems with many gpus .we also report and analyze lammps performance results on ncsa s lincoln cluster , using up to 256 gpus .parameters of the benchmark simulations are listed in appendix [ sec : app_simulations ] and hardware configuration are given in appendix [ sec : app_hardware ] .numerous gpu - md codes have been under development during the past several years . some of those are new codes ( hoomd , acemd ) , others are extensions or modifications of existing codes ( e.g. namd , amber , lammps ) .most of these projects are of limited scope and can not compete with the rich feature sets of legacy cpu - based md codes .this is not surprising considering the amount of development time which has been spent on the existing codes ; many of them have been under development for more than a decade .furthermore , some of these gpu - md codes have been written to accelerate specific compute - intensive tasks , limiting the need to implement a broad feature set .our goal is to provide a gpu - md code that can be used for simulation of a wide array of materials classes ( e.g. glasses , semiconductors , metals , polymers , biomolecules , etc . ) across a range of scales ( atomistic , coarse - grained , mesoscopic , continuum ) .lammps can perform such simulations on cpu - based clusters .it is a classical md code that been under development since the mid 1990s , is freely - available , and includes a very rich feature set . since building such simulation software from scratch would be an enormous task , we instead leverage the tremendous effort that has gone into lammps , and enable it to harness the compute power of gpus .we have written a lammps `` package '' that can be built along with the existing lammps software , thereby preserving lammps rich feature set for users while yielding tremendous computational speedups .other important lammps features include an extensive scripting system for running simulations , and a simple - to - extend and modular code infrastructure that allows for easy integration of new features .most importantly it has an mpi - based parallelization infrastructure that exhibits good scaling behavior on up to thousands of nodes .finally , starting with an existing code like lammps and building gpu versions of functions and classes one by one allows for easy code verification .our objectives can be summarized as follows ( in order of decreasing priority ) : a. maintain the rich feature set and flexibility of lammps , b. achieve the highest possible speed - ups , c. allow good parallel scalability on large gpu - based clusters , d. minimize code changes , e. write the code so that it is easy to maintain , f. include gpu support for the full list of lammps capabilities , g. 
make the gpu capabilities easy for lammps users to invoke .all of these design objectives have implications for design decisions , yet in many cases they are competing objectives .for example objective ( i ) implies that the different operations of a simulation have to be done by different modules , and that the modules have to be able to be used in any combination requested by the user .this in turn means that data , such as the particle positions , are loaded multiple times during a single simulation step from the device memory , which results in a considerably negative effect on the performance of the simulation .another slight performance hit is caused by the use of templates for the implementation of pair forces and communication routines .while this greatly enhances maintainability , it adds some computational overhead . by keeping full compatibility with lammpswe were able to minimize the gpu - related changes that users will need to make to existing input scripts . in order to use lammps is often enough to add the line `` accelerator cuda '' at the beginning of an existing input script .this triggers use of gpus for all gpu - enabled features in lammps , while falling back to the original cpu version for all others .another big influence on design decisions comes from the limiting factors of the targeted architecture . sincethose have been discussed in detail elsewhere , here we only list the most important factors : a. in order to use the full gpu , thousands or even tens of thousands of threads are needed , b. data transfer between the host and the gpu is slow , c. the ratio of device memory bandwidth to computational peak performance is much smaller than on a cpu , d. latencies of the device memory are large , e. random memory accesses on the gpu are serialized . f. 32 threads are executed in parallel considering ( b ) , we decided to minimize data transfers between device and host by running as many parts of the simulation as possible on the gpu .this distinguishes our approach from other gpu extensions of existing md codes , where only the most computationally - expensive pair forces are calculated on the gpu . 
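One way to see why limiting factor (b) drives this design choice is a rough per-step transfer estimate. The atom count, the assumption that positions and forces would be shipped in double precision every step, and the nominal 8 GB/s of a PCIe 2.0 x16 link are illustrative values, not measurements from the paper:

....
def transfer_time_ms(n_atoms, doubles_per_atom=6, bandwidth_gb_s=8.0):
    """Per-step host-device copy time for 3 position + 3 force doubles per
    atom at a nominal PCIe 2.0 x16 rate (sustained rates are lower)."""
    bytes_per_step = n_atoms * doubles_per_atom * 8
    return bytes_per_step / (bandwidth_gb_s * 1e9) * 1e3

print(round(transfer_time_ms(500_000), 2))   # ~3 ms of pure copy overhead per step
....

Keeping the particle data resident on the device, and only packing the communication buffers there, avoids paying this recurring cost on every time step.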
a work - flow chart of our implementation is shown in figure [ fig : workflow ] .work - flow , dashed boxes are done on the cpu , while solid boxes are done on the gpu.,scaledwidth=25.0% ] currently lammps supports 26 pair force styles ; long range coulomb interactions via a particle - particle / particle - mesh ( pppm ) algorithm ; nve , nvt , and npt integrators ; and a number of lammps `` fixes '' .in addition , pair force calculations on the gpu can be overlapped with bonded interactions and long range coulomb interactions if those are evaluated on the cpu .all of the bond , angle , dihedral , and improper forces available in the main lammps program can be used .simulations can be performed in single ( 32 bit floats ) and double ( 64 bit floats ) precision , as well as in a mixed precision mode , where only the force calculation is done in single precision while the time integration is done in double precision .in addition to the requirements of lammps , only the cuda toolkit ( available for free from nvidia ) is needed .currently only nvidia gpus with a compute capability of 1.3 or higher are supported .this includes geforce 285 , tesla c1060 as well as gtx480 and fermi c2050 gpus .the package is available under the gnu public license and can be downloaded from http://code.google.com/p/gpulammps/ , where detailed installation instructions and feature lists can be found .lammps , which is encapsulated in the user - cuda package of lammps , should not be confused with lammps `` gpu '' package , which has some overlapping capabilities ( see figures [ fig : bench_system_size ] and [ fig : scaling ] ) and is also available from the same website .we analyzed two variants of short range force calculations : a cell list approach and a neighbor list approach .while most cpu - based md codes use a neighbor list approach for the force calculation , it has been suggested that the cell list approach is better suited for gpu implementations .the idea of the cell list approach is a spatial decomposition of the simulation box into a regular grid of small sub - cells , with a maximum number of atoms per cell .because lammps uses neighbor lists , additional effort is required to re - order the existing data structures for the gpu calculation and to convert the data back into the original lammps format for every usual computation not done on the gpu . in order to implement this idea on the gpu ,we associate every _ cell _ with a cuda thread _ block _ and have each of the _ threads _ of it calculate the forces for one _ particle _ in the cell .furthermore it is necessary to choose the cell size ( see fig . [fig : cell_lists](a ) ) and the maximum number of atoms per cell . for a given force cut - off radius , we choose in order to keep the average distance between particles in the cell and to limit the frequency of re - assigning atoms to their cells .also , should be large enough to contain at least 32 particles in order to not to leave gpu threads idle .accordingly , is automatically chosen as a multiple of 32 , depending on the particle density .[ fig : cell_lists ] when performing the force calculations in the cell list approach , at least two more optimizations can be used .the first is to use newton s third law to save half of the force calculation time . 
in 2d , forces need to be explicitly computed for only 4 of 8 the neighboring cells , with the other 4 obtained via newton s third law during other cells updates .figure [ fig : cell_lists](b ) depicts an example of such an update pattern , with the explicitly - computed neighbors of cell e connected with cell e by a solid black line , and the other 4 neighbors of cell e connected with cell e by solid gray lines .every cell then follows this pattern , and the interactions between all neighboring cells are then considered exactly once , as verified for cell e. in 3d , only 13 of the 26 neighboring cells are explicitly considered .note , however , that not every selection of 13 neighboring cells fulfills the required periodicity .execution of gpu thread blocks can be in any order , whether in sequence or in parallel .therefore , write conflicts may occur .for example , in figure [ fig : cell_lists](b ) , cell a and cell d might try to update the forces in cell b at the same time . in order to avoid such a write conflict and a resulting error in the calculation ,the code has been written to execute only non - interfering groups of cells simultaneously .if only one neighbor shell needs to be considered , there are six such groups in 2d and 18 such groups in 3d .this does not significantly affect performance since groups are executed , each in approximately of the original time .the second optimization is the use of shared memory for the positions of the particles in the neighboring cells . if a cell contains more atoms than will fit in shared memory , the particles have to be loaded to shared memory in groups one after another . for a more detailed discussion on this topic ,see . in designing a neighbor list approach that uses blocks of threads, it becomes clear that there are two main ways that the force calculation work can be divvied up among the threads .the first possibility is to use one thread per atom ( tpa ) , where the thread loops over all of the neighbors of the given atom .the second possibility is to use one block per atom ( bpa ) , where each of the threads in the block loop over its designated portion of the neighbors of the given atom . in the following , pseudo - code for both algorithmsare given : tpa algorithm : .... 1 i = blockid*threadsperblock+threadid ; 2 load(i ) // coalesced access 3 for(jj = 0 ; jj < numneigh[i ] ; 4 jj++ ) { 5 j < - neighbors[i][jj ] 6 load(j ) //random access 7 ftmp+=calcpairforce(i , j ) 8 } 9 10 ftmp - > f[i ] // coalesced access ....bpa algorithm : .... 1 i = blockid 2 load(i ) // coalesced access 3 for(jj = 0 ; jj < numneigh[i ] ; 4 jj+=threadsperblock ) { 5 j < - neighbors[i][jj ] 6 load(j ) //random access 7 ftmp+=calcpairforce(i , j ) 8 } 9 reduce(ftmp ) 10 ftmp - > f[i ] // coalesced access ....both algorithms ostensibly have the same number of instructions ; however , when considering looping it becomes clear that the bpa algorithm requires the execution of a larger total number of lines of code .the bpa algorithm also requires the relatively expensive reduction of that is not required by the tpa algorithm .bpa also requires use of a much larger total number of blocks . 
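A serial sketch may help clarify the two work divisions. The functions below only emulate, on the CPU, how the neighbor loop of one atom is split: tpa lets one "thread" own the whole loop, while bpa gives each thread of a block an interleaved slice of the neighbor list and then reduces the partial sums. The toy pair term, the thread count, and the timing-based selection (which anticipates the setup-time mini benchmark described a little further below) are illustrative stand-ins, not code from the package itself:

....
import time
import numpy as np

def pair_contribution(xi, xj, rc=2.5):
    """Toy Lennard-Jones pair term (a stand-in for the real force kernel)."""
    r2 = float(np.sum((xi - xj) ** 2))
    if r2 >= rc * rc or r2 == 0.0:
        return 0.0
    inv6 = 1.0 / r2 ** 3
    return 4.0 * inv6 * (inv6 - 1.0)

def per_atom_tpa(i, x, neigh):
    """'tpa': a single thread owns atom i and loops over all its neighbors."""
    return sum(pair_contribution(x[i], x[j]) for j in neigh[i])

def per_atom_bpa(i, x, neigh, threads_per_block=32):
    """'bpa': each thread of the block takes an interleaved slice of the
    neighbor list; the per-thread partial sums are then reduced."""
    partial = [sum(pair_contribution(x[i], x[j])
                   for j in neigh[i][t::threads_per_block])
               for t in range(threads_per_block)]
    return sum(partial)   # the reduction step of the bpa listing

def pick_variant(x, neigh, n_trial=50):
    """Setup-time mini benchmark: time both variants on a few atoms and keep
    the faster one (the adaptive selection described further below)."""
    timings = {}
    for name, fn in (("tpa", per_atom_tpa), ("bpa", per_atom_bpa)):
        t0 = time.perf_counter()
        for i in range(min(n_trial, len(x))):
            fn(i, x, neigh)
        timings[name] = time.perf_counter() - t0
    return min(timings, key=timings.get)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=(200, 3))
neigh = [[j for j in range(len(x)) if j != i
          and float(np.sum((x[i] - x[j]) ** 2)) < 2.5 ** 2]
         for i in range(len(x))]
print(pick_variant(x, neigh))
....

On the GPU it is precisely the reduction and the interleaved access pattern of bpa that change the memory behaviour discussed in the following paragraphs.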
for further clarification , table [ tab : bpa - tpa - nexec ] lists the number of times each line of code is executed , taking into account the number of blocks used , and considering that 32 threads of each block are executed in parallel ..[tab : bf ] number of executions per line for the bpa and tpa algorithms [ cols="^,^,^ " , ] while this seems to indicate that tpa would always be faster , one has to take into account cache usage as well . in order to reduce random accesses in the device memory while loading the neighbor atoms ( limiting factor ( e ) ), one can cache the positions using the texture cache .( we also tested global cache on fermi gpus , but it turns out to be slower due to its cache line size of 128 bytes . )this strategy improves the speed of both algorithms considerably , but it helps bpa more than tpa .the underlying reason is that less atoms are needed simultaneously with bpa than with tpa . as a result, bpa allows for better memory locality , and therefore the re - usage of data in the cache is increased ( assuming atoms are spatially ordered ) .revisiting table [ tab : bpa - tpa - nexec ] makes it evident that this better cache usage becomes increasingly important with an increasing number of neighbors , corresponding to an increased pair cutoff distance .consequently , one can expect a crossover cutoff for each type of pair force interaction , where tpa is faster for smaller cutoffs and bpa for larger .unfortunately it is hard to predict where this cutoff lies .it not only depends on the complexity of the given pair force interaction , but also on the hardware architecture itself .therefore , a short test is the best way to determine the crossover cutoff .the timing ratios shown in figure [ fig : bench_bpavstpa ] indicate that the force calculation time can depend significantly on the use of bpa or tpa .generally the differences are larger when running in single precision than when running in double precision . for the lj system ,an increase of the cutoff from to can turn the 30% tpa advantage into a 30% bpa advantage .therefore we decided to implement both algorithms , and allow dynamic selection of the faster algorithm using a built - in mini benchmark during the setup of the simulation .this ensures that the best possible performance is achieved over a wide range of cutoffs .while our particular findings are true only for nvidia gpus , one can expect that similar results would be found on other highly parallel architectures with comparable ratios of cache to computational power . ) ; hardware : cl ( see appendix [ sec : app_hardware]),scaledwidth=45.0% ] we tested the cell list approach and the neighbor list approach for a small lj system as a function of cutoff radius ( see fig .[ fig : cell_neigh ] ) .for the neighbor list approach , three distinct regions can be seen , as labeled in the figure . for regions ia and ib the tpa method is faster than the bpa method . above bpa algorithm is faster than the tpa method , so the code automatically switches to bpa , resulting in a different slope for region ii .the two different slopes in ia and ib are most likely a result of the limited texture cache size . 
for small cutoffs ,most neighbors fit into the texture cache , facilitating efficient re - usage of data .but at some point the collective number of neighbors becomes large enough that the texture cache can no longer be used efficiently .this changes the scaling behavior as a function of the cutoff radius .figure [ fig : cell_neigh ] clearly demonstrates that the cell - list - based force evaluation is considerably slower than the neighbor - list - based approach for all cut - offs . in this section, we will make a simple argument why this is not only true for the above example , but has to be expected in general .figure [ fig : cell_neigh ] is based on an earlier program version that still featured both cell and neighbor lists . due to the weak performance of the cell list approach ,we have completely dropped it and have focused our efforts on the optimization of the neighbor - list - based force calculation . while further improvements might have been possible for the cell list approach as well , the superiority of the neighbor list approach appears to be inevitable , as explained below . particles . in regioni ( ) the tpa algorithm is used , while in region ii the bpa algorithm is used .( the faster algorithm is automatically selected for each region . ) in sub - region ia texture cache is used effectively by the tpa algorithm , but in region ib the cache must be flushed frequently .the jumps in the cell list curve are caused by the gpu requirement of being a multiple of 32 and the resulting unsteady proportion of started threads versus those that are actually needed .system : lj ( see appendix [ sec : app_simulations ] ) ; hardware : ws ( see appendix [ sec : app_hardware]),scaledwidth=45.0% ] obviously , the time for processing a single interaction force consists of two parts : memory access ( i.e. reading the other atom s position ) and the evaluation of the force formula .navely one might assume that the total time needed for all force calculations equals times the number of interactions .however , both the cell and the neighbor list algorithms first load all potential interaction partners to check if they are within the cut - off radius .whenever one thread finds an atom close enough ( ) and evaluates the force formula , the other threads processing interactions with have to wait until every thread in the warp has completed its calculations .therefore , the time for both memory access and for the evaluation of the force formula scale with the number of possible interaction partners , i.e. it is reasonable to say .still both factors depend on which algorithm is chosen ( cell or neighbor list ) . to determine which is faster , we examine the ratio of their computational times : for geometric reasons , .figure [ fig : cell_benchmark_explanation](a ) illustrates the 3d situation in a 2d sketch .the cell list approach requires loading all atom positions from surrounding ( cubic ) cells , each of edge length , while the neighbor list includes only atoms within a sphere of radius .the cell list approach is wasteful in the sense that many non - interacting atoms are loaded into memory .since the cell list approach requires the loading of roughly 3.3 times more data into memory than the neighbor list approach , and since the time for the evaluation of the force formula is the same in both cases , the cell list approach can only be faster if its memory access time is smaller. this could be possible due to coalesced memory accesses that can be done in the cell list approach . 
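The geometric factor quoted above can be checked directly. Assuming the cells have an edge length equal to the cutoff (an assumption of this sketch) and that, as described earlier, each cell processes its own atoms plus those of 13 of its 26 neighbors, the ratio of loaded data volumes is:

....
import math

r_c = 1.0                      # cutoff radius; cells are assumed to have edge r_c
cells_read = 1 + 13            # own cell plus 13 of the 26 neighbor cells
v_cells = cells_read * r_c**3  # volume actually loaded by the cell list
v_sphere = 4.0 / 3.0 * math.pi * r_c**3   # volume needed by the neighbor list
print(v_cells / v_sphere)      # ~3.34, i.e. the "roughly 3.3" quoted in the text
....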
in order to find out whether this is realistic , we model the situation with two parameters : * : the factor by which the coalesced memory accesses are faster than random accesses ( ) .* : the fraction of which is assumed to be spent on memory accesses ( ) .clearly , the cell lists need both high and high in order to gain the advantage with their faster memory accesses . with a little algebra ( see appendix [ sec : calc_comp ] ) , we can quantify some limits for and . in order to make the cell list method viable ,its memory accesses have to be at least 3.3 times faster than the memory accesses used in the neighbor list method , and at least 70% of the total neighbor list force calculation time has to be spent on memory accesses . in figure[ fig : alphagamma ] is shown for .while % is not unrealistic for computation of inexpensive pair forces , the use of texture reads in the pair force kernels limits the advantage of coalesced memory accesses considerably ( i.e. decreasing ) , thus making the cell list approach always slower than the neighbor list approach . in practice ,the product of and is always below the solid line in figure [ fig : alphagamma ] , making the neighbor list approach the preferred alternative .approaches its maximum performance only for system sizes larger than 200,000 particles .the same system was also run using hoomd version 0.9.1 and the `` gpu '' package of lammps .the cpu curve has also been plotted with a scaling factor of 40 to make it easier to see .system : lj ( see appendix [ sec : app_simulations ] ) ; hardware : ws ( see appendix [ sec : app_hardware]),scaledwidth=45.0% ] to assess the possible performance gains of harnessing gpus , we have performed benchmark simulations of several important classes of materials . both the regular cpu version of lammps and lammps run on our workstation b ( ws ) with an intel i7 950 quad core processor and a gtx 470 gpu from nvidia .simulations on the gpu were carried out in single , double , and mixed precision .we compare the loop times for 10,000 simulation steps .the results shown in figure [ fig : bench_single_gpu ] are proof of an impressive performance gain .even in the worst - case scenario , a granular simulation ( which has extremely few interactions per particle ) , the gpu is 5.3 times as fast as the quad core cpu when using single precision and 2.0 times as fast in double precision . in the best - case scenariothe speed - up reaches a factor of 13.5 for the single precision simulation of a silicate glass involving long range coulomb interactions .single precision calculations are typically twice as fast as double precision calculations , while mixed precision is somewhere in between .it is worthy to note that this factor of two between single and double precision is reached on consumer grade geforce gpus , despite the fact that their double precision peak performance is only 1/8th of their single precision peak performance .this is a strong sign that lammps is memory bound . 
generally ,the speed - up increases with the complexity of the interaction potential and the number of interactions per particle .additionally the speed - up also depends on the system size .as stated in section [ sec : design ] the gpu needs many threads in order to be fully utilized .this means that the gpu can not reach its maximum performance when there are relatively few particle - particle interactions .this point is illustrated in figure [ fig : bench_system_size ] , where the number of atom - steps per second is plotted as a function of the system size . as can be seen ,at least 200,000 particles are needed to fully utilize the gpu for this lennard - jones system .in contrast the cpu core is already nearly saturated with only 1,000 particles .all systems used to produce figure [ fig : bench_single_gpu ] were large enough to saturate the gpu .we have also plotted the performance curves for the gpu - md program hoomd ( version 0.9.1 ) and the `` gpu '' package of lammps in figure [ fig : bench_system_size ] for comparison purposes .the characteristics of hoomd are very similar to lammps .it reaches its top performance at about 200,000 particles .interestingly hoomd is somewhat slower than lammps at very high particle counts , while it is significantly faster at system sizes of 16,000 particles and below .this can probably be explained by the fact that hoomd is a single gpu code , whereas lammps has some overhead due to its multi - gpu capabilities .the `` gpu '' package of lammps reaches its maximum performance at about 8,000 particles . while it is faster than lammps for smaller systems ( and even faster than hoomd for fewer than 2,000 particles ), it is significantly slower than lammps and hoomd for this lj system at large system sizes .the reason is most likely that the `` gpu '' package of lammps only off - loads the pair force calculations and the neighbor list creation to the gpu , while the rest of the calculation ( e.g. communication , time integration , thermostats ) is performed on the cpu .this requires a lot of data transfers over the pci bus , which reduces overall performance and sets an upper limit on the speed - up . on the other hand , at very low particle countsthe cpu is very efficient at doing these tasks that are less computationally demanding and memory bandwidth limited .while a gpu has a much higher bandwidth to the device memory than does the cpu to the ram , the whole data set can fit into the cache of the cpu for small system sizes .so for the smallest system sizes , the cpu can handle these tasks more efficiently than the gpu , leading to the higher performance of the `` gpu '' package for small system sizes .p0.08p0.45p0.45 & * lj * & * silicate ( cutoff ) * + 90 & & + 90 & & + [ fig : scaling ] in order to simulate large systems within a reasonable wall clock time , modern md codes allow parallelization over multiple cpus . lammps s spatial decomposition strategy was specifically chosen to enable this parallelization , allowing lammps to run efficiently on modern hpc hardware .depending on the simulated system , it has been shown to have parallel efficiencies times the number of atoms divided by the wall clock time : .let denote the atom - steps per second for a single cpu run , and let denote the atom - steps per second for an cpus run .parallel efficiency , , is then the ratio of to multiplied by : . ] of 70% to 95% for up to several ten thousand cpu cores . 
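A minimal helper for the two metrics used in the footnote and throughout the benchmarks, atom-steps per second and parallel efficiency; the example numbers are made up for illustration:

....
def atom_steps_per_second(n_atoms, n_steps, wall_time_s):
    """Performance metric used in the scaling plots."""
    return n_atoms * n_steps / wall_time_s

def parallel_efficiency(c_1, c_n, n):
    """Ratio of the n-process rate c_n to n times the single-process rate c_1
    (the definition given in the footnote above)."""
    return c_n / (n * c_1)

# hypothetical numbers, for illustration only
c1 = atom_steps_per_second(500_000, 1000, 60.0)
c8 = atom_steps_per_second(500_000, 1000, 9.0)   # same problem on 8 processes
print(parallel_efficiency(c1, c8, 8))            # ~0.83, i.e. 83 %
....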
to split the work between the available cpus , lammps s spatial decomposition algorithm evenly divides the simulation box into as many sub - boxes as there are processors .mpi is used for communication between processors . during the run, each processor packs particle data into buffers for those particles that are within the interaction range of neighboring sub - boxes .each buffer is sent to the processor associated with each neighboring sub - box , while the corresponding data buffers from other processors are received and unpacked . while the execution time of most parts of the simulation should in principle scale very well with the number of processors , communication time is a major exception . with an increasing number of processors , the fraction of the total simulation time which is used for inter - processor communication increases .this is already bad enough for cpu - based codes , where switching from 8 to 128 processors typically doubles or triples the relative portion of communication . but for gpu - based codes , the situation is even worse since the compute - intensive parts of the simulation are executed much faster ( typically by a factor of 20 to 50 times ) .it is therefore understandable why it is essential to perform as much of the simulation as possible on the gpu .consider the following example : in a given cpu simulation , 90% of the simulation time is spent on computing particle interaction forces . running only that part of the calculation on the gpu , and assuming a 20-fold speed - up in computing the forces , the overallspeed - up is only a factor of 6.9 .if we then assume that with an increasing number of processors the fraction of the force calculation time drops to 85% in the cpu version , then the overall speed - up would be only a factor of 5.2 .on top of the usual parallel efficiency loss of the cpu code , additional parallel efficiency is lost for the gpu - based code if only calculating the pair forces on the gpu .if one processes the rest of the simulation on the gpu as well , the picture gets somewhat better .most of the other parts of the simulation are bandwidth bound , i.e. typical speed - ups are around 5 . taking the same numbers as before yields an overall speed - up of 15.4 and 13.8 , respectively .so if parts of the code that are less optimal for the gpu are also ported , not only will single node performance be better , but the code should also scale much better . while the above numbers are somewhat arbitrary , they illustrate the general trend . in order to minimize the processing time on the host , as well as minimize the amount of data sent over the pci bus , lammps builds the communication buffers on the gpu .the buffers are then transferred back to the host and sent to the other processors via mpi . similarly , received data packagesare transferred to the gpu and only opened there .actual measurements have been performed on ncsa s lincoln cluster , where up to 256 gpus on 128 nodes were used ( see figure [ fig : scaling ] ) .we compare weak and strong scaling behavior of lammps versus the cpu version of lammps for two systems : lj and silicate ( cutoff ) . in the weak scaling benchmark ,the number of atoms per node is kept fixed , such that the system size grows with increasing number of nodes . in this way, the approximate communication - to - calculation ratio should remain fairly constant , and the gpus avoid underutilization issues . 
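The overall speed-ups quoted in the example above follow from a simple Amdahl-type estimate; a quick check, using exactly the fractions and factors given in the text:

....
def overall_speedup(force_fraction, force_speedup, rest_speedup=1.0):
    """Amdahl-type estimate: pair forces are accelerated by `force_speedup`,
    everything else by `rest_speedup` (1.0 means it stays on the CPU)."""
    rest = 1.0 - force_fraction
    return 1.0 / (force_fraction / force_speedup + rest / rest_speedup)

print(round(overall_speedup(0.90, 20.0), 1))       # 6.9  (forces only on the GPU)
print(round(overall_speedup(0.85, 20.0), 1))       # 5.2  (more processors)
print(round(overall_speedup(0.90, 20.0, 5.0), 1))  # 15.4 (rest on the GPU too)
print(round(overall_speedup(0.85, 20.0, 5.0), 1))  # 13.8
....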
in the strong scaling benchmark ,the total number of atoms is kept fixed regardless of the number of nodes used .this is done in order to see how much a given fixed - size problem can be accelerated .note that in figure [ fig : scaling ] , we plot the number of quad - core cpus rather than the number of individual cores .please also note that in general the lincoln - cluster would not be considered a gpu-``based '' cluster since the number of gpus per node is relatively small and two gpus share a single pcie2.0 8x connection .this latter issue represents a potential communication bottleneck since there are synchronization points in the code prior to data exchanges .consequently , both gpus on a node attempt to transfer their buffers at the same time through the same pcie connection . on systems where each of the ( up to four ) gpus of a node has its own dedicated pcie2.0 16x slot, the required transfer time would be as little as one fourth of the time on lincoln , thus allowing for even better scaling .since lincoln is not intended for large - scale simulations , it features only a single data rate ( sdr ) infiniband connection with a network bandwidth that can become saturated when running very large simulations .nevertheless , figure [ fig : scaling ] shows that very good scaling is achieved on lincoln .there , the number of atom - steps per second ( calculated by multiplying the number of atoms in the system by the number of executed time - steps , and dividing by the total execution time ) is plotted against the number of gpus and quad - core cpus that were used .we tested two different systems : a standard lennard - jones system ( density 0.84 , cutoff 3.0 ) , and a silicate system that uses the buckingham potential and cutoff coulombic interactions ( density 0.09 , cutoff 15 ) .while keeping the number of atoms per node constant , the scaling efficiency of lammps is comparable to that of regular cpu - based lammps . even at 256 gpus ( 128 nodes ), a 65 % scaling efficiency is achieved for the lennard - jones system that includes 500,000 atoms per node . anda surprising 103 % scaling efficiency is achieved for the silicate system run on 128 gpus ( 64 nodes ) and 34,992 atoms per node .this means that for the silicate system , 128 gpus achieved more than 65 times as many atom - steps per second than 2 gpus . in this case , a measured parallel efficiency slightly greater than unity is probably due to non - uniformities in the timing statistics caused by other jobs running on lincoln at the same time .we were also able to run this lennard - jones system with lammps s `` gpu '' package .as already seen in the single gpu performance , the gpu package is about a factor of three slower than lammps for this system .the poorer single gpu performance leads to slightly better scaling for lammps s gpu package . comparing the absolute performance of lammps with lammps at 64 nodes gives a speed - up of 6 for the lennard - jones system and a speed - up of 14.75 for the silicate system . translating that to a comparison of gpus versus single cpu cores means speed - ups of 24 and 59 , respectively .such larger speed - ups are observed up to approximately 8 nodes ( 16 gpus ) in the strong scaling scenario , where we ran fixed - size problems of 2,048,000 lennard - jones atoms and 139,968 silicate atoms on an increasing number of nodes . 
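Two of the figures quoted above can be reproduced with one-line arithmetic, assuming four cores per quad-core CPU and the reported 103 % weak-scaling efficiency between 2 and 128 GPUs:

....
# GPU vs. quad-core speed-ups of 6 and 14.75 translated to GPU vs. single core
print(6.0 * 4, 14.75 * 4)          # 24.0 and 59.0

# weak scaling of the silicate system: 103 % efficiency going from 2 to 128 GPUs
print(1.03 * 128 / 2)              # ~65.9, i.e. "more than 65 times" the 2-GPU rate
....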
with 32 gpus ( 16 nodes )the number of atoms per gpu gets so small ( 64,000 and 4,374 atoms , respectively ) that the gpus begin to be underutilized , leading to much lower parallel efficiencies ( see figure [ fig : bench_system_size ] ) . at the same time, the amount of mpi communication grows significantly .in fact , for the silicate system with its large 15 cutoff , each gpu starts to request not only the positions of atoms in neighboring sub - boxes , but also positions of atoms in next - nearest neighbor sub - boxes .this explains the sharp drop in parallel efficency seen at 32 gpus . in consequence ,256 gpus can not simulate the fixed - size silicate system significantly faster than 16 gpus . on the other hand ,those 16 gpus on 8 nodes are faster than all 1024 cores of 128 nodes when using the regular cpu version of lammps .we also tested the `` gpu '' package of lammps for strong scaling on the lennard - jones system . for this test ,its parallel efficency is lower than that of lammps up to 32 gpus . for more than 32 gpus, the `` gpu '' package shows stronger scaling than lammps .this can be ascribed to lammps s faster single node computations and subsequently higher communication - to - computation ratio .( note that each of the versions of lammps discussed here have the same mpi communication costs . ) in lammps , the time for the mpi data transfers actually reaches 50 % of the total runtime when using 256 gpus .a simple consideration explains why the mpi transfers are a main obstacle for better scaling .since the actual transfer of data can not be accelerated using gpus , it constitutes the same absolute overhead as with the cpu version lammps . considering that the rest of the code runs 15 to 60 times faster on a process - by - process basis, it is obvious that if 1 % to 5 % of the total time is spent on mpi transfers in the cpu lammps code , communication can become the dominating time factor when using the same number of gpus with lammps . ) ; hardware : lincoln ( see appendix [ sec : app_hardware]),scaledwidth=45.0% ] that the mpi transfer time is indeed the main cause of the poor weak scaling performance can be shown by profiling the code .figure [ fig : bench_mpi ] shows the total simulation time of the lennard - jones system versus the number of gpus used .it is broken down into the time needed for the pair force calculation , a lower estimate of the mpi transfer times and the rest .the lower estimate of the mpi transfer time does not include any gpu communication .it only consists of the time needed to perform the mpi send and receive operations while updating the positions of atoms residing in neighboring sub - boxes . all other mpi communicationis included in the `` other '' time .clearly , almost all of the increase in the total time needed per simulation step can be attributed to the increase in the mpi communication time .furthermore at 64 gpus a sharp increase in the mpi communication time is observed .we presume that this can be attributed to the limited total network bandwidth of the single data rate infiniband installed in lincoln . considering the relatively modest communication requirements of an md simulation ( at least for this simple lennard - jones system ) ,this finding illustrates how important high throughput network connections are for gpu clusters . 
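The last point can be quantified with the same kind of estimate as before: the MPI transfer time stays roughly constant in absolute terms while the rest of the code is accelerated, so its share of the runtime grows. With the numbers quoted in the text (1 to 5 % communication in the CPU code, computation 15 to 60 times faster) one obtains, for example:

....
def comm_fraction_on_gpu(cpu_comm_fraction, compute_speedup):
    """Fraction of the runtime spent in MPI transfers once the non-MPI part
    of the code runs `compute_speedup` times faster (transfers unchanged)."""
    comm = cpu_comm_fraction
    compute = (1.0 - cpu_comm_fraction) / compute_speedup
    return comm / (comm + compute)

for c, s in [(0.01, 15.0), (0.05, 20.0), (0.05, 60.0)]:
    print(c, s, round(comm_fraction_on_gpu(c, s), 2))
# prints 0.13, 0.51 and 0.76, consistent with MPI transfers reaching
# roughly half of the total runtime at 256 GPUs
....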
in order to somewhat mitigate this problem ,we have started to implement lammps modifications that will allow a partial overlap of force calculations and communication .preliminary results suggest that up to three quarters of the mpi communication time can be effectively hidden by that approach . ) ; hardware : lincoln ( see appendix [ sec : app_hardware]),scaledwidth=45.0% ] as a further example of what is possible with lammps , we performed another large - scale simulation . using 288 gpus on lincoln, we ran a one billion particle lennard - jones system ( density : 0.844 , cutoff : 2.5 ) .this simulation requires about 1 tb of aggregate device memory . to the best of our knowledge ,this is the largest md simulation run on gpus to date . in figure[ fig : bench_billion ] loop times for 100 time - steps are shown for lincoln , red storm ( a cray xt3 machine with 10368 processors sited at sandia national laboratories ) , and bluegene / l ( a machine with 65536 processor sited at lawrence livermore national laboratory ) .the data for the latter two machines was taken from the lammps homepage ( http://lammps.sandia.gov/bench.html ) . using 288 gpus , lincoln required 28.7 s to run this benchmark , landing between red storm using 10,000 processors ( 25.1 s ) and the bluegene / l machine using 32k processors ( 30.2 s ) .in this paper we have presented our own implementation of a general purpose gpu - md code that we call lammps .this code already supports 26 different force field types .we discussed multiple approaches for performing pair force calculations and concluded that an adaptive neighbor - list - based approach yields the best results .specifically , we have shown that the cell list approach is generally slower .if running on a quad - core workstation with a single gpu , users can expect a 5x to 14x reduction in time - to - solution by harnessing the gpu , depending on the simulated system class ( i.e. biomolecular , polymeric , granular , metallic , semiconductor ) . with a strong focus on scalability , lammps can efficiently use the upcoming generation of gpu - based hybrid clusters , such as tianhe-1a , nebulae and tsubame 2.0 ( the first , third , and fourth fastest supercomputers on the november 2010 top500 list ) . by performing scaling benchmarks on up to 256 gpus, lammps was shown to achieve general speed - ups of 20x to 60x using the latest generation of c1060s versus modern cpu cores , again depending on the simulated system class .these numbers imply that using lammps on a 32 node system with 4 gpus per node can achieve the same overall speed as the original cpu version of lammps on a conventional cpu - based cluster with 1024 nodes .this work was partially supported by the national center for supercomputing applications by providing access to the lincoln gpu cluster .sandia national laboratories is a multi - program laboratory managed and operated by sandia corporation , a wholly owned subsidiary of lockheed martin corporation , for the u.s .department of energys national nuclear security administration under contract de - ac04 - 94al85000 .* * lj*. potential : lennard - jones ( lj / cut ) , cutoff : 2.5 , density : 0.84 , temperature : 1.6 . * * silicate ( cutoff)*. potential : buckingham + coulomb ( buck / coul / cut ) , cutoff : 15.0 , density : 0.09 , temperature : 600 k. * * silicate*. potential : buckingham + coulomb ( buck / coul / long ) , atoms : 11,664 , cutoff : 10.0 , density : 0.09 , long range coulomb solver : pppm ( precision : 2.4e-6 ) , temperature : 600 k. 
* * eam*. potential : embedded atom method ( eam) , atoms : 256,000 , cutoff : 4.95 , density : 0.0847 , temperature : 800 k. * * coarse grained*. potential : coarse grained systems ( cg - cmm) , atoms : 160,560 , cutoff : 15.0 , temperature : 300 k. * * rhodopsin*. potential : charmm force field + coulomb ( lj / charmm / coul / long) , atoms : 32,000 , cutoff : 10.0 , long range coulomb solver : pppm ( precision : 1e-7 ) , temperature : 300 k. * * granular*. potential : granular force field ( gran / hooke) , atoms : 1,152,000 , density : 1.07 , temperature : 19 .* workstation server a ( ws ) intel q9550 @ 2.8ghz 8 gb ddr2 ram @ 800 mhz mainboard : evga 780i 3xpcie2.0 16x 2 x nvidia gtx 280 centos 5.4 * workstation server b ( ws ) intel i7 950 @ 3.0ghz 24 gb ddr3 ram @ 1066 mhz mainboard : asus p6x58d - e 3xpcie2.0 16x 2 x nvidia gtx 470 centos 5.5 * gpu cluster ( cl ) 2 x intel x5550 @ 2.66ghz 48 gb ddr3 ram @ 1066 mhz mainboard : supermicro x8dtg - qf r 1.0a 4xpcie2.0 16x 4 x nvidia tesla c1060 scientific linux 5.4 * ncsa lincoln ( lincoln ) 192 nodes 2 x intel x5300 @ 2.33ghz 16 gb ddr2 ram 2 x nvidia tesla c1060 on one pcie2.0 8x ( two nodes share one s1070 ) sdr infiniband red hat enterprise 4.8in this section , we make use of the defintions from [ sec : comp_neigh_cell ] , e.g. is the time for processing a single interaction and the fraction $ ] of the time is assumed to be spent on memory accesses , i.e. : while the actual force calculation for one interaction is the same for both approaches , the time for memory access varies : it is assumed to be a factor faster for the cell list approach , due to the_ coalesced accesses_. the cell list will read neighbor cells and thus , while the neighbor list method will read atoms .we assume a homogeneous density and thus the same proportionality factor for both , i.e. ( ) : a. if _ all _ of the pair force time is used for memory accesses ( ) , then has to be at least 3.3 .b. if the memory accesses of the cell list approach take no time at all ( ) , then must still be %. 22 s. plimpton , j comp .phys . * 117 * , 1 - 19 ( 1995 ) .top500 supercomputing sites , http://www.top500.org/lists/2010/11 j.a .anderson , c.d .lorenz , and a. travesset , j. comp . phys .* 227 * , 5342 - 5359 ( 2008 ) .m. harvey , g. giupponi , and g. de fabritiis , j. chem .theory and comput . * 5 * , 1632 ( 2009 ) .phillips , r. braun , w. wang , j. gumbart , e. tajkhorshid , e. villa , c. chipot , r.d .skeel , l. kale , and k. schulten .j. comp . chem . * 26 * , 1781 - 1802 ( 2005 ) .http://www.ks.uiuc.edu/research/namd/ d.a .case , t.e .cheatham , iii , t. darden , h. gohlke , r. luo , k.m .merz , jr ., a. onufriev , c. simmerling , b. wang and r. woods . j. computat . chem . * 26 * , 1668 - 1688 ( 2005 ) . w.m .brown , p. wang , s.j .plimpton , and a.n .tharrington , comp .comm . * 182 * , 898 - 911 ( 2011 ) .programming guide for cuda toolkit 3.1.1 http://developer.download.nvidia.com/ compute / cuda/3_1/toolkit / docs/ nvidia_cuda_c_programmingguide_3.1.pdf j.a .van meel , a. arnold , d. frenkel , portegies , r.g .belleman . molecular simulation ,3 . ( 2008 ) , pp . 259 - 266 .lars winterfeld , accelerating the molecular dynamics program lammps using graphics cards processors and the nvidia cuda technology http://db-thueringen.de/ servlets / documentservlet?id=16406 daw , baskes , phys .lett . , * 50 * , 1285 ( 1983 ) .daw , baskes , phys .b , * 29 * , 6443 ( 1984 ) . 
shinoda , devane , klein , mol . simul . , * 33 * , 27 ( 2007 ) . shinoda , devane , klein , soft matter , * 4 * , 2453 - 2462 ( 2008 ) . mackerell , bashford , bellott , dunbrack , evanseck , field , fischer , gao , guo , ha , _ et al . _ , j. phys . chem . , * 102 * , 3586 ( 1998 ) . brilliantov , spahn , hertzsch , pöschel , phys . rev . e , * 53 * , 5382 - 5392 ( 1996 ) . silbert , ertas , grest , halsey , levine , plimpton , phys . rev . e , * 64 * , 051302 ( 2001 ) . zhang and makse , phys . rev . e , * 72 * , 011301 ( 2005 ) .
we present a gpu implementation of lammps , a widely - used parallel molecular dynamics ( md ) software package , and show 5x to 13x single node speedups versus the cpu - only version of lammps . this new cuda package for lammps also enables multi - gpu simulation on hybrid heterogeneous clusters , using mpi for inter - node communication , cuda kernels on the gpu for all methods working with particle data , and standard lammps c++ code for cpu execution . cell and neighbor list approaches are compared for best performance on gpus , with thread - per - atom and block - per - atom neighbor list variants showing best performance at low and high neighbor counts , respectively . computational performance results of gpu - enabled lammps are presented for a variety of materials classes ( e.g. biomolecules , polymers , metals , semiconductors ) , along with a speed comparison versus other available gpu - enabled md software . finally , we show strong and weak scaling performance on a cpu / gpu cluster using up to 128 dual gpu nodes .
multilayer networks are emerging as a powerful paradigm for describing complex systems characterized by the coexistence of different types of interactions .multilayer networks represent an appropriate descriptive model for real networked systems in disparate contexts , such as social , technological and biological systems .for example , global infrastructures are formed by several interdependent networks , such as power grids , water supply networks , and communication systems , and studying their properties require to account for the presence of such interdependencies .cell function and/or malfunction ( yielding diseases ) can not be understood if the information on the different nature of the interactions forming the interactome ( protein - protein interactions , signaling , regulation ) are not integrated in a general multilayer scenario .similarly , the complexity of the brain is encoded in the different nature of the interactions existing at the functional and the structural levels . a multilayer networksis composed of a set of networks forming its layers .nodes can be connected within and across layers .it has been shown that multilayer networks are much more fragile than isolated networks just because of the presence of interdependencies among the layers of the system .in particular , the fragility of the system increases as the number of layers increases .such a feature has an intuitive explanation . in the standard percolation model for multilayer networks ,the probability that a node is damaged equals to the probability that at least one of its interdependent nodes is damaged . as the number of layers increases , the probability of individual failures grows thus making the system more fragile .this scenario leads , however , to the conundrum : if the fragility of a system is increased by the number of layers of interactions , why are there so many real systems that display multiple layers of interactions ?further , the addition of new layers of interactions in a preexisting multilayer network has generally a cost , so it does nt seem reasonable to spend resources just to make the system less robust .the purpose of the current paper is to provide a potential explanation by introducing a new model for percolation in networks composed of multiple interacting layers . 
in the model, we will assume that a node is damaged only if all its interdependent nodes are simultaneously damaged .the model is perfectly equivalent to the standard one when the number of layer equals two .additional layers , however , provide the system with redundant interdependencies , generating backup mechanisms against the failure of the system , and thus making it more robust .the robustness of multilayer networks in presence of redundant interdependencies is here investigated using a message - passing theory ( also known as the cavity method ) .we build on recent advances obtained in standard interdependent percolation theory to propose a theory that is valid for multilayer networks with link overlap as long as the multilayer network is locally tree - like .this limitation is common to all message - passing approaches for studying critical phenomena on networks .corrections have been recently proposed on single networks to improve the performace of message - passing theory and similar approximations valid for loopy multilayer networks might be envisaged in the future .we consider a multilayer network composed of layers with .every layer contains nodes .exactly one node with the same label appears in every individual layer .nodes in the various layers sharing a common label are called _ replica nodes _ , and they are considered as interdependent on each other .nodes in the network are identified by a pair of labels , with and , the first one indicating the index of the node , and the second one standing for the index of the layer . for every node label , the set of replica nodes is given by the nodes corresponding to pairs of labels with with ( see figure [ fig : multiplex2 ] ) .when at least two replica nodes and are connected to two corresponding replica nodes and we say that the multilayer network displays link overlap . given a multilayer network as described above , we consider a percolation model where some of the nodes are initially damaged .we assume that the interdependencies are redundant , i.e. , every node can be active only if at least one its interdependent nodes is also active .we refer to this model as `` redundant percolation model . '' as an order parameter for the model , we define the so - called redundant mutually connected giant component ( rmcgc ) .the nodes that belong to the rmcgc can be found by following the algorithm : * the giant component of each layer is determined , evaluating the effect of the damaged nodes in each single layer ; * _ every replica node that has no other replica node in the giant component of its proper layer is removed from the network and considered as damaged _ ; * if no new damaged nodes are found at step ( ii ) , then the algorithm stops , otherwise it proceeds , starting again from step ( i ) .the set of replica nodes that are not damaged when the algorithm stops belongs to the rmcgc .the main difference with the standard percolation model on multilayer networks and the consequent definition of mutually connected giant component ( mcgc ) is that step ( ii ) must be substituted with `` every replica node that has at least a single replica node not in the giant component of its proper layer is removed from the network and considered as damaged , i.e. , if a replica node is damaged all its interdependent replica nodes are damaged '' .in particular , the rmcgc and the mcgc are the same for layers , but they differ as long as the number of layers . 
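A compact sketch of steps (i)-(iii) for a small multiplex is given below. It assumes the reading that, at convergence, a replica survives only if it lies in the giant component of its own layer and at least one of its interdependent replicas lies in the giant component of its layer (the membership condition spelled out with the message-passing equations further on); the toy layers and the damaged replica are arbitrary:

....
from collections import deque

def giant_component(nodes, edges):
    """Largest connected component of the graph restricted to `nodes`."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), set()
    for s in nodes:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w); queue.append(w)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def rmcgc(layers, damaged):
    """layers: list of edge lists, one per layer; damaged: set of (i, alpha).
    Returns the replica nodes in the redundant mutually connected giant
    component, following the iterative pruning described above."""
    n_layers = len(layers)
    nodes = {i for edges in layers for e in edges for i in e}
    active = {(i, a) for i in nodes for a in range(n_layers)} - set(damaged)
    while True:
        # step (i): giant component of every layer, among active replicas
        gc = [giant_component({i for i in nodes if (i, a) in active}, layers[a])
              for a in range(n_layers)]
        # steps (ii)-(iii): keep a replica only if it is in its own layer's
        # giant component and at least one interdependent replica is in its
        # layer's giant component
        new_active = {(i, a) for (i, a) in active
                      if i in gc[a]
                      and any(i in gc[b] for b in range(n_layers) if b != a)}
        if new_active == active:
            return active
        active = new_active

# toy example: M = 3 layers on 5 nodes, replica (2, 0) initially damaged
layers = [[(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)],
          [(0, 1), (1, 2), (2, 3), (3, 4)],
          [(0, 2), (2, 4), (1, 3)]]
print(sorted(rmcgc(layers, damaged={(2, 0)})))
....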
in the latter case ,the rmcgc naturally introduces the notion of redundancy among interdependent nodes .as we will see in the following , the main effect of redundancy is to let the robustness of the system increases as the number of layers increases .layers , and nodes is shown .every node has interdependent replica nodes with . in this figure ,triplets of replica nodes are also identified by their color . ]we assume that interactions within each layer are described by elements } ] ) or not ( }=0 ] in every layer where node is connected to node , i.e. , with }_{ij}=1 ] ) or not ( }=0 ] if and only if all the following conditions are met : * node is connected to node in layer , and both nodes and node are not damaged , i.e. , }_{ij}=1 ] .put together , the former conditions lead to the algorithm for the messages } ] therefore will equal one if at least one message is arriving to node from a neighboring node , while it will be equal to zero , otherwise . for and , otherwise . indicates in how many layers node is connected to the rmcgc assuming that node also belongs to the rmcgc , i.e. , v_ij&=&_=1^m .[ v ] therefore indicates the number of initially undamaged replica nodes that either receive at least one positive messages from nodes or are connected to the undamaged replica nodes .finally , the replica node belongs to the rmcgc if ( i ) it is not damaged , ( ii ) it is connected to the rmcgc in layer , and ( iii ) it receives at least another positive message in a layer .these conditions are summarized by _i&=&s_i(1-_n_(i)(1-n^[]_i ) ) + & & \{1-_ } .[ s1 ] the average number of replica nodes belonging to the rmcgc is computed as s=_=1^m_i=1^n _ i. [ s ] the system of eqs .( [ m1 ] ) , ( [ v ] ) , ( [ s1 ] ) , and ( [ s ] ) represents a complete mathematical framework to estimate the average size of the rmcgc for a given network and a given initial configuration of damage .the solution can be obtained by first iterating eqs .( [ m1 ] ) and ( [ v ] ) to obtain the values of the messages } ] and are either or .the variables can assume instead integer values in the range ] , and the probability that the replica node belongs to the rmcgc by . the message - passing algorithm determining the values of } ] indicates the preimposed degree of node in layer , if and , otherwise , and is the normalization factor indicating the total number of networks in the ensemble .averaging over the network ensemble allows us to translate the message - passing equations into simpler expressions for the characterization of the percolation transition .let us consider a random multilayer network obeying the probability of eq .( [ ens ] ) , and a random realization of the initial damage described by the probability of eq .( [ ps ] ) . the average message in layer , namely }=1}\right\rangle} ] .if there are no correlations between the degrees of a node in different layers , the degree distribution can be factorized as p(*k*)=_p^[](k^ [ ] ) , [ eq : pk ] where }(k) ] and }(z) ] of layer are given by h_0^[](x)&=&_kp^[](k ) x^k , + h_1^[](x)&=&_kp^[](k ) x^k-1.[slmcgc ] finally the average number of replica nodes in the rmcgc is given by s= _s_. if we consider the case of equally distributed poisson layers with average degree , we have that eq .( [ eq : pk ] ) is p^[](k)=z^ke^-z for every layer .then , using eqs .( [ su2 ] ) , one can show that , , and is determined by the equation s = p(1-e^-zs)\{1-[1-p+pe^-zs]^m-1}. 
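The self-consistent equation just given for equally distributed Poisson layers, S = p(1 - e^{-zS}) {1 - [1 - p + p e^{-zS}]^{M-1}}, can be solved by straightforward fixed-point iteration; a minimal sketch (here p is read as the probability that a replica node escapes the initial damage, as in the single-instance damage rule used below):

....
import math

def rmcgc_fraction(z, p, M, s0=0.9, tol=1e-12, max_iter=100_000):
    """Iterate S -> p (1 - e^{-zS}) { 1 - [1 - p + p e^{-zS}]^{M-1} }
    from a non-trivial starting value; the result is the fraction of replica
    nodes in the RMCGC (essentially 0 when only the trivial solution exists)."""
    s = s0
    for _ in range(max_iter):
        e = math.exp(-z * s)
        s_new = p * (1.0 - e) * (1.0 - (1.0 - p + p * e) ** (M - 1))
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# example: no initial damage (p = 1), three Poisson layers
for z in (1.5, 2.0, 2.5):
    print(z, round(rmcgc_fraction(z, p=1.0, M=3), 4))
# below the transition the iteration collapses onto the trivial solution S = 0
....

Starting from a value close to 1 selects the non-trivial branch whenever it exists, which is what makes the discontinuous nature of the transition visible in such a scan over z or p.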
[ p1 ] this equation has always the trivial solution .in addition , a nontrivial solution indicating the presence of the rmcgc , emerges at a hybrid discontinuous transition characterized by a square root singularity , on a line of points , determined by the equations h_z , p(s_c)&=&0 , +.|_s = s_c&=&0 , where h_z , p(s)&=&s - p(1-e^-zs ) + & & \{1-[1-p+pe^-zs]^m-1}=0 . for is a rmcgc , for there is no rmcgc .the entity of the discontinuous jump at in the fraction of replica nodes in the rmcgc is given by .the percolation threshold as a function of the average degree of the network is plotted in figure for .it is shown that as the number of layers increases the percolation threshold decreases for every value of the average degree .additionally also the discontinuous jump decreases as the number of layer increases for very given average degree ( see figure ) .therefore as the number of layers increases the multilayer networks becomes more robust .is plotted versus the average degree of each layer for poisson multilayer networks with layers indicated respectively with with blue solid , red dashed , green dot - dashed and orange dotted lines . ] of the rmcgc at the percolation threshold , is plotted versus the average degree of each layer for poisson multilayer networks with layers indicated respectively with blue solid , red dashed , green dot - dashed and orange dotted lines . ] in this section , we compare the robustness of multilayer networks in presence of ordinary interdependencies and in presence of redundant interdependencies . to take a concrete example, we consider the case of a multilayer network with poisson layers , each layer having the same average degree . in this casethe fraction of replica nodes in the rmcgc is given by the solution of eqs .( [ p1 ] ) while the fraction of replica nodes in the mcgc is given by s=(1-e^-zs)^m .[ o ] in eq .( [ o ] ) , it is assumed that every replica node of a given node is damaged simultaneously ( with probability ) . on the contrary , in presence of redundant interdependenciesit is natural to assume that the initial damage is inflicted to each replica node independently ( with probability ) .therefore , in order to compare the robustness of the multilayer networks in presence and in absence of redundant interdependencies , we set , i.e. , replica nodes are not initially damaged , and compare the critical value of the average degree at which the percolation transition occurs respectively for the rmcgc and for the mcgc .additionally we will characterize also the size of the jump in the size of the rmcgc and the mcgc at the percolation transition . in fig .[ fig : comp_m ] , we display the values of and as a function of the number of layers for the rmcgc and the mcgc . for ,the two models give the same results as they are identical . for , differences arise . in presence of redundant interdependencies ,multilayer networks become increasingly more robust as the number of layers increases .this phenomenon is apparent from the fact that the rmcgc emerges for multilayer networks with an average degree of their layers which decreases as the number of layers increases . on the contrary , in ordinary percolationthe value of for the emergence of the mcgc is an increasing function of .additionally , the size of the discontinuous jumps at the transition point decreases with for the rmcgc , while increases with for the mcgc showing that the avalanches of failures have a reduced size for the rmcgc . 
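the hybrid transition point (p_c, S_c) discussed above is characterized by the simultaneous conditions h_{z,p}(S_c) = 0 and dh/dS(S_c) = 0. a possible numerical sketch, assuming scipy is available, is to solve this two-dimensional system with a root finder and a finite-difference derivative; the starting guess is arbitrary and may need tuning.

```python
import numpy as np
from scipy.optimize import fsolve

def h(S, z, p, M):
    Q = 1.0 - p + p * np.exp(-z * S)
    return S - p * (1.0 - np.exp(-z * S)) * (1.0 - Q ** (M - 1))

def critical_point(z, M, guess=(0.5, 0.5)):
    """Solve h = 0 and dh/dS = 0 simultaneously for (p_c, S_c);
    the S-derivative is approximated by a central finite difference."""
    def system(x):
        p, S = x
        eps = 1e-6
        dh = (h(S + eps, z, p, M) - h(S - eps, z, p, M)) / (2.0 * eps)
        return [h(S, z, p, M), dh]
    p_c, S_c = fsolve(system, guess)
    return p_c, S_c

for M in (2, 3, 4):
    print(M, critical_point(z=2.5, M=M))
```

sweeping z in a loop reproduces the curves p_c(z) and S_c(z) described in the figures: both decrease with the number of layers M.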
of the average degree as a function of the number of network layers .results for the rmcgc model are displayed as red diamonds .results for the mcgc model are denoted by blue triangles .( b ) height of the jump at the transition point as a function of the number of network layers . ] in this section , we compare the results obtained with eqs .( [ m1 ] ) , ( [ v ] ) , ( [ s1 ] ) , and ( [ s ] ) on a single instance of damage with the predictions the message - passing algorithm described in eq .( [ su2 ] ) characterizing the size of the rmcgc in an ensemble of networks .specifically , we consider the case of a multilayer network with poisson layers with the same average degree . in order to draw the percolation diagram for single instances of initial damage as a function of the probability of damage , we associate each replica node with a random variable drawn from a uniform distribution and we set s_i=\ { ccc 1 & & r_ip + 0 & & r_i > p .[ rs ] fig .[ fig : mp_single ] displays the comparison between the two approaches , showing an almost perfect agreement between them .poisson layers poisson with average degree and no link overlap , and the message - passing results over single network realization and given configuration damage .we consider different values of the average degree .points indicate results of numerical simulations : blue circles ( ) , red squares ( ) , green diamonds ( ) , and orange triangles ( ) .message - passing predictions are denoted by lines with the same color scheme used for numerical simulations .simulations results are performed on a single instance of a multilayer network with nodes . ] additionally in fig .[ fig : single_average ] , we compare simulation results averaged over several realizations of the initial damage and several instances of the multilayer network model with the theoretical predictions given by the numerical solution of eqs .( [ su1])-([p1 ] ) , obtaining a very good agreement . , but for averages over instances of the multilayer network model and configurations of random initial damage . ]in isolated networks , two nodes can be either connected or not connected . in multilayer networks instead , the complexity of the structure greatly increases as the ways in which a generic pair of nodes can be connected is given by possibilities .a very convenient way of accounting for all the possibilities with a compact notation is to use the notion of multilink among pairs of nodes .multilinks },m^{[2]},\ldots , m^{[m]}\right) ] , describe any of the possible patterns of connections between pairs of nodes in a multilayer network with layers .specifically , }=1 ] indicates that the connection in layer does not exists .in particular , we can say that , in a multilayer network with layers , two nodes and are connected by the multilink _ ij=(a_ij^[1],a_ij^[2 ] , , a_ij^[m ] ) . 
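the single-instance damage configuration used above can be drawn exactly as described: each replica receives an independent uniform random number and stays undamaged when that number does not exceed p. combined with the rmcgc sketch given earlier, this reproduces the single-realization curves; the helper below is illustrative only.

```python
import numpy as np

def random_damage(N, M, p, seed=2):
    """Initial damage configuration: replica (i, alpha) is undamaged (s = 1)
    when an independent uniform r <= p; the returned dict marks damaged replicas."""
    rng = np.random.default_rng(seed)
    r = rng.random((N, M))
    return {(i, a): bool(r[i, a] > p) for i in range(N) for a in range(M)}
```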
in order to distinguish the case in which two nodes are not connected in any layer with the case in which in at least one layer the nodes are connected , we distinguish between the trivial multilink and the nontrivial multilinks .the trivial multilink indicates the absence of any sort of link between the two nodes .using the concept of multilinks , one can define multiadjacency matrices whose element indicates whether ( ) or not a node is connected to node by a multilink .the matrix elements of the multiadjacency matrix are given by a_ij^=_=1^m(m^[],a_ij^ [ ] ) .using multiadjacency matrices , it is straightforward to define multidegrees .the multidegree of node indicated as is the sum of rows ( or columns ) of the multiadjacency matrix , i.e. , k_i^=_j a_ij^ , and indicates how many multilinks are incident to node . using a multidegree sequence , it is possible to build multilayer network ensembles that generalize the configuration model . this way , overlap of links is fully preserved by the randomization of the multilayer network .these ensembles are specified by the probability attributed to every multilayer network of the ensembles , where is given by ( ) = _ i=1^n _ ( k_i^,_j=1^n a_ij^ ) , with normalization constant equal to the number of multilayer networks with given multidegree sequence .our goal here is to generalize the message - passing algorithm already given by eqs .( [ m1 ] ) , ( [ v ] ) , ( [ s1 ] ) , and ( [ s ] ) for a generic single instance of a multilayer network and single realization of initial damage to the cases of ( i ) random multilayer networks with given multidegree sequence and/or ( ii ) random realizations of the initial damage . the extensions for both cases has been already considered for the case of multilayer networks without link overlap . in presence of link overlap , however , the approach is much more cumbersome . for two nodes and in fact , the messages } ] are explicitly dependent on the state of all replicas of node .this state is indicated by the variables where specifies whether the replica node is initially damaged or not . as a consequence of this property , when averaging over random realizations of initial damage , message - passing equations are written in terms of the messages explicitly accounting for the probability that node is sending to node the set of messages },n_{i\to j}^{[2]}\ldots n_{i\to j}^{[\alpha ] } , \ldots n_{i\to j}^{[m]}) ] from neighbors different from and no positive message is reaching node in the layer where }=1 ] .therefore , messages depend only on & = & _ = 1^m m^ [ ] , + & = & _ = 1^m n^ [ ] , + & = & _ = 1^m s_jm^ [ ] . ] ( }=0 ] . in this multilayer network , each pair of nodes and is connected by a multilink _ ij=(a_ij^[1],a_ij^[2 ] , a_ij^ [ ] , , a_ij^[m ] ) .any two nodes and are connected by a nontrivial multilink is implying that at least one link between the two nodes is present across the layers .we assume that the initial damage configuration is known and that it is given by the set of variables where indicates if a replica is initially damaged ( ) or not ( ) .the message passing algorithm given in sec .iii of the main text allows us to determine for any given initial damage configuration , if any replica node is in the rmcgc ( ) or not ( ) as long as the multilayer network is locally tree - like . 
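the multilink and multidegree notions introduced here are easy to compute directly from the layer adjacency matrices; the sketch below, with illustrative names, returns for every node the number of incident multilinks of each nontrivial type.

```python
import numpy as np
from collections import Counter

def multidegrees(adj_layers):
    """adj_layers: list of M NxN 0/1 adjacency matrices (numpy arrays).
    Returns, for each node i, a Counter mapping the multilink vector
    (a_ij^[1], ..., a_ij^[M]) to the multidegree k_i of that multilink.
    The trivial multilink (0,...,0) is not counted."""
    A = np.stack(adj_layers, axis=0)          # shape (M, N, N)
    M, N, _ = A.shape
    degrees = []
    for i in range(N):
        c = Counter()
        for j in range(N):
            if j == i:
                continue
            m = tuple(int(A[a, i, j]) for a in range(M))
            if any(m):                         # skip the trivial multilink
                c[m] += 1
        degrees.append(c)
    return degrees
```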
specifically the variables are determined in terms the set of messages _ ij=(n_ij^[1],n_ij^[2], ,n_ij^[], n_ij^[m ] ) going from any node to any node joined by a nontrivial multilink .the messages are determined according to the following recursive equation n_ij^[]=(v_ij,2)a^[]_ijs_js_i , [ rm1 ] where indicates the set of nodes that are neighbor of node in layer and where is the step function with values for and for . herethe variable indicates in how many layers node is connected to the rmcgc assuming that node also belongs to the rmcgc , v_i j&=&_=1^m\{s_i+s_is_ja^[]_ij_n_(i)j(1-n^[]_i)}. [ rv ] finally the variables are expressed in terms of the messages and are given by _i&=&s_i\{1-_ } .[ rs1 ] in many situations , however , the initial configuration of the damaged is not known , and instead it is only known the probability distribution of the initial damage configuration . in this case , one aims to know the probability that a replica node is in the rmcgc for a random configuration of the initial damage .the value of , on a locally treelike multilayer network is determined by a distinct message passing algorithm that can be derived from the message passing algorithm valid for single realization of the initial damage , by performing a suitable average of the messages .particular care should be taken when one aims to perform this average .in fact depends on all the messages } ] and , ^,_ij&=&_=1^m^n^[]_=1^m^(1-n^[])m^[]s_j , [ ra1 ] * if }=1 ] and , ^,_ij=1-_^,_ij , where is determined in terms of the messages as _ ij=_^_ij,_ij .finally a replica node is in the rmcgc ( ) or not ( ) depending on the messages it receives from its neighbors , i.e. _ i&=&s_i\{1-_ } .by averaging eqs . we can derive the message passing algorithm predicting the probability that a replica node is in the rmcgc when the initial damage is randomly drawn for the probability distribution . assuming that each replica node is damaged independently the probability distribution is given by ( \{s_i})=_i=1^n_=1^m p^s_i(1-p)^1-s_i .[ rps ] the message passing algorithm valid for a random distribution of the initial disorder , is written in terms of the messages .the messages take real values between zero and one .they indicate the probability that node send to node a message given that node is connected to node by a multilink and that node has initial damage configuration , i.e. .let us indicate with the probability of a local initial damage configuration given by ( ) = _= 1^m p^s_(1-p)^1-s _ [ rlps ] and let us indicate with the vector = ( r^[1],r^[2 ] , , r^ [ ] , , r^[m])of elements }=0,1 ] and , ^,_ij()&= & _ _ i|_s_i>1(_i)_|r^[]=0 ( n^[]+(1-n^[])m^[]s_)=0 c^,(_i , , ) + & & , [ rb1 ] where ^,(_i,,)=_=1^m , [ rc ] * if }=1 ] and , ^,_ij()&= & _ _ i|_ s_i>1(_i)s_is_a_ij^[]\ { 1-_n(i)j ( 1-_|(n^)^[]>0_i^_i(_i ) ) .+ & & -_|r^[]=0_(1-s_i)^(1-r^)(s_i)^r^_n(i)j(1-_|_(n^)^r^>0_i^_i(_i ) ) + & & + ._|r^[]=0_(1-s_i)^(1-r^)(s_i)^r^_n(i)j(1-_|_(n^)^>0_i^_i(_i ) ) } , [ rb2 ] * if }=0 ] s^,()&= & _\{k^}p(\{k^})__i|_s_i>1(_i)_|r^[]=0 ( n^[]+(1-n^[])m^[]s_)=0 c^,(_i , , ) + & & , where ^,(_i,,)=_=1^m , * if }=1 ] s^,()&= & _ \{k^}p(\{k^ } ) _ _ i|_ s_i>1(_i)s_is_a_ij^[]\ { 1-_ ( 1-_|(n^)^[]>0s^(_i))^k^-(, ) . 
+ & & -_|r^[]=0_(1-s_i)^(1-r^)(s_i)^r^_(1-_|_(n^)^r^>0s^(_i))^k^-(, ) + & & + ._|r^[]=0_(1-s_i)^(1-r^)(s_i)^r^_(1-_|_(n^)^>0s^(_i))^k^-(, ) } , * if }=0 ] and },a_{ij}^{[2]}\ldots , a_{ij}^{[m]}) ] we can use the following identity & & _= 1^m ( y_+z_)^p^[]=_|p^[]>0(y_+z _ ) = _|r^[]=0 p^[]=0 _= 1^m , [ r ] where in the last expression we perform a sum over all the -dimensional vectors = ( r^[1],r^[2 ] , , r^ [ ] , , r^[m] ) , with }=0,1 ] and }=0 ] . using this expansion for the products in eq . we obtain ^,_ij&= & _ |r^[]=0 ( n^[]+(1-n^[])m^[]s_j)=0c^,(_i,,)_n(i)j , [ rd2 ] where is given by eq . . by using the fact that the messages take only values zero or one , that that out of all the messages from node to node only one is actually equal to one , and all the others are zero, we can rewrite eq . as ^,_ij=_ |r^[]=0 ( n^[]+(1-n^[])m^[]s_j)=0c^,(_i,,)_n(i)j(1-_|_(n^)^[]r^[]>0_i^_i).finally , averaging over the probability distribution of the configuration of the initial damage of node , in the locally treelike approximation we obtain for the messages the eq . that we rewrite here for convenience , ^,_ij(_j)&= & _ _i|_s_i>1p(_i)_|r^[]=0 ( n^[]+(1-n^[])m^[]s_j)=0c^,(_i , , ) + & & _n(i)j(1-_|_(n^)^[]r^[]>0_i^_i(_i ) ) .
in the standard model of percolation on multilayer networks, a node is functioning only if its copies in all layers are simultaneously functioning. according to this model, a multilayer network becomes more and more fragile as the number of layers increases. in this respect, the addition of a new layer of interdependent nodes to a preexisting multilayer network can never improve its robustness. whereas such a model seems appropriate for understanding the effect of interdependencies in the simplest scenario of a network composed of only two layers, it may not be reasonable for real systems in which multiple network layers interact with one another. it seems in fact unrealistic that a real system, such as a living organism, evolved through the development of multiple layers of interactions towards a fragile structure. in this paper, we introduce a percolation model in which a node is functional provided it is functioning in at least two of the layers of the network. the model reduces to the standard percolation model for multilayer networks when the number of layers equals two. for a larger number of layers, however, the model describes a scenario where the addition of new layers boosts the robustness of the system by creating redundant interdependencies among layers. we prove this fact through the development of a message-passing theory able to characterize the model on both synthetic and real-world multilayer networks.
one important question in multi - objective evolutionary algorithms ( moeas ) is how the structure of the interactions between the variables of the problem influences the different objectives and impacts in the characteristics of the pareto front ( e.g. discontinuities , clustered structure , etc . ) .the analysis of interactions is also important because there is a class of moeas that explicitly capture and represent these interactions to make a more efficient search . in this paper, we approach this important question by combining the use of a multi - objective fitness landscape model with the definition of probability distributions on the search space and different factorized approximations to these joint distributions .our work follows a similar methodology to the one used in to investigate the relationship between additively decomposable single - objective functions and the performance of estimation of distribution algorithms ( edas ) .landscapes models are very useful to understand the behavior of optimizers under different hypothesis about the complexity of the fitness function .perhaps the best known example of such models is the nk fitness landscape , a parametrized model of a fitness landscape that allows to explore the way in which the neighborhood structure and the strength of interactions between neighboring variables determine the ruggedness of the landscape .one relevant aspect of the nk - fitness landscape is its simplicity and wide usability across disciplines from diverse domains .another recently introduced landscape model is the nm - landscape .it can be seen as a generalization of the nk - landscape .this model has a number of attributes that makes it particularly suitable to control the strength of the interactions between subsets of variables of different size .in addition , it is not restricted to binary variables and allows the definition of functions on any arity . in ,the nm - landscape was extended to multi - objective problems and used to study the influence of the parameters in the characteristics of the mop .we build on the work presented in to propose the use of the multi - objective nm - landscape ( mnm - landscape ) for investigating how the patterns of interactions in the landscape model influence the shape of the pareto front .we go one step further and propose the use of factorized approximations computed from the landscapes to approximate the pareto fronts .we identify the conditions in which these approximations can be accurate .let denote a vector of discrete variables .we will use to denote an assignment to the variables . will denote a set of indices in , and ( respectively ) a subset of the variables of ( respectively ) determined by the indices in .a fitness landscape can be defined for features using a general parametric interaction model of the form : where is the number of terms , and each of the coefficients . for , , where is a set of indices of the features in the term , andthe length is the order of the interaction . by convention , it is assumed that when , . also by convention, we assume that the model is defined for binary variables represented as .the nm models comprise the set of all general interactions models specified by equation [ eq : intmodel ] , with the following constraints : * all coefficients are non - negative . *each feature value ranges from negative to positive values . 
*the absolute value of the lower bound of the range is lower or equal than the upper bound of the range of .one key element of the model is how the parameters of the interactions are generated . in , each generated from , where is a random number drawn from a gaussian distribution with mean and standard deviation .increasing determines smaller range and increasing clumping of fitness values . in this paper, we use the same procedure to generate the parameters .we will focus on nm - models defined on the binary alphabet . in this case, the nm - landscape has a global maximum that is reached at .the multi - objective nm - landscape model ( mnm - landscape ) is defined as a vector function mapping binary vectors of solutions into real numbers , where is the number of variables , is the number of objectives , is the -th objective function , and . is a set of integers where is the maximum order of the interaction in the -th landscape .each is defined similarly to equation as : where is the number of terms in objective , and each of the coefficients . for , , where is a set of indices of the features in the term , andthe length is the order of the interaction .notice that the mnm fitness landscape model allows that each objective may have a different maximum order of interactions .the mnm - landscape is inspired by previous extensions of the nk fitness landscape model to multi - objective functions .one of our goals is to use the mnm - landscape to investigate the effect that the combination of objectives with different structures of interactions has in the characteristics of the mop . without lack of generality, we will focus on bi - objective mnm - landscapes ( i.e. , ) and will establish some connections between the objectives . in this sectionwe explain how the constrained mnm - landscapes are designed .as previously explained , the nm - model is defined for .however , we will use a representation in which . the following transformation maps the desired representation to the one used by the mnm - landscape .given the analysis presented in , it also guarantees that the pareto set will comprise at least two points , respectively reached at and for objectives and . where and are the new variables obtained after the corresponding transformation have been applied to .when the complete space of solutions is evaluated , we add two normalization steps to be able to compare landscapes with different orders of interactions . in the first normalization step, is divided by the number of the interaction terms ( ) . in the second step ,we re - normalize the fitness values to the interval $ ] , this is done by subtracting the minimum fitness value among all the solutions , and dividing by the maximum fitness value minus the minimum fitness value .another constraint we set in some of the experiments is that , if then , for all .this means that all interactions contained in are also contained in , but will also contain higher order interactions .starting from a single mnm - landscape of order we will generate all pairs of models , where .the coefficients for and will be set as in .the idea of considering these pairs of objectives is to evaluate what is the influence in the shape of the pareto front , and other characteristics of the mops , of objectives that have different order of interactions between their variables .the relationship between the fitness function and the variables dependencies that arise in the selected solutions can be modeled using the boltzmann probability distribution . 
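a hedged sketch of generating and evaluating a small bi-objective mnm-landscape over the full binary search space follows. two assumptions are made explicit: coefficients are drawn as |y|**sigma with y standard gaussian, which matches the qualitative description (larger sigma gives a smaller range and more clumping) but may differ from the exact rule in the cited work, and the two objectives are drawn independently here rather than nested and sign-transformed as in the constrained design of the paper. all names are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_nm_landscape(n, max_order, n_terms, sigma):
    """Draw a random NM-landscape as a list of (coefficient, index_subset) terms.
    Coefficients are taken as |y|**sigma with y standard normal (an assumption
    consistent with the qualitative description; the exact rule may differ)."""
    terms = []
    for _ in range(n_terms):
        order = int(rng.integers(1, max_order + 1))
        subset = tuple(sorted(rng.choice(n, size=order, replace=False)))
        coeff = abs(rng.normal()) ** sigma
        terms.append((coeff, subset))
    return terms

def evaluate(terms, x):
    """Evaluate f(x) = sum_i c_i * prod_{j in V_i} x_j for x in {-1,+1}^n."""
    return sum(c * np.prod([x[j] for j in subset]) for c, subset in terms)

# enumerate the whole search space of a small bi-objective instance with
# maximum interaction orders 1 and 3, mirroring the pairs used in the paper
n = 8
f1 = random_nm_landscape(n, max_order=1, n_terms=n, sigma=3)
f2 = random_nm_landscape(n, max_order=3, n_terms=2 * n, sigma=3)
space = [np.array(x) for x in itertools.product((-1, 1), repeat=n)]
objs = np.array([[evaluate(f1, x), evaluate(f2, x)] for x in space])
```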
the boltzmann probability distribution is defined as where is a given objective function and is the system temperature that can be used as a parameter to smooth the probabilities .the key point about is that it assigns a higher probability to solutions with better fitness . the solutions with the highest probability correspond to the optima .starting from the complete enumeration of the search space , and using as the fitness function the objectives of an mnm - landscape , we associate to each possible solution of the search space probability values according to the corresponding boltzmann probability distributions .there is one probability value for each objective and in this paper we use the same temperature parameter for all the distributions . using the boltzmann distributionwe can investigate how potential regularities of the fitness function are translated into statistical properties of the distribution .this question has been investigated for single - objective functions in different contexts but we have not found report on similar analysis for mop .one relevant result in single - objective problems is that if the objective function is additively decomposable in a set of subfunctions defined on subsets of variables ( definition sets ) , and the definition sets satisfy certain constraints , then it is possible to factorize the associated boltzmann distribution into a product of marginal distributions .factorizations allow problem decomposition and are at the core of edas .in our experiments we investigate the following issues : * how the parameters of mnm model determine the shape of the pareto front ? * how is the strength of the interactions between variables influenced by the parameters of the model ? * under which conditions can factorized approximations of the boltzmann probability reproduce the shape of the pareto front ?algorithm [ alg : approach ] describes the steps of our simulations .we use a reference nm landscape ( , ) and create a bi - objective mnm model from it using different combinations of parameters and .simulation approach [ alg : approach ] define the mmn model using its parameters . for each objective: determine the pareto front using the objective values .determine the approximation of the pareto front using the univariate factorizations of all the objectives .we investigate how the parameters of mnm model determine the shape of the pareto .figure [ fig : mnm_mod ] ( column 1 ) shows the evaluation of the solutions that are part of the search space for and different values of and . from row 1 to row 4 , the figures respectively show the objective values of the mnm landscape for different combination of its parameters : ( ) , ( ) , ( ) , ( ) .the influence of can be seen by comparing the figure in row 1 with the figure in row 2 , and doing a similar comparison with figures in row 3 and row 4 .increasing from to produces a clustering of the points in the objective space .one reason for this behavior is that several genotypes will map to the same objective values .the clustering effect in the space of objectives is a direct result of the clumpiness effect described for the nm - model when is increased .the effect of the maximum order of the interactions can be seen by comparing the figure in row 1 with the figure in row 3 , and the figures in rows 2 and 4 . for , adding interactions transforms the shape of the pareto front from a line to a boomerang - like shape . for ,the points are transformed into a set of stripes that seem to be parallel to each other . 
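continuing the sketch above, the boltzmann probabilities over the enumerated search space and their univariate factorizations can be computed as follows; the temperature T and the max-shift used for numerical stability are implementation choices, not from the original.

```python
import numpy as np

def boltzmann(objective_values, T=1.0):
    """Boltzmann probabilities p(x) proportional to exp(f(x)/T) over an
    enumerated search space (objective_values[k] = f of the k-th solution)."""
    w = np.exp((objective_values - objective_values.max()) / T)  # shift for stability
    return w / w.sum()

def univariate_factorization(space, probs):
    """Product of univariate marginals q(x) = prod_j p_j(x_j), computed from
    the full distribution `probs` over the enumerated solutions in `space`."""
    X = np.array(space)                      # shape (2**n, n), entries in {-1,+1}
    p_plus = probs @ (X == 1)                # marginal P(x_j = +1) for each j
    q = np.prod(np.where(X == 1, p_plus, 1.0 - p_plus), axis=1)
    return q

# probabilities and their univariate approximations for both objectives
p1, p2 = boltzmann(objs[:, 0]), boltzmann(objs[:, 1])
q1, q2 = univariate_factorization(space, p1), univariate_factorization(space, p2)
```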
in both cases ,the changes due to the increase in the order of the interactions are remarkable . in the next experiments , and in order to emphasize the flexibility of the mnm - landscape, we allow the two objectives of the same mnm - landscape to have different maximum order of interactions .figure [ fig : conpf_sigma36 ] shows the objective values and pareto fronts of the mnm model for for the situation in which has a maximum order of interactions and has a maximum order of interactions .it can be observed that the shapes of the fronts are less regular than in the previous experiments but some regularities are kept .figure [ fig : mnm_mod ] ( column 2 ) shows the boltzmann probabilities associated to each mnm - landscape model described in column 1 , i.e. , .the boltzmann distribution modifies the shape of the objective space but it does not modify the solutions that belong to the pareto set .this is so because the dominance relationships between the points are preserved by the boltzmann distribution .however , the boltzmann distribution `` bends '' the original objective space . this effect can be clearly appreciated in rows 1 and row 4 . in the first case , the line is transformed into a curve . in the second case the parallel lines stripes that appear in the original objective space change direction .the boltzmann distribution can be used as an effective way to modify the shape of the pareto while keeping the dominance relationships .this can be convenient to modify the spacing between pareto - optimal solutions , for more informative visualization of the objective space , and for investigating how changes in the strength of selection could be manifested in the shape of the pareto front approximations .figure [ fig : mnm_mod ] ( column 3 ) shows the approximations of the boltzmann distributions for the two objectives , each approximation computed using the corresponding product of the univariate marginals , i.e. , . for ,the approximations are identical to the boltzmann distribution .this is because the boltzmann distribution can be exactly factorized in the product of its univariate marginal distributions .therefore , as a straightforward extension of the factorization theorems available for the single - objective additive functions , we hypothesize that if the structure of all objectives is decomposable and the decompositions satisfy the running intersection property , then _ the associated factorized distributions will preserve the shape of the pareto front_. however , the univariate approximation does not always respect the dominance relationships and this fact provokes changes in the composition and shape of the pareto front .this can be appreciated in rows 3 and 4 , where the univariate approximation clearly departs from the boltzmann distribution . 
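the pareto fronts compared in the simulations can be extracted by straightforward non-dominated filtering over the enumerated solutions; the quadratic-time sketch below assumes maximization of all objectives and reuses the arrays from the previous sketches.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points for a maximization problem;
    points has shape (K, m). O(K^2), adequate for small enumerated spaces."""
    K = len(points)
    nondominated = np.ones(K, dtype=bool)
    for k in range(K):
        if not nondominated[k]:
            continue
        dominates_k = np.all(points >= points[k], axis=1) & np.any(points > points[k], axis=1)
        if dominates_k.any():
            nondominated[k] = False
    return np.flatnonzero(nondominated)

front_objs  = pareto_front(objs)                       # front in the objective values
front_boltz = pareto_front(np.column_stack((p1, p2)))  # front under the Boltzmann map
front_univ  = pareto_front(np.column_stack((q1, q2)))  # front under the univariate approximation
```

because the boltzmann map is strictly increasing in each objective, front_boltz coincides with front_objs, whereas front_univ may differ, which is exactly the effect discussed above.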
still , as shown in row 4, some characteristics of the original function , as the discontinuity in the space of objectives , can hold for the univariate factorization .an open question is under which conditions will the univariate approximation keep the dominance relationship between the solutions .one conjecture is that if the factorized approximation keeps the ranking of the original functions for all the objectives then the dominance relationship will be kept , but this condition may not be necessary .the answer to this question is beyond the scope of this paper .nevertheless , we include the discussion to emphasize why explicit modeling of interactions by means of the mnm landscape together with the use of the boltzmann distribution is relevant for the study of mops . by computing bivariate and univariate marginals from the boltzmann distribution and computing the mutual information for every pair of variables we can assess which are the strongest pair - wise interactions . in the mutual information.,width=336 ] in this sectionwe analyze how the maximum order of the interactions and the parameter affect the dependencies in the boltzmann distribution . a reference nm model with ( )was generated and by varying the parameters and ) we generated different mnm landscapes .the results presented in this section are the average of models for each combination of parameters .we focus on the analysis of the dependencies in only one of the objectives .figure [ fig : mutinf ] shows the values of the mutual information for the combinations of the maximum order of the interactions and . when the maximum order of the interactions is , the approximation given by the univariate factorization is exact , therefore , the mutual information between the variables are for all values of .the mutual information is maximized when the maximum number of interactions is . for these mnm landscapeswe would expect the univariate approximation to considerably distort the shape of the pareto front , as shown in figure [ fig : mnm_mod ] , column 3 , rows 3 and 4 .figure [ fig : mutinf ] shows that can be used to tune the strength of the interactions between the variables .as increases the mutual information also increases .this fact would allow us to define objectives that have interactions of the same maximum order but with different strength .we summarize some of the findings from the experiments : * univariate factorizations are poor approximations for mnm models of maximum order two and higher . *the mutual information between the variables of the nm landscape is maximized for problems with maximum order of interaction . *the parameter can be used for changing the shape of the pareto fronts and increasing the strength of the interactions in the objectives . in particular, there is a direct effect of in the discontinuity of the pareto front and the emergence of clusters .we have shown how the mnm landscape can be used to investigate the effect that interactions between the variables have in the shapes of the fronts and in the emergence of dependencies between the variables of the problem .we have shown that the boltzmann distribution can be used in conjunction with the mnm model to investigate how interactions are translated into dependencies .a limitation of the boltzmann distribution is that is can be computed exactly only for problems of limited size .the idea of using the boltzmann distribution to modify the pareto shape of the functions can be related to previous work by okabe et al . 
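the pairwise mutual information analysis described here can be sketched directly from the enumerated boltzmann distribution; the brute-force implementation below is only meant to illustrate the computation and reuses `space` and `p2` from the earlier sketches.

```python
import numpy as np

def pairwise_mutual_information(space, probs):
    """Mutual information I(X_i; X_j) for every pair of variables, computed from
    the bivariate and univariate marginals of the distribution `probs` over `space`."""
    X = np.array(space)
    n = X.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            total = 0.0
            for a in (-1, 1):
                for b in (-1, 1):
                    p_ab = probs[(X[:, i] == a) & (X[:, j] == b)].sum()
                    p_a = probs[X[:, i] == a].sum()
                    p_b = probs[X[:, j] == b].sum()
                    if p_ab > 0:
                        total += p_ab * np.log(p_ab / (p_a * p_b))
                    # p_ab == 0 contributes nothing to the sum
            mi[i, j] = mi[j, i] = total
    return mi

# strongest pairwise interaction according to the Boltzmann distribution of f2
strongest = np.unravel_index(np.argmax(pairwise_mutual_information(space, p2)), (n, n))
```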
on the application of deformations , rotations , and shift operators to generate test functions with difficult pareto sets .however , by using the boltzmann distribution we explicitly relate the changes in the shape of the pareto to the relationship interactions - dependencies determined by the boltzmann distribution .this can be considered as an alternative path to other approaches to creation of benchmarks for mops , like the combination of single - objectives functions of known difficulty or the maximization of problem difficulty by applying direct optimization approaches .our results can be useful for the conception and validation of moeas that use probabilistic modeling . in this direction, we have advanced the idea that the effectiveness of a factorized approximation in the context of mops may be related to the way it preserves the original dominance relationships between solutions .we have shown that the boltzmann distribution changes the shape of the fronts but does not change which solutions belong to the pareto front .this work has been partially supported by it-609 - 13 program ( basque government ) and the tin2013 - 41272p ( spanish ministry of science and innovation ) project .r. santana acknowledges support from the program science without borders no . : 400125/2014 - 5 ) .h. aguirre and k. tanaka .insights and properties of multiobjective mnk - landscapes . in _ proceedings of the 2004 congress on evolutionary computation cec-2004 _ , pages 196203 , portland , oregon , 2004 .ieee press .d. brockhoff , t .- d .tran , and n. hansen .benchmarking numerical multiobjective optimizers revisited . in _ proceedings of the companion publication of the 2015 on genetic and evolutionary computation conference _ , pages 639646 , madrid , spain , 2015 . m. lpez - ibnez , a. liefooghe , and s. verel .local optimal sets and bounded archiving on multi - objective nk - landscapes with correlated objectives . in _parallel problem solving from nature ppsn xiii _ , pages 621630 .springer , 2014 .l. marti , j. garcia , a. berlanga , c. a. coello , and j. m. molina . on current model - building methods for multi - objective estimation of distribution algorithms : shortcommings and directions for improvement .technical report giaa2010e001 , department of informatics of the universidad carlos iii de madrid , madrid , spain , 2010 .h. mhlenbein and t. mahnig .evolutionary algorithms and the boltzmann distribution . in k. a. dejong , r. poli , and j. rowe , editors , _ foundation of genetic algorithms 7 _ , pages 133150 .morgan kaufmann , 2002 .m. pelikan , k. sastry , and d. e. goldberg .multiobjective estimation of distribution algorithms . in m. pelikan , k. sastry , and e. cant - paz , editors , _ scalable optimization via probabilistic modeling : from algorithms to applications _ , studies in computational intelligence , pages 223248 .springer , 2006 .r. santana , c. bielza , and p. larraaga .conductance interaction identification by means of boltzmann distribution and mutual information analysis in conductance - based neuron models ., 13(suppl 1):p100 , 2012 .r. santana , p. larraaga , and j. a. lozano .interactions and dependencies in estimation of distribution algorithms . in _ proceedings of the 2005 congress on evolutionary computation cec-2005 _ ,pages 14181425 , edinburgh , u.k . , 2005 .ieee press .r. santana , r. b. mcdonald , and h. g. katzgraber . a probabilistic evolutionary optimization approach to compute quasiparticle braids . 
in _ proceedings of the 10th international conference simulated evolution and learning ( seal-2014 ) _ , pages 1324 .springer , 2014 .r. santana , a. mendiburu , and j. a. lozano .evolving mnk - landscapes with structural constraints . in _ proceedings of the ieee congress on evolutionary computation cec 2015 _ ,pages 13641371 , sendai , japan , 2015 .ieee press .r. santana , a. mendiburu , and j. a. lozano .multi - objective nm - landscapes . in _ proceedings of the companion publication of the 2015 on genetic and evolutionary computation conference _ , pages 14771478 , madrid , spain , 2015 .
nm-landscapes have recently been introduced as a class of tunable rugged fitness models. they are a subset of the general interaction models in which all interactions are of order less than or equal to a given maximum. the boltzmann distribution has been extensively applied in single-objective evolutionary algorithms to implement selection and to study the theoretical properties of model-building algorithms. in this paper we propose combining the multi-objective nm-landscape model with the boltzmann distribution to obtain pareto-front approximations. we investigate the joint effect of the parameters of the nm-landscapes and of the probabilistic factorizations on the shape of the pareto-front approximations. + * keywords *: multi-objective optimization, nm-landscape, factorizations, boltzmann distribution
we consider the problem of an agent seeking to optimally invest in the presence of proportional transaction costs .the agent can invest in a stock , modeled as a geometric brownian motion with drift and volatility , and in a money market with constant interest rate .the agent pays proportional transaction cost for trading stocks , with the goal of optimizing the total utility of wealth at the final time , when she would be required to close out her stock position and pay the resulting transaction costs .the utility function is given by , where .we refer to this optimized utility of wealth as the value function . in this paper , we compute the asymptotic expansion of the value function up to and including the order we also find a simple _ nearly - optimal " _ trading policy that , if followed , produces an expected utility of the final wealth that asymptotically matches the value function at the order of in section [ sec : setup ] of this paper we define our model , state the hjb equation , and state merton s result for the case of zero transaction costs . under the smoothness assumption of the value function , in section [ sec : heuristic ] we provide a heuristic expansion of the value function in powers of . in the next section we use this heuristicexpansion in order to build smooth functions , which we later prove to be upper and lower bounds on the value function .these functions also turn out to be sub- and supersolutions for the hamilton - jacobi - bellman ( hjb ) equation .it is then possible to apply the comparison principle for viscosity solutions to conclude that the value function , which is a viscosity solution for the hjb equation , has to be between the super- and subsolutions .however , this method is only applicable for . therefore we use a verification argument from stochastic calculus . in the final sectionwe construct a simple policy and in theorem [ thm : comp ] prove that are indeed upper and lower bounds on the value function . as a corollary we also get that the expected utility of the final wealth from the constructed policy is order close to the value function , which makes this policy a _ nearly - optimal " _ policy . in the case of zero transaction cost, the agent s optimal policy is to keep a constant proportion of wealth , which we call the _ merton proportion _ , invested in stock .see pham , or alternately the original paper of merton where a solution to a similar investment and consumption problem with infinite time horizon appears . when , the optimal policy is to trade as soon as the position is sufficiently far away from the _merton proportion_. more specifically , the agent s optimal policy is to maintain her position inside a region that we refer to as the _ no - trade " _( nt ) region .if the agent s position is initially outside the nt region , she should immediately sell or buy stock in order to move to its boundary .the agent then will trade only when her position is on the boundary of the nt region , and only as much as necessary to keep it from exiting the nt region , while no trading occurs in the interior of the region ; see davis , panas , & zariphopoulou .not surprisingly , the width of the nt region depends on time , which makes it difficult to pinpoint exactly the optimal policy .moreover , the nt region degenerates when the _ merton proportion _ , i.e. 
it is optimally to be fully invested in stock , since in this case , the agent only needs to trade at the initial time to buy stock , and the final time to liquidate his position .we will not consider this case .the approach of this paper , is to expand the value function into a power series in powers of .this approach , which leads to explicit results , was pioneered by janeek & shreve in solving the infinite horizon investment and consumption problem .many other papers have used asymptotical expansion including goodman & ostrov , who showed how the first term in asymptotical expansion of the value function relates to a free boundary problem that minimizes a cost function .they also showed that the quasi - steady state density of the portfolio is constant in the nt region .janeek & shreve used it to solve a problem of optimal investment and consumption with one futures contract , and bichuch applied it to the case of two correlated futures contracts .dewynne , howison , law & lee heuristsically found a time independent policy in a finite horizon problem with multiple correlated stocks . under the assumption that _ the principle of smooth fit _ holds and that the boundaries are symmetrical around the merton proportion , they heuristically computed the asymptotic location of the boundaries of the nt region .we prove this result rigorously for a problem with one risky asset , and quantify the optimality of the proposed policy .numerical results provided by gennotte & jung and liu & loewenstein show that the optimal boundaries are not symmetrical around the merton proportion and that they are complicated functions of time .for instance , dai & yi find a time , of order close to final time , after which the agent would no longer buy stock .the intuitive explanation is that it is wasteful spending to buy extra stocks , standing very close to final time , only to sell them all a moment later , without realizing virtually any profit , since the agent held them for very little time .our goal instead is to find a simple _ nearly - optimal " _ policy .we rely on the results obtained by dai & yi , who use a pde approach to problem to show a connection between the optimal investment problem and a double obstacle problem . using the theory of the obstacle problem, they show that the value function is smooth ( see theorem 5.1 for exact formulation ) .they also characterize the behaviors of the free boundaries .transaction costs were introduced into merton s model by magill & constantinides .their analysis of the infinite time horizon investment and consumption problem , despite being heuristic , gives an insight into the optimal strategy and the existence of the nt region .a more rigorous analysis of the same infinite time horizon problem was given by davis & norman , who under certain assumptions showed that the value function is smooth .the viscosity solution approach to that infinite time horizon problem was pioneered by shreve & soner , who significantly weakened the assumptions of davis & norman .an alternative to the dynamic programming approach above is to use the martingale duality approach .cvitani & karatzas in a finite time horizon investment problem using duality proved the existence of an optimal strategy , under the assumption that a dual minimization problem admits a solution .later cvitani & wang proved the existence of a solution to the dual problem . 
in a more general framework with multiple assetskabanov proved the existence of an optimal strategy , also assuming the existence of a minimizer to the dual problem .subsequent existence results under more relaxed assumptions were proved in deelstra , pham & touzi and campi & owen .while the problem of optimal investment in the presence of transaction costs is important in its own right , it has further value in the study of contingent claim pricing .hodges and neuberger proposed to price an option so that a utility maximizer is indifferent between either having a certain initial capital for investment or else holding the option but having initial capital reduced by the price of the option .this produces both a price and a hedge , the latter being the difference in the optimal trading strategies in the problem without the option and the problem with the option .this utility - based option pricing is examined in , , , .a formal asymptotic analysis of such an approach appears in whalley & wilmott .they assume a power expansion for the value function and compute the leading terms of it for both the case of holding the option liability and the case without it .their proof corresponds to the heuristic derivation section in this paper .we believe this paper is a step in the direction of providing a rigorous proof to a corresponding result with power utility .the set - up of the model is similar to shreve & soner , only with finite time horizon .an agent is given an initial position of dollars in the money market and dollars in stock .the stock price is given by where and are positive constants and is a standard brownian motion on a filtered probability space .we assume a constant positive interest rate .the agent must choose a policy consisting of two adapted processes and that are nondecreasing and right - continuous with left limits , and . represents the cumulative dollar value of stock purchased up to time , while is the cumulative dollar value of stock sold .let denote the wealth invested in the money market and the wealth invested in stock , with , .the agent s position evolves as the constant appearing in these equations accounts for proportional transaction costs , which are paid from the money market account .[ remark : l.s.c ] from and it follows that is a lower semi - continuous function .define the _ solvency region _ the policy is _ admissible _ for the initial position , if starting from and given by ( [ eq : position1 ] ) , ( [ eq : position2 ] ) is in for all .since the agent may choose to immediately rebalance his position , we agree the initial time to be .we denote by the set of all such policies .we note that if and only if .we introduce the agent s _ utility function _ defined for all by for .( an analysis along the lines of this paper is also possible for , but we omit that in the interest of brevity . ) for convenience we agree to treat when here and in the rest of this paper . define the value function as the supremum of the utility of the final cash position , after the agent liquidates her stock holdings ,\quad ( t , x , y)\in \zeroclosedtclosed\times\overline{\sv}. \label{eq : v}\ ] ] for and we also define an auxiliary value function .\label{eq : v - beta}\ ] ] clearly for the rest of this paper we will concentrate on finding [ lemma : zero_boundary ] for , and , the only admissible policy is to jump immediately to the origin and remain there . in particular , , and when and when proof : the proof of this lemma is a modification of remark 2.1 in shreve & soner . 
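the controlled wealth dynamics introduced in this setup can be sketched in discrete time. the equations themselves are garbled in the extracted text, so the code below assumes one standard convention consistent with the liquidation term x + y - lam*|y|: buying dL dollars of stock costs (1+lam)*dL from the money market and selling dM dollars credits (1-lam)*dM. all names are illustrative.

```python
def step(x, y, dL, dM, dt, r, alpha, sigma, lam, dW):
    """One Euler step of the controlled wealth dynamics (sketch; the cost
    convention is an assumption, see the note above)."""
    x_new = x + r * x * dt - (1.0 + lam) * dL + (1.0 - lam) * dM
    y_new = y + alpha * y * dt + sigma * y * dW + dL - dM
    return x_new, y_new

def liquidation_utility(x, y, lam, p):
    """U(x + y - lam*|y|) with U(c) = c**p / p for c > 0, minus infinity otherwise."""
    c = x + y - lam * abs(y)
    return c ** p / p if c > 0 else float("-inf")
```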
the problem with is similar to the problem solved by merton .it can be easily seen that the optimal policy always keeps a wealth proportion in the stock , see pham .we call the _ merton proportion_. for , where note , that .this is clear in case of , and can not increase as increases .this is not the case in the infinite time horizon case , when a condition on the parameters is required to assure the finiteness of the value function .[ remark : beta ] fix any such that . forthe rest of this paper we will deal with that fixed and for convenience we will drop the subscript and refer to the value function simply as .it turns out that this is easier to find than , but because of there is no loss of generality in doing so .when the choice is suitable , however , the case when requires a strictly positive .the term can be understood as the optimal growth rate in the sense of akian , menaldi & sulem , and as the investor impatience .the following theorem is parallel to the one proved by davis , panas , & zariphopoulou and shreve & soner .[ thm : hjb ] the value function defined by is a viscosity solution of the following hjb equation ( [ eq : hjb ] ) on : where the second - order differential operator is given by together with the terminal condition power utility functions lead to _ homotheticity _ of the value function : for , this is because .consequently , the problem reduces to that of two variables . with , we define in other words , we make the change of variables , , which maps the solvency region onto the interval .then the counterpart to theorem [ thm : hjb ] for the reduced - variable function is the following lemma .it is parallel to proposition 8.1 from shreve & soner .[ lemma : reduction ] on , is a viscosity solution of the hjb equation where with the terminal condition . for future convenience for also define two first - order differential operators dai & yi show that the optimal policy can be described in terms of two functions which define the _ no - trade " _ region as a function of in this region is zero . here and in the rest of this paper , the derivative with respect to at or be understood as the right - sided or left - sided derivative respectively .moreover , if the second derivative with respect to does not exist , then the desired property should be satisfied with both one - sided second derivatives . if one should buy stock in order to bring this ratio to the boundary of the _ no - trade " _ region . in this region zero . if one should sell stock in order to bring this ratio to the other boundary of the _ no - trade " _ region . in this region is zero ; see davis , panas , & zariphopoulou and shreve & soner .[ sec : heuristic ] in this section we derive several terms of a power series expansion of the value function by a heuristic method .similar to shreve & soner and janeek & shreve , we will assume that the _ no - trade " _ region in the reduced variable form is and that .[ remark : order ] in the line above and for the rest of this paper , we have used the following standard notation : for a function defined on we say that if there exist a constant independent of such that for all small enough .we say that if is true for any .similar definition can be made if is just a function of and one additional variable . 
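the frictionless benchmark referred to here is the standard merton solution; the formulas below reproduce it under the assumption that the extra discounting beta of the auxiliary value function is set aside, so they should be read as a hedged sketch rather than the paper's exact expressions.

```python
import math

def merton_proportion(alpha, r, sigma, p):
    """Frictionless optimal fraction of wealth held in stock:
    theta = (alpha - r) / ((1 - p) * sigma**2)."""
    return (alpha - r) / ((1.0 - p) * sigma ** 2)

def frictionless_value(t, T, wealth, alpha, r, sigma, p):
    """Zero-transaction-cost value function (1/p) * wealth**p * exp(p*A*(T-t))
    with A = r + (alpha - r)**2 / (2*(1-p)*sigma**2); standard Merton result,
    stated here without the beta-discounting used in the auxiliary problem."""
    A = r + (alpha - r) ** 2 / (2.0 * (1.0 - p) * sigma ** 2)
    return wealth ** p / p * math.exp(p * A * (T - t))
```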
to be even more precise , in either case, we will allow to depend only on the constants , unless noted otherwise .we believe that has the form for all times except those very close " to .intuitively a change in strategy for times close to will affect the expected utility of the final wealth only at order , since buying an extra stock and holding it time only affect wealth at .however we can neglect this effect , since we are only looking to find the value function up to the order of .it is not hard to see that is continuous on and in this paragraph we will also assume that .it follows that .moreover , for we will assume that equations ( [ eq : hjbp1 ] ) and ( [ eq : hjbp3 ] ) are consequences of the directional derivative of being zero in the directions of transaction in the regions in which it is optimal to buy stock and to sell stock , respectively .these equations imply for that there is no explicit solution to the free boundary problem ( [ eq : hjbp1 ] ) - ( [ eq : hjbp3 ] ) .we thus assume that in the region has an expansion around the value function with zero transaction costs in powers of , and we expect the coefficient of to be zero . in order to work with this expansion, we need to also include the variable , and we do that using powers of .for we assume we can now compute and equate the derivatives of with respect to across the boundaries of the region , similar to what is done in janeek & shreve , section 3 , _ heuristic derivation by taylor series_. for sake of brevity this computation is omitted .the result is that for +o(\lambda^{\frac53}),\nonumber\end{aligned}\ ] ] where the coefficient is irrelevant for the rest of this paper and for convenience we also define the constant so that we can write [ remark : heuristic_res ] the heuristic method and the results above are very similar to the ones in janeek & shreve , and the method is essentially similar to the one in whalley & wilmott .it should not come as a surprise that even though is not important , but , for example , is , since the later term would add a contribution of order in . also notice that in case ( ) or , i.e. the agent is not invested in stock at all , or is fully invested , there is no loss of the value function at the order of , because these positions do not require trading except possibly at the initial and final times , so the loss will only be at the order of and as previously stated , we exclude these two cases .[ sec : expansion ] in this section we build the functions and prove in theorem [ thm : comp ] that they are tight lower and upper bounds on the value function .they also turn out to be sub- and supersolutions of the hjb equation ; see , , .we have already stated the first classical theorem [ thm : hjb ] and its corollary lemma [ lemma : reduction ] , asserting that the value function is a viscosity solution of the hjb equation .one way to proceed to establish that supersolutions and subsolutions are indeed upper and lower bounds on the value function is to use a comparison theorem .theorem 8.2 from crandall , ishii & lions asserts that any supersolution dominates any subsolution .since the value function is both a viscosity sub- and super solution , the desired result would follow .however , a standard comparison theorem requires finite boundary values . in our case , that means that it can be applied only when and the value function is zero on . 
in the case it can not be applied since the value function is on the boundary of the solvency region .therefore , similar to janeek & shreve , we instead choose to use a version of the verification lemma from stochastic calculus that can be applied to both cases ; see theorem [ thm : comp ] .the main theorem of this paper is : [ thm : main ] assume and fix a compact .for small enough such that , and for , the value function satisfies where the remainder holds independently of but depends on the compact .moreover , there exist a simple strategy , constructed in lemma [ lemma : existence ] , which is nearly optimal " .that is , for , the expectation of the discounted utility of the final wealth for this strategy satisfies \\ & & = \frac{1}{p}e^{pa(t - t)}-\bigl(\frac{9}{32}(1-p)\,\theta^4(1-\theta)^4\bigr ) ^{\frac{1}3 } ( t - t)e^{pa(t - t)}\,\sigma^2\,\lambda^{\frac23 } + o(\lambda),\end{aligned}\ ] ] where , is the diffusion associated with this trading strategy .in other words , it matches the value function at the order of . here again the term holds independently of but depends on the compact .however , first we need to prove an auxiliary theorem : [ thm1 ] assume and then there exist four smooth functions , defined in lemma [ lemma : roots ] , additionally , there exist two continuous functions with the following properties . the functions are twice continuously differentiable with respect to in except on the curves on these curves have one - sided limits of their second derivatives .moreover , they satisfy on where on curves the second derivative with respect to can be either one of the one - sided derivatives .in addition , satisfy the boundary condition if and if for , and the final time condition inequality .in addition , we have . the plan is then to rigorously argue that for and for any admissible trading strategy with the corresponding diffusion starting from and given by and that is a supermartingale .using the fact that , it follows that \label{eq : supermart}\\ & \ge & \e \left[\u(x_t+y_t-\lambda \abs{y_t}){\big|\mathcal{f}_{t}}\right].\nonumber\end{aligned}\ ] ] taking supremum over all admissible strategies and dividing by , it follows that for the other direction , we would need to find a nearly - optimal " policy + with the corresponding diffusion for starting from , such that is a submartingale . using the fact that , it follows that \label{eq : submart}\\ & & \le \e \left[\u\left(\x_t+\y_t-\lambda \abs{\y_t}\right){\big|\mathcal{f}_{t}}\right ] \le v(t , x , y).\nonumber\end{aligned}\ ] ] dividing by we conclude that hence on . finally , because and because for , we will conclude that . in theorem [ thm : comp ]it will also be shown that the expected utility of the nearly - optimal " policy , which is defined in section [ sec : nearly - optimal_strat ] is bounded below by .we make the above heuristic arguments precise in theorem [ thm : comp ] .the proof of theorem [ thm1 ] is divided into five steps : we recall and of ( [ eq : gamma - zeta ] ) , and respectively .set , where we set , chosen to make well defined .we next define recall that because of our assumption that .set additionally , for , we define functions [ lemma : roots ] for , there are continuous functions satisfying , . for , and are also twice differentiable . in the terms are uniform in consistent with remark [ remark : order ] .proof : the proof is given in the appendix .[ def : regions ] choose small enough that and all lie in .( we have since . 
)define the _ no - trade " _region the buy region , and analogously the sell region [ remark : bounds ] for small enough , it follows from definition [ def : regions ] and lemma [ lemma : roots ] that for and we conclude that for define as a reminder , we have agreed to treat , when .also note that if and were zero and were ignored , then in the region the formula for agrees with the power series expansion . the term in the definition of will be used to create the inequalities .outside of the region , we extend this definition so that would satisfy we then have the derivative formula for , [ remark : c1 ] the extensions and ensure that the operators from and satisfy and for and , respectively .moreover , the equations and guarantee that is defined and continuous at and for .we also have for the function is twice differentiable with respect to except on the curves and , where one - sided second derivatives with respect to exist and equal the respective one - sided limits of the second derivatives .for we use to calculate the derivatives with respect to time to be \frac{{\mathrm{d}}\zeta_1^{\pm}(t ) } { { \,\mathrm{d}t } } \right)\nonumber\\ & & = \left(\frac{1+\lambda z}{1+\lambda\zeta_1^{\pm}(t)}\right)^{p } w^{\pm}_t(t,\zeta_1^{\pm}(t)),\nonumber\end{aligned}\ ] ] where in the last equality we have used the fact that . indeed defined in satisfies on .the desired result follows because of continuous differentiability of with respect to .similarly for , the derivatives with respect to time is finally for we have that [ remark : c11 ] as before , we see that is differentiable with respect to except on the curves and , where one - sided derivatives exists and equal the respective one - sided limit of the derivatives . together with remark [ remark : c1 ]we conclude that recall the operators from , , and respectively .it suffices to verify we , thereby , simultaneously also develop an analogous inequality for needed in the subsequent section .therefore , .\end{aligned}\ ] ] writing and , and using we compute \lambda^\frac13,\label{eq : z - lambda23}\end{aligned}\ ] ] where the last inequality holds for small enough . from remark [remark : bounds ] we obtain e^{pa(t - t ) } \lambda^{\frac23}\nonumber\\ & & + \frac12\left[(1-p)-\frac{12\theta^2(1-\theta)^2}{\nu^3}\right](z-\theta)^2\sigma^2e^{pa(t - t)}\nonumber\\ & & \mp pa m\lambda \left ( 1- \frac{1}{2a } \sigma^2(1-p)(z-\theta)^2\right ) + o(\lambda).\nonumber \ ] ] the definitions of and imply that the first two terms on the right - hand side are zero .for small enough using , we have that we conclude that by the definition of equation can be made positive ( negative ) , since using it can be shown that the term above can be bounded by \lambda. ] at and . 
therefore for such that and we conclude that for small enough it follows from and that for sufficiently small =\\ & & pa ( w^{+}(t,\zeta_1^{+}(t ) ) - m\lambda ) - \gamma_2 e^{pa(t - t)}\lambda^{\frac23 } -paw^{+}(t,\zeta_1^{+}(t))\\ & & \quad + \frac12 \sigma^2 p(1-p ) k^2(z)w^{+}(t,\zeta_1^{+}(t))\\ & & \ge -pam\lambda -\gamma_2e^{pa(t - t)}\lambda^\frac23+\frac12\sigma^2p(1-p)\left[\frac14\nu^2\lambda^{\frac23}-\nu^2\xi(t)\lambda \right]\\ & & \qquad\times\left[\frac{1}{p } e^{pa(t - t ) } - \gamma_2(t)\lambda^\frac{2}3 + m\lambda-\frac{e^{pa(t - t)}}{\nu } h(\zeta_1^{+}(t)-\theta)\right]\\ & & = + e^{pa(t - t)}\left[-\gamma_2+\frac18\sigma^2(1-p)\nu^2\right ] \lambda^{\frac23}- \frac12\sigma^2(1-p)\nu^2\xi(t)e^{pa(t - t)}\lambda\\ & & \quad -pam\lambda\left ( 1- \frac{\nu^2}{2a}\sigma^2(1-p)\left[\frac14\lambda^{\frac23}-\xi(t)\lambda\right ] \right ) + o(\lambda^{\frac43}),\ ] ] where the term can be shown to be \left[\nu\gamma_2(t)\lambda^\frac{2}3 \right. ] and hence can be bounded by , for small enough . by definitions of and the term is zero .moreover , for small enough , we have that \ge \frac12 ] , we have .using this fact , we compute .\end{aligned}\ ] ] we know that and thus , to prove ( [ eq : g ] ) , it suffices to show for our fixed that is positive on ] is analogous .this completes the verification that on so far we have constructed two continuous differentiable functions and showed that they satisfy on by definition satisfy the boundary condition if and if for , and we have that . to conclude the proof of theorem [ thm1 ]we are left only to verify the final time conditions . for such that , from for small enough we have since and because of remark [ remark : bounds ] . when from we see that next , consider the case when .for small enough , from we have where the inequality follows because for small enough , satisfies and all the are uniform in consistent with remark [ remark : order ] .we now see that when , from it follows that analogously , we see that .this completes the proof of theorem [ thm1 ] .in this section we will show that the _ no - trade " _ , buy and sell regions and from definition [ def : regions ] and the strategy associated with these regions is a nearly - optimal " strategy ; see theorem [ thm : main ] . for the _ no - trade " _ region in the original variables as we define the buy and sell regions in the original variables .[ lemma : existence ] let and , and let be the strategy associated with the _ no - trade " _ , buy and sell regions and . then there exists a strong solution to and , such that proof : we define the strategy to be the trading strategy associated with with the _ no - trade " _ ,buy and sell regions and .this strategy requires trading anytime the position is inside the buy or sell regions until the position reaches the boundary of the _ no - trade " _ region .then the strategy calls for buying ( respectively selling ) stock whenever the position is on the boundary of ( respectively ) , so that agent s position does not leave . on the boundaries of the _ no - trade " _ region these trades increase or and push the diffusion in direction pointing to the inside of .we refer to these directions as ( oblique ) directions of reflection .note that this strategy is not optimal .it requires the agent to buy stocks , if she has a positive number of stocks but is still in the buy region , even at time , regardless of the fact that to compute the final utility she would have to convert her stock position into cash. 
however , this causes loss of , and we are able to prove that this is a nearly - optimal " strategy .define and assume for convenience that .the directions of reflection on are the same as on , as long as we stop the process at + so we treat the other two boundaries and as absorbing .the reader can verify that _ case 1 _ conditions of _ theorem 4.8 _ of dupuis & ishii are satisfied on which gives us the existence of the processes and the local time process on indeed there are two condition for _ case 1_. the first condition requires that there is a unique direction of reflection on the boundary and that it changes smoothly as a function of a point of the boundary . in our case ,the directions of reflection are and on the buy and sell boundaries respectively .the second condition requires that such that for where is a ball of radius centered at . in our case , it is easy to see that both conditions are satisfied . letting we get and on , for define and analogously note that exist .the only thing left to verify is that are semi - martingales , that is , are finite a.s. assume the opposite , that is that on some set of positive probability at least one of them , say consider the processes that correspond to the wealth invested in the money market and in stock respectively using the same strategy , but in a market without transaction costs .it follows that and analogously it follows that because for all and are increasing fix a big integer .then , there exists such that on a set , such that and .for that consider the strategy in the zero - transaction cost model that is identical to on the set and sells all the stock at time , that is , and for . ]we define and . [ remark : boundedness ] similar to the argument above , it is easily shown that for any and any admissible strategy and for any stopping time satisfying we have that [ thm : comp ] assume and let be the functions constructed in theorem [ thm1 ]. then and for .proof : if or then the claim follows from theorem [ thm1 ] and lemma [ lemma : zero_boundary ] . for ,let consider the upper bound case first . in light of ( [ eq : valuefct ] ), it suffices to prove that for fixed but arbitrary .let be an admissible policy for this initial position .the function is of class in except possibly on the curves , where , where is define , and let .we can mollify to obtain a function , where is a standard mollifier .apply it s rule to to get \nonumber\\ & & + \sigma\int_t^{\tau_n}e^{-\beta s}y_s(\psi^{+}_\varepsilon)_y(s , x_s , y_s ) \,{\mathrm{d}}w_s .\label{4.5}\end{aligned}\ ] ] since is then the limits as of and of the first derivatives of are respectively and the appropriate first derivatives of .then the limit as of exists , since from it can be expressed using and its first derivatives . by the dominated convergence theoremwe have where , and from theorem [ thm1 ] , the dominated convergence theorem and it follows that \,{\mathrm{d}}s \le 0. ] .if = -\infty ] , it then follows from remark [ remark : boundedness ] that a.s . 
and by lemma [ lemma : zero_boundary ] we have that and we are on a set of measure zero , and follows .we conclude that .take expectation of both sides of to get \le e^{-\beta t}\psi^{+}(t , x_0,y_0),\label{4.8.3}\ ] ] where we have used that by definition .we also have \le \e\left[e^{-\beta \tau_n}\psi^{+}(\tau_n , x_{\tau_n},y_{\tau_n})\ind_{\{\tau_n = t\}}\right ] .\label{4.8.4}\ ] ] the left hand side of converge by the monotone convergence theorem to + ] converges to zero .moreover , since then \rightarrow 0.\ ] ] it follows that \rightarrow 0.\ ] ] we conclude that \ge e^{-\beta t } \psi^{-}(t , x_0,y_0).\ \label{4.10}\ ] ] [ remark : nearly - optim - strat ] define the expected discounted utility of the final wealth associated with the nearly - optimal " strategy , ,~ ( t , x , y)\in\zeroclosedtclosed\times\overline\sv , \label{eq : v - bar}\\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\tilde u(t , z ) \define \bar v(t,1-z , z),\qquad ( t , z ) \in \zeroclosedtclosed\times\overline\su . \label{eq : u - bar}\end{aligned}\ ] ] it follows from and that for we have that or equivalently for we can now finally prove theorem [ thm : main ] .proof of theorem [ thm : main ] : from theorem [ thm : comp ] we see that moreover , , and for fixed .it follows that .moreover , from remark [ remark : nearly - optim - strat ] we have that for we conclude that the strategy from lemma [ lemma : existence ] is nearly - optimal " , that is it matches the value function of the optimal strategy up to order .proof of lemma [ lemma : roots ] : we shall only consider of order . for such , , , so it follows from ( [ eq : f1 ] ) that for we have consider where , with satisfying . from itfollows that then .thus , when we have , and when we have for sufficiently small .therefore , for , and any for sufficiently small there exists satisfying .in other words , . + the proof of the existence of is analogous.for we note that for we have and by the implicit function theorem is a continuously differentiable function and its derivative is the second derivative can be computed similarly .this method can also be used to provide a rigorous proof of the famous result of asymptotic price an option with transaction costs , by .the reader should consult the papers of davis , panas , & zariphopoulou and whalley & wilmott for all the details . to recall some of the definitions , we define the cash value of a number of shares of stock the wealth at the final time , without and with the option liability ,\end{aligned}\ ] ] where is the options strike .the value function at the final time , without and with the option liability respectively ,\ ] ] for where we continuously extend at as we note that let it then follows that and analogously we define _ theorem 2 _ of then states that is a constrained viscosity solution of the hjb equation on . 
as a corollary we have [ cor : viscsol - ww ] and constrained viscosity sub- and supersolutions respectively of the hjb equation on the following is a slight modification of _ theorem 3 _ of [ thm : comparisson - ww ] let be a bounded upper semi - continuous viscosity subsolution of on and let be a bounded from below lower semi - continuous viscosity supersolution of in such that for all and on where and then on as a corollary we have [ cor : comparisson - ww ] let be a bounded upper semi - continuous viscosity subsolution of on and let be a bounded from below lower semi - continuous viscosity supersolution of ( in such that for all and on where and then on next , we define the sub- and super solutions in the as where is in _ no - trade " _region , with .we have kept the notation of , so for the final condition is and for we extend into the rest of the region as when and when let following the methodology of the _ rigorous asymptotic expansion _section , the reader can verify that are sub- and supersolutions of .it s not hard to see that also they satisfy the final conditions and for respectively . andalso that satisfy .similarly , from [ cor : viscsol - ww ] and are constrained viscosity sub- and supersolutions respectively , and the reader can verify that we conclude from _ theorem 2 _ and theorem [ thm : comparisson - ww ] that finally , using similarly arguments as in corollary [ cor : comparison ] we conclude that for a fixed value function is concave , so is monotone . by taking the derivative of in the and regions ( see ( [ eq : solutionbs ] ) and ( [ eq : solutionss ] ) ) we see that .\ ] ] it follows that for and , and hence for all ] .the mean - value theorem gives and using lemma [ lemma : uexpansion ] , we obtain we write for and similarly as in theorem [ thm : ntwidth ] > from , we now see that which implies this suggests that the optimal policy is to keep a wider wedge on the right side of the merton proportion .this extra width makes sense because consumption reduces the money market position. we can do the same calculation for , in which case .taking the square roots in ( [ 4.29 ] ) , ( [ 4.30 ] ) , we now have in fact , when , we have , as we saw in remark [ rem:100 ] . in the key formulas derived in this paper, the transaction cost parameter appears in combination with .according to theorem [ thm : main ] , the highest order loss in the value function due to transaction costs is > from theorem [ thm : ntwidth ] , we see that one way to see the intrinsic nature of the quantity is to define the proportion of capital in stock , , and apply it s formula when is generated by the optimal triple , for which and are continuous , to derive the equation we see that the response of to relative changes in the stock price is .when replicating an option by trading , the position held by the hedging portfolio , denominated in shares of stock , is called the _delta _ of the option , and the sensitivity of the delta to changes in the stock price is the _gamma_. we have here a similar situation , except that is the _ proportion _ of capital held in stock , rather than the _ number of shares _ of stock , and is the sensitivity of this proportion to _ relative _ changes in the stock price .we can now obtain the quantities in ( [ 4.31 ] ) and ( [ 4.32 ] ) by the following heuristic argument .suppose when the transaction cost is we choose to keep in an interval ] . however , for small , for each positive the distribution of in ] , i.e. 
, , where ] grows at rate per unit time .this causes a relative loss of capital per unit time .incurring this loss is equivalent to having zero transaction cost but with the mean rate of return of both the stock and the money market reduced by . according to ( [ 2.a])([2.c ] ) , the optimal expected utility in sucha problem is therefore , the loss due to transaction costs is approximately the total loss is approximately . setting and solving for , we obtain the leading term in ( [ 4.32 ] ) . with this value of , we have thereby obtaining the term in ( [ 4.31 ] ) .it is interesting to note that the quantity also plays a fundamental role in the formal asymptotic expansions of whalley & wilmott .in fact , even the constants and in ( [ 4.31 ] ) and ( [ 4.32 ] ) appear in , the first at the end of section 3.3 and the second in equation ( 3.10 ) ., option pricing via utility maximization in the presence of transaction costs : an asymptotic analysis , ceremade , univ .paris dauphine ( 1999 ) ., multivariate utility maximization with proportional transaction costs , preprint _ http://arxiv.org/abs/0811.3889 _( 2008 ) ., bounds on prices of contingent claims in an intertemporal economy with proportional transaction costs and general preferences , _ finance stoch . _ * 3 * , 345369 ( 1999 ) ., bounds on derivative prices in an intertemporal setting with proportional transaction costs and multiple securities , _ math . finance _ * 11 * , 331346 ( 2001 ) . , m. g. , evans , l. c. and lions , p .-l . , some properties of viscosity solutions of hamilton - jacobi equations , _ trans .soc . _ * 282 * , 487502 ( 1984 ) . , user s guide to viscosity solutions of second order partial differential equations , _ ams bulletin _ * 1 * , 167 ( 1992 ) . ,m. g. and lions , p .-, viscosity solutions of hamilton - jacobi equations , _ trans .soc . _ * 277 * , 142 ( 1983 ) ., hedging and portfolio optimization under transaction costs , _ math . finance _ * 6 * , 113165 ( 1996 ) . , a closed - form solution to the problem of super - replicating under transaction costs , _ finance stoch . _ * 3 * , 3554 ( 1999 ) ., on optimal wealth under transaction costs , _ j. math .econ . _ * 35 * , 223231 ( 2001 ) ., finite - horizon optimal investment with transaction costs : a parabolic double obstacle problem , _ journal of differential equations _ * 4 * , 14451469 ( 2009 ) . , portfolio selection with transaction costs , _ math .res . _ * 15 * , 676713 ( 1990 ) ., european option pricing with transaction costs , _siam j.control_ * 31 * , 470493 ( 1993 ) ., dual formulation of the utility maximization problem under transaction costs , _ ann . appl ._ * 11 * , 13531383 ( 2001 ) .correlated multi - asset portfolio optimization with transaction cost , preprint _ http://arxiv.org/abs/0705.1949 _( 2007 ) . , sdes with oblique reflection on nonsmooth domains , _ the annals of probability _ * 21 * , 55450 ( 1993 ) . ,investment strategies under transaction costs : the finite horizon case , _ management science _ * 40 * , 385404 ( 1994 ) ., balancing small transaction costs with loss of optimal allocation in dynamic stock trading strategies " , _ siam j of appl . math ._ * 70 * , 19771998 ( 2010 ) . , option replication of contingent claims under transaction costs , _ rev .futures markets _ * 8 * , 222239 ( 1989 ) ., asymptotic analysis for optimal investment and consumption with transaction costs , _ finance stoch ._ * 8 * , 181206 ( 2004 ) ., futures trading with transaction costs , _illinois j. 
of math ._ to appear ., optimal portfolio selection with transaction costs and finite horizons , _ rev .studies _ * 15 * , 805835 ( 2002 ) . , portfolio selection with transaction costs , _ j. econ. theory _ * 13 * , 245263 ( 1976 ) ., optimum consumption and portfolio rules in a continuous - time case , _ j. econ .theory _ * 3 * , 373413 ( 1971 ) [ erratum * 6 * , 213214 ( 1973 ) ] . , springer - verlag , 2010 ., optimal investment and consumption with transaction costs , _ ann. applied probab ._ * 4 * , 609692 ( 1994 ) . , an asymptotic analysis of an optimal hedging model for option pricing under transaction costs , _ math .finance _ * 7 * , 307324 ( 1997 ) .
We consider an agent who invests in a stock and a money market account with the goal of maximizing the utility of his investment at the final time in the presence of a proportional transaction cost. The utility function is a power utility $U(x)=x^p/p$. We provide a heuristic and a rigorous derivation of the asymptotic expansion of the value function in powers of $\lambda^{1/3}$, where $\lambda$ is the transaction cost parameter. We also obtain a "nearly optimal" strategy, whose utility asymptotically matches the leading terms of the value function.

Keywords: transaction costs, optimal control, asymptotic analysis, utility maximization. Classification: G13; 90A09, 60H30, 60G44.
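As a rough numerical companion to the main theorem quoted above, the short script below evaluates the leading-order loss term $\bigl(\tfrac{9}{32}(1-p)\,\theta^4(1-\theta)^4\bigr)^{1/3}(T-t)e^{pA(T-t)}\sigma^2\lambda^{2/3}$ together with a candidate half width of the no-trade interval obtained from the heuristic sketched at the end of the paper: balancing the quadratic displacement loss against the transaction loss gives $\Delta=\bigl(\tfrac{3}{2(1-p)}\theta^2(1-\theta)^2\bigr)^{1/3}\lambda^{1/3}$, which reproduces the $\tfrac{9}{32}$ coefficient. The expressions used here for the Merton proportion $\theta$ and the growth constant $A$ are the standard zero-transaction-cost formulas and should be treated as assumptions, since the corresponding displayed equations above are not legible; all parameter values are purely illustrative.

```python
import numpy as np

def merton_proportion(alpha, r, sigma, p):
    """Standard Merton proportion theta = (alpha - r) / ((1 - p) sigma^2).
    Assumed form: the paper's own displayed definition is not legible above."""
    return (alpha - r) / ((1.0 - p) * sigma**2)

def no_trade_half_width(theta, p, lam):
    """Leading-order half width of the no-trade interval around theta,
    Delta = (3 theta^2 (1-theta)^2 / (2 (1-p)))**(1/3) * lam**(1/3),
    from the heuristic balance of displacement loss and transaction loss."""
    return (1.5 * theta**2 * (1.0 - theta)**2 / (1.0 - p))**(1.0 / 3.0) * lam**(1.0 / 3.0)

def leading_order_value(t, T, p, sigma, theta, lam, A):
    """Value function expansion from the main theorem:
    v ~ (1/p) e^{pA(T-t)}
        - (9/32 (1-p) theta^4 (1-theta)^4)^{1/3} (T-t) e^{pA(T-t)} sigma^2 lam^{2/3}."""
    growth = np.exp(p * A * (T - t))
    loss_coeff = (9.0 / 32.0 * (1.0 - p) * theta**4 * (1.0 - theta)**4)**(1.0 / 3.0)
    return growth / p - loss_coeff * (T - t) * growth * sigma**2 * lam**(2.0 / 3.0)

if __name__ == "__main__":
    alpha, r, sigma, p = 0.07, 0.02, 0.35, 0.3              # illustrative market parameters
    A = r + (alpha - r)**2 / (2.0 * (1.0 - p) * sigma**2)   # assumed Merton growth constant
    theta = merton_proportion(alpha, r, sigma, p)
    for lam in (1e-4, 1e-3, 1e-2):
        print(f"lambda={lam:.0e}  half width={no_trade_half_width(theta, p, lam):.4f}  "
              f"value={leading_order_value(0.0, 1.0, p, sigma, theta, lam, A):.6f}")
```

Note how the two scalings differ: the width of the no-trade region shrinks like $\lambda^{1/3}$ while the utility loss shrinks like $\lambda^{2/3}$, which is the central quantitative message of the paper.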
in 1984 , bennett and brassard proposed a revolutionary concept that _ key distribution _ may be accomplished through public communications in quantum channels .hopefully , the privacy of the resulted key is to be guaranteed by quantum physical laws alone , quite independent of how much computational resource is available to the adversary .the primary quantum phase of the proposed protocol is a sequence of single photons produced by alice ( the sender ) and detected by bob ( the receiver ) .the security proof of the bb84-protocol ( or its many variants ) for adversaries with unrestricted power is a difficult mathematical problem , and has only been achieved with any generality in the last few years . in brief , the bb84-protocol is secure even with channel noise and possible detector faults for bob , provided that the apparatus used by alice to produce the photons is perfect .the purpose of this paper is to remove this last assumption , by proposing and giving a concrete design for a new concept , _ self - checking source _ , which requires the manufacturer of the photon source to provide certain tests ; these tests are designed such that , if passed , the source is guaranteed to be adequate for the security of the bb84-protocol , even though the testing devices may not be built to the original specification . a self - checking source must receive inputs from multiple locations ( two in our case ) and returns classical outcomes at these locations .the test needs only to consider the classical inputs and the classical outcomes .it is well known that there are clever ways to construct imperfect sources for the coding used in the bb84-protocol that behave quite normal on the surface , but seriously compromise the security . in other words ,the bb84 coding together with the standard test executed in the bb84-protcol are problematic because the external data can be reproduced by quantum apparatus which are not secure at all .we propose a different source that is self - checking and yet can be used to generate the bb84 coding .our result means that one does not have to perform an infinite number of ways to check all possible devious constructions . in some waysour test can be regarded as simple self - testing quantum programs .our result requires that , when the inputs to the source are fixed , the distribution of probability for the classical outcomes is also fixed .our result is that , if these distributions of probability ( associated with the different inputs ) are exactly as in the specification for our self - checking source , the state transmitted is a direct sum of states that are individually normally emitted by a perfect source . in practice, we can not expect these probabilities to be exactly as in the specification for the self - checking source .however , one can test that they are not too far away from this specification .furthermore , one should expect that the closer to their specified values these probabilities will be , the closer to the direct sum described above the source will be .this is usually sufficient to prove security . in section 2 ,we show how the main mathematical question arises from the security requirement from the bb84-protocol . 
in section 3, the precise question is formulated , and the main theorem stated .the proof of the main theorem is given in section 4 .ideally , the objective of key distribution is to allow two participants , typically called alice and bob , who initially share no information , to share a secret random key ( a string of bits ) at the end .a third party , usually called eve , should not be able to obtain any information about the key .in reality , this ideal objective can not be realized , especially if we give unlimited power to the cheater , but a quantum protocol can achieve something close to it . see ( and more recently ) for a detailed specification of the quantum key distribution task .one of the greatest challenges in quantum cryptography is to prove that a quantum protocol accomplishes the specified task .one can experimentally try different kinds of attacks , but one can never know in which way the quantum apparatus can be defective .in any case , such experiments are almost never done in practice because it is not the way to establish the security of quantum key distribution .the correct way is a properly designed protocol together with a security proof .recently , there has been a growing interest in practical quantum cryptography and systems have been implemented .however , proving the security of quantum key distribution against _ all _ attacks turned out to be a serious challenge . during many years , many researchers directly or indirectly worked on this problem . using novel techniques , a proof of security against all attacks for the quantum key distribution protocol of bennett and brassardwas obtained in 1996 .related results were subsequently obtained , but as yet is the only known proof of security against all attacks. a more recent version of the proof with extension to the result is proposed in .also , the basic ideas of might lead to a complete solution if we accept fault tolerant computation ( for example , see ) , but this is not possible with current technology . in the quantum transmission , alice sends photons to bob prepared individually in one of the four bb84 states uniformly picked at random .the bb84 states denoted , , and correspond to a photon polarized at , , and degrees respectively ( see figure [ bb84_picture ] ) .( we reserve the states and for further use : we will have to add two other states in our analysis . ) ( 40,30 ) ( 10,10)(1,0)14 ( 25,10 ) ( 10,10)(0,1)14 ( 10,25 ) ( 10,10)(1,1)10 ( 21,18 ) ( 10,10)(-1,1)10 ( -3,22 ) bob measures each photon using either the rectilinear basis or the diagonal basis uniformly chosen at random .the basic idea of the protocol is the following .both , eve and bob , do not know alice s bases until after the quantum transmission .eve can not obtain information without creating a disturbance which can be detected .bob also disturbs the state when he uses the wrong basis , but this is not a problem . after the quantum transmission , alice and bob announce their bases .alice and bob share a bit when their bases are identical , so they know which bits they share .the key point is that it s too late for eve because the photons are on bob s side .however , the security of the protocol relies on the fact that the source behaves as specified , and this is the main subject of this paper .informally , the source used in the original bb84-protocol can be described as a blackbox with two buttons on it : _base2-button _ and _ base3-button_. 
when alice pushes the base2-button , the output is either or , where and form an orthonormal basis of a two - dimensional system , with each possibility occurring with probability . after the base-button is pushed , of the output , only the vector goes out to bob ; bit is only visible to alice .similarly , if alice pushes the base3-button , the output is either or , with each possibility occurring with probability , where the suggested way in to achieve the above is to have the blackbox generates a fixed state , say , then the bit is uniformly chosen at random and this state is rotated of an appropriate angle to create the desired state ( assuming that the base-button is pressed ) .the security proof of the protocol extends to sources beyond mentioned above . to obtain our self - testing source, we need to consider a different type of sources .conjugate coding source _ consists of a pure state in a hilbert space , and two measurements ( each binary - valued ) , defined on but operating only on coordinates in . pushing base2-button , base3-button performs respectively measurement , .( we have restricted the form of the initial state to be a pure state instead of a general mixed state .this is without loss of generality for our result , as we will see . )let , where , denote the projection operators to the subspaces corresponding to the outcomes for measurement .( we sometimes use the notation to denote the measurement itself . ) after performing the measurement , only the coordinates in are made available for transmission . thus ,if button is pushed with outcome , the density operator in the transmitted beam is . for convenience, we sometimes identify with , and with .thus , if button is pushed with outcome , the density operator is .the security proof of the protocol is valid if the source satisfies , for , the conditions where and are orthonormal bases that satisfy equation ( 1 ) .it is well known ( and easy to see ) that the following source satisfies the above condition .let , each be a two - dimensional hilbert space .let be two pairs of orthonormal bases of related by equation ( 1 ) ; similarly let be two pairs of orthonormal bases related by equation ( 1 ) for .let be the bell state .let be two measurements on that operate only on the coordinates in .the measurement consists of the two orthogonal subspaces , .the measurement consists of the two orthogonal subspaces , .if we restrict them to only , the measurement , are the measurements in the bases and respectively .clearly , ( 2 ) is true .we call this source the _ perfect system_. more generally , the security proof extends to systems that behave like a mixture of orthogonal ideal systems .a source is an _ extended perfect system _ if there exist in orthogonal two dimensional subspaces ( , some index set ) , with denoting states in that respect the same ortogonality condition as the above states in and equation ( 1 ) , such that for some probability distribution on , now comes the question .if a manufacturer hands over a source and claims that it is a perfect system , how can we check this claims , or at least , makes sure that it is an extended perfect system ?if the source is a perfect system , let be the measurements operating on in exactly the same way as on .that is , let ( where ) be the projection operators to subspaces by with outcome ; project to , , and project to , , respectively . 
now observe that the following are true for , , we can ask the manufacturer to provide in addition two measuring devices outside the blackbox corresponding to .a test can be executed to verify that these equations are satisfied ( see the related discussion in the introduction ) .furthermore , as a matter of physical implementation , to make sure that and operate on , respectively , we can further demand that the buttons are replaced by two measuring devices outside the blackbox .is that sufficient to guarantee that we have at least an extended perfect system ?unfortunately , the answer is no .it is not hard to construct examples where ( 4 ) is satisfied , but it is not an extended ideal system ( and in fact , security is gravely compromised ) .however , as we will see , if we add one more measurement appropriately on each side , and perform the corresponding checks , then it gurantees to be an extended perfect system. that will be the main result of this paper .an object is called an _ ideal source _ if the following are valid : each of is a 2-dimensional hilbert space with being a pair of orthonormal basis of satisfing equation ( 1 ) , and being a pair of orthonormal basis of satisfying equation ( 1 ) ; is the bell state ; are the projection operators on the states , respectively ; are the projection operators on the states , respectively . to describe ,let ( ) be the state after being normalized to unit length .the states and have a particular status in our proof , and we alternatively denote and .then are respectively the projection operators on the states , . as usual, we consider and as two alternative notations for one and the same projection operators on .clearly , are the projection operators on corresponding to measuring with respect to three bases of ( the bases for , at an angle of , with repect to the basis for ) .the projection operators operate on coordinates in , and are similarly defined as the .let these numbers can be easily computed . for example , and . a _ self - checking source_ consists of an initial state , three measurements acting on coordinates in , and three measurements acting on coordinates in , such that the following conditions are satisfied : we will see that a self - checking source gives rise to an extended ideal system .an _ extended ideal source_ is an orthogonal sum of ideal sources in a similar sense as an extended perfect system in relation to perfect systems . that is , if there is an index set , orthogonal two dimensional subspaces with ( or alternatively ) denoting the state in , orthogonal two dimensional subspaces with ( or alternatively ) denoting the state in , such that for some ( possibly complex ) numbers on with , furthermore , for each , for every projection , acts exactly on like the corresponding projection on in the ideal source case .that is , if .the following fact is easy to verify .* fact 1 * any extended ideal source is a self - checking source . also , it is clear that from any self - checking source , by omitting the measurements , one obtains a conjugate coding source .* fact 2 * the conjugate coding source obtained from an extended ideal source must be an extended perfect system .the converse of fact 1 is our main theorem . *main theorem * any self - checking source is an extended ideal source . 
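Because the self-checking conditions only constrain classical outcome statistics, it may help to see the statistics that the ideal source itself produces. The sketch below builds the Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$, measures both sides in real bases rotated by angles $a$ and $b$, and checks that the probability of equal outcomes is $\cos^2(a-b)$, which is where numbers such as $\cos^2(\pi/8)=(2+\sqrt{2})/4\approx 0.8536$ come from. The particular angles $\{0,\pi/8,\pi/4\}$ used here are an assumption standing in for the (illegible) angles in the definition of the ideal source above.

```python
import itertools
import numpy as np

def basis(angle):
    """Orthonormal qubit basis rotated by `angle`:
    |e0> = cos(a)|0> + sin(a)|1>,  |e1> = -sin(a)|0> + cos(a)|1>."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c, s]), np.array([-s, c])

# Bell state (|00> + |11>)/sqrt(2) as a 2x2 coefficient matrix.
phi = np.eye(2) / np.sqrt(2.0)

def joint_prob(a, i, b, j):
    """P(outcome i on side A measured at angle a, outcome j on side B at angle b)."""
    amp = basis(a)[i] @ phi @ basis(b)[j]   # <e_i(a)| x <e_j(b)| applied to phi
    return amp ** 2

angles = [0.0, np.pi / 8, np.pi / 4]        # assumed test angles on each side
for a, b in itertools.product(angles, repeat=2):
    p_equal = joint_prob(a, 0, b, 0) + joint_prob(a, 1, b, 1)
    # For the ideal source, P(equal outcomes) = cos^2(a - b).
    assert abs(p_equal - np.cos(a - b) ** 2) < 1e-12
    print(f"a={a:.4f}  b={b:.4f}  P(equal outcomes)={p_equal:.4f}")
```

In an actual test one would estimate these probabilities from the observed classical outcome frequencies and check that they are close to the ideal values, in the spirit of the remark at the end of the introduction.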
it follows from the main theorem and fact 2 that a self - checking source provides an adequate source for the bb84 quantum key distribution protocol .we remark that in our definition of self - checking source , the restriction of the initial state to a pure state instead of a mixed state is not a real restriction .given a source with a mixed state satisfying equation ( 6 ) , we can construct one with a pure state ( by enlarging appropriately ) satisfying ( 6 ) .we can apply the main theorem to this new source , and conclude that it also gives rise to an adequate source for the bb84-protocol .it is well known , from discussions about _ epr experiments _ ( see e.g. ) , that quantities such as exhibit behavior characteristic of quantum systems that can not be explained by classical theories .one may view our main result as stating that such constraints are sometimes strong enough to yield precise structural information about the given quantum system ; in this case it has to be an orthogonal sum of epr pairswe give in this section a sketch of the main steps in the proof .let be a self - checking source .we show that it must be an extended ideal source . in section 4.1 , we derive some structural properties of the projection operators as imposed by the self - checking conditions , but without considering in details the constraints due to the tensor product nature of the state space . in sections 4.2 and 4.3 , the stateis decomposed explicitly in terms of tensor products , and the properties derived in section 4.1 are used to show that this decomposition satisfies the conditions stated in the main theorem . in this subsection , we present some properties of the projected states ( such as ) as consequences of the constraints put on self - checking sources .the proofs of these lemmas are somewhat lengthy , and will be left to the complete paper . * lemma 1 * for every and , we have .let for , where are two hilbert spaces .we say that is _ isormorphic _ to if there is an inner - product - preserving linear mapping such that for all .let , and be elements of defined by * lemma 2 * is isomorphic to .* lemma 3 * let . then . *lemma 4 * let . then . since there is a symmetry between the projection operators and , the following is clearly true .* lemma 5 * lemmas 2 - 4 remain valid if the projection operators and are exchanged .we now prove that the state can be decomposed into the direct sum of epr pairs .we begin with a decomposition of , which is equal to by lemma 1 . * lemma 6 * one can write where is an index set , are complex numbers , and , are two respectively orthonormal sets of eigenvectors of the operators ( acting on ) and ( acting on ). _ proof _ the lemma is proved with the help of schmidt decomposition theorem .we omit the details here . let .define , and for .let be the subspace spanned by and ; let be the subspace spanned by and .the plan is to show that and that have all the properties required to satisfy the main theorem . in the remainder of this subsection, we use lemmas 2 - 5 to show that each ( ) behaves correctly under the projection operators ( ) . in the next subsection, we complete the proof by showing that all ( ) are orthogonal to each other .by lemma 2 , is isomorphic to . in particular, this implies that any linear relation must also be satisfied if are replaced by the appropriate projected states . 
now means that , for each , any linear relation must also be satisfied if we make the following substitutions : * lemma 7 * for each , is isomorphic to ._ proof _ use the preceding observation and the orthogonality between and , and the orthogonality between and .we omit the details here . note that by definition . > from lemma 7 , it is easy to see that is a unit vector perpendicular to .in fact , is mapped to the vector under the isomorphism in lemma 7 .> from lemma 7 , for the purpose of vectors in the space , the projection operators correspond to choosing the coordinate system obtained from the system rotated by the angle ; similarly , correspond to choosing a coordinate system obtained from the system rotated by the angle .it remains to show that correspond to the coordinate system itself . by definition .it remains to prove that .to do that , we use lemma 3 . observe that since by lemma 3 , we must have .this completes the proof that the projection operators behave as required on the subspace . as stated explicitly in lemma 5, we can obtain the symmetric statement that the the projection operators behave as required on the subspace .now that we have determined the behavior of the projection operators on , we can in principle calculate any polynomial of the projection operators on the state . by lemma 4, can be written as this gives after applying the rules and symplifying , we obtain as , this proves it remains to show that all are orthogonal to each other .( a symmetric argument then shows that all are also orthogonal to each other . )let .assume that is not orthogonal to .we derive a contradiction . by definition, is spanned by , and is spanned by . clearly, and are not orthogonal to each other , as all the other pairs are orthogonal .choose a coordinate system for the space spanned by the four vectors such that where .from our knowledge about the behavior of , we infer that where .similary , where .as the inner product of and is which is non - zero , we conclude that and are not orthogonal .this contradicts the fact that are projection operators to orthogonal subspaces .this completes the proof .the security problem for imperfect source is a difficult one to deal with . the present paper is a step in only one possible direction .we have also limited ourselves to the simplist case when the correlation probabilities are assumed to be measurable precisely .we leave open as future research topics for extensions to more general models .
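Since the proof of Lemma 6 rests on the Schmidt decomposition and the displayed statement of that lemma is only partly legible above, the following generic sketch may serve as a companion: it computes the Schmidt decomposition of an arbitrary bipartite pure state from the singular value decomposition of its coefficient matrix and verifies the reconstruction. It illustrates the tool only; it is not a computation with the particular projected state appearing in the lemma.

```python
import numpy as np

def schmidt_decomposition(state, dim_a, dim_b):
    """Schmidt decomposition of a bipartite pure state.

    `state` is a vector of length dim_a * dim_b whose entry (x*dim_b + y)
    is the coefficient of |x>_A |y>_B.  Returns (coeffs, vecs_a, vecs_b)
    with state = sum_i coeffs[i] * kron(vecs_a[i], vecs_b[i])."""
    m = state.reshape(dim_a, dim_b)
    u, s, vh = np.linalg.svd(m)
    keep = s > 1e-12
    return s[keep], u[:, keep].T, vh[keep, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim_a, dim_b = 3, 4
    psi = rng.normal(size=dim_a * dim_b) + 1j * rng.normal(size=dim_a * dim_b)
    psi /= np.linalg.norm(psi)

    coeffs, vecs_a, vecs_b = schmidt_decomposition(psi, dim_a, dim_b)
    rebuilt = sum(c * np.kron(a, b) for c, a, b in zip(coeffs, vecs_a, vecs_b))
    print("Schmidt coefficients:", np.round(coeffs, 4))
    print("reconstruction error:", np.linalg.norm(rebuilt - psi))
```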
Quantum key distribution, first proposed by Bennett and Brassard, provides a possible key distribution scheme whose security depends only on the quantum laws of physics. So far the protocol has been proved secure even under channel noise and detector faults of the receiver, but is vulnerable if the photon source used is imperfect. In this paper we propose and give a concrete design for a new concept, a _self-checking source_, which requires the manufacturer of the photon source to provide certain tests; these tests are designed such that, if passed, the source is guaranteed to be adequate for the security of the quantum key distribution protocol, even though the testing devices may not be built to the original specification. The main mathematical result is a structural theorem which states that, for any state in a Hilbert space, if certain EPR-type equations are satisfied, the state must be essentially the orthogonal sum of EPR pairs.
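As a final illustration of the protocol setting of the paper above, the toy Monte Carlo below simulates the quantum transmission and sifting phase of BB84, as described verbally in Section 2, with an ideal source, an ideal detector, a noiseless channel and no eavesdropper. It demonstrates only the bookkeeping of bases and sifting (about half of the positions survive and the sifted strings agree exactly); it says nothing about security, which is precisely where the quality of the source matters.

```python
import numpy as np

rng = np.random.default_rng(1)

def bb84_sift(n_photons=10_000):
    """Toy BB84 quantum phase with an ideal source, detector and channel.

    Alice picks a random bit and a random basis (0 = rectilinear, 1 = diagonal)
    for each photon; Bob measures in a random basis.  With matching bases Bob
    recovers Alice's bit exactly; otherwise his outcome is uniformly random
    and the position is discarded during sifting."""
    alice_bits = rng.integers(0, 2, n_photons)
    alice_bases = rng.integers(0, 2, n_photons)
    bob_bases = rng.integers(0, 2, n_photons)

    same_basis = alice_bases == bob_bases
    bob_bits = np.where(same_basis, alice_bits, rng.integers(0, 2, n_photons))
    return alice_bits[same_basis], bob_bits[same_basis]

a, b = bb84_sift()
print("sifted key length:", len(a))               # roughly half the photons
print("disagreements    :", int(np.sum(a != b)))  # 0 for an ideal, untampered channel
```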
as i write these words in june 2014 , it has been just over a month since the retirement celebration for alan selman at the university at buffalo s center for tomorrow .i ca nt think of a more fitting location for the celebration , given that alan s technical contributions to the field are of such beauty and insight that they are as important to the field s future as they have been to its past and present . any retirement is bittersweet , but alanhas mentioned that he will be keeping his hand in the field in retirement . that happy fact helped all of us at the celebration focus on the sweet side of the bittersweet event .warm talks and memories were shared by everyone from the university s president , the department chair , and alan s faculty colleagues all the way up to the people who are dearest of all to alan his postdocs and students .the warmth was no surprise .alan is not just respected by but also is adored by those who have worked with him .anyone who knows alan knows why .alan is truly kind , shockingly wise , and simply by his nature devoted to helping younger researchers better themselves and the field .but in fact , i think there is more to say something far rarer than those all too rare characteristics .what one finds in alan is a true belief in an absolute , unshakable belief in the importance of understanding of the foundations of the field .now , one might think that alan holds that belief as an article of faith .but my sense is that he holds the belief as an article of understanding .like all the very , very best theoreticians , alan has a terrific intuition about what is in the tapestry of coherent beauty that binds together the structure of computation .he does nt see it all or even most of it no one ever has .but he knows it is there . andin these days when many nontheory people throw experiments and heuristics at hard problems , often without much of a framework for understanding behaviors or evaluating outcomes , not everyone can be said to even know that there is an organized , beautiful whole to be seen .further , alan has such a strong sense for what is part of the tapestry that far more than most people he has revealed the tapestry s parts and has guided his collaborators and students in learning the art of discovering pieces of the tapestry .and that brings us to the present article and its theme of the beauty of the structures and the structure that alan has revealed the notions , the directions , and the theorems . for all of uswhose understanding is nt as deep as alan s , the beauty of alan s work has helped us to gain understanding , and to know that that tapestry really is out there , waiting to be increasingly discovered by the field , square inch by square inch , in a process that if it stretches beyond individual lifetimes nonetheless enriches the lifetimes of those involved in the pursuit of something truly important . 
to summarize alan s career in a sentence that is a very high although utterly deserved compliment : alan is a true structural complexity theorist .it would be impossible to cover in a single reasonably sized article all or even most of alan s contributions to the field .so this article will celebrate alan s career in a somewhat unusual way .given that the heart of alan s contribution to the field is truly beautiful structures notions , directions and foundational results regarding those this article will simply present to the reader a few of those structures and point out ( when it is nt already apparent ) why they are beautiful and important .we wo nt be trying to survey the results that are by now known about the structures , although we ll often mention the results that alan and his collaborators obtained on the structures in their original work , and we ll sometimes mention some later results . but the core goal here is to present the beautiful structures themselves .we ll do that in section [ s : structures ] .but before turning to that , it would be a crime not to at least allude to the stunning breadth of alan s contributions in service and developing human infrastructure , and to the recognition he has received for his scholarship and service .alan has trained many ph.d .students ( his already graduated ph.d .advisees are joachim grollmann , john geske , roy rubinstein , ashish naik , aduri pavan , samik sengupta , and liyu zhang , and his current ph.d .advisees are andrew hughes , dung nguyen , and nathan russell ) and postdocs ( mitsunori ogihara , edith hemaspaandra , and christian glaer ) , and their contributions both under alan and beyond have been important to the field . those of us who were not his students or postdocs but have had the privilege of working with alan have been enriched , inspired , and uplifted by his insights and vision . the field and his school have recognized alan s research and service contributions with a long list of the highest awards and most important positions .alan is a fellow of the acm , a fulbright scholar , and the recipient of an alexander von humboldt foundation senior research award and a japan society for the promotion of science invitational fellowship .he has been awarded the acm sigact distinguished service prize , and has a long record of extensive editorial service to the field , including being the editor - in - chief of theory of computing systems since 2001 .he is the recipient of the state university of new york chancellor s award for excellence in scholarship and creative activities , and of the university of buffalo s exceptional scholar award for sustained achievement .alan was acting dean of the college of computer science at northeastern university , and chaired ub s computer science and engineering department for six years .with steve homer , alan wrote a computability and complexity textbook , and he edited the complexity theory retrospectives .alan was an important part of efforts to obtain stronger government grant support for the study of theoretical computer science .he was instrumental in the creation of , and served the first term as conference chair of , the ieee conference on structure in complexity theory ( now called the ieee conference on computational complexity ) , and for that won the ieee computer science society meritorious service award . 
that s some record !in this article , we ll present five of alan s beautiful notions concepts that alan has introduced or very substantially advanced .that is such a small number relative to the dozens of topics that alan has contributed to that we ll be skipping topics that for almost any other person would in and of themselves be the highlight of an entire career .in fact , we wo nt even be trying to pick off the `` top five '' notions , but rather will just be selecting a set of five very lovely notions .for example , we ll skip right over the seismic contribution of ladner , lynch , and selman to the definitions of and understanding of the relative powers of the rich range of polynomial - time reducibilities that are so widely used today . and similarly , since alan and his collaborators will soon be surveying issues related to these topics in a _ sigact news _complexity theory column , we also will leave untouched all issues related to disjoint pairs , promise problems , and propositional proof systems , even though alan s importance there is great , stretching across more than a half dozen papers , from the seminal 1980s work of even , selman , and yacobi to the remarkable 2010s work of hughes , pavan , russell , and selman .alan and his collaborators have in recent decades resolved a long - open issue , bringing unity to the understanding of mitoticity , completeness , and autoreducibility for many central complexity classes . among their advancesis that every ( many - one polynomial - time ) np - complete set is ( many - one polynomial - time ) autoreducible , and ( many - one polynomial - time ) autoreducibility and ( many - one polynomial - time ) mitoticity coincide .so , for example , every np - complete set is so repetitively structured that there exists a set such that and are infinite and ( many - one polynomial - time ) equivalent , and so certainly are themselves each ( many - one polynomial - time ) np - complete . also , alan and his collaborators have brought great light to the extent to which this type of behavior persists or fails for other types of reductions , such as for reductions more flexible than many - one reductions , or reductions in the logspace world . and yes , you ve guessed it , we also wo nt be covering any of that work here , since happily alan and his collaborators half a decade ago wrote for the _ sigact news _complexity theory column a survey article on the work in this line up to that point , although if you are interested ( as well you should be ! ) , please do nt miss their recent work on this line in icalp-13 and stacs-14 . just itself , but as it functions when accessed through relativization ( e.g. 
, and ) , and in its cousin known as `` strong '' computing , and in the so - called strong nondeterministic reductions based on that cousin plays an important role in complexity theory .alan was a central player in the key early work that built this collection of concepts .and we wo nt cover that here .we also wo nt cover what is now called the left - set technique , which was created by alan and which for example was used to devastating effect by ogihara and watanabe in their work showing that if any np - complete set polynomial - time bounded - truth - table reduces to a sparse set , then .there are many other beautiful themes , notions , and results in alan s work , which we not only wo nt cover below but which we also havent mentioned above .in fact , it should already be clear what we wo nt cover about alan s work is enough to fill three or four extremely satisfying , productive , important careers .but were we to go on listing the important and lovely notions due to alan that we _wo nt _ cover , there would be no room left to actually cover any structures that alan developed .so let us move right on to our select five . as a reminder , for these notions we wo nt at all be doing a survey of what is known , but rather we will be trying to convey what the notion is , why it is beautiful , and why its introduction by alan and his collaborators was important . and a shorthand, this article will sometimes say `` alan and his collaborators '' or even perhaps `` alan '' when speaking of work by alan joint with others ; this is not in any way meant as a slight to those collaborators , but is simply since the focus of this article is on alan .the citation labels ( e.g. , `` [ bls84 ] '' ) will generally make it clear to the reader when we are employing this shorthand .search issues are important not just in the area of theory . ai researchers also are intensely focused on how to explore spaces .so suppose you come to a fork in the road .there might be a treasure down one or both of the paths or neither might hold a treasure . which wayshould you go ?one of alan s beautiful structures addresses the issue of when polynomial - time functions can help you solve the above problem .after all , you do nt really need to know whether a given road has a treasure .that would be great to know , but maybe it is too much to hope for or too computationally expensive .happily , all you really need to be able to do in the above situation is to choose one of the roads such that it is true that _ if either of the roads holds a treasure , then the road you choose holds a treasure_. in a quite amazing series of papers , alan introduced and broadly explored the notion of p - selectivity , which captures precisely the above issue .we all know what polynomial time ( ) is .a set is in p exactly if there is a polynomial - time machine that accepts on each string in and rejects on each string in .that is , one has a polynomial - time decision algorithm for membership in the set . 
for p - selectivity ,one is required to have a polynomial - time semi - decision algorithm for the set .in particular and here we use a particular one of the various equivalent definitions of this quite robust concept a set is said to be p - selectiveexactly if there is a polynomial - time function that takes as its input two arguments , and , and has the property that , for each and , what this says is that a set is p - selective exactly if there is a polynomial - time function that , given any two elements , always chooses one of them , and if at least one of them is in the set , the one it chooses is in the set .one sometimes hears this described by saying that the function chooses the element `` more likely''or , better , `` no less likely''to be in the set .this is a fine description , at least in the `` no less likely '' version , as long as one keeps in mind that the probabilities here are all zero and one .p - selectivity is capturing the notion of wise search being able to decide which way to go at forks in the road .it also is one of a great variety of notions ( such as almost polynomial time , p / poly , p - closeness , near testability , nearly near testability , etc . )that try to capture a wider range of sets than p does , yet to still have some natural polynomial - time action at their core .although this lovely notion , p - selectivity , was introduced by alan , alan s seminal papers are quite open as to his inspiration .the p - selective sets ( which are also sometimes called the semi - feasible sets , since they are the sets having polynomial - time semi - decision algorithms ) were inspired by jockusch s notion from recursive function theory of the semi - recursive sets . in his career , alan has often drawn on his broad understanding of recursive function theory to improve computer science s pursuit of complexity - theoretic insights .this is of great value , given that so much of complexity theory from the polynomial hierarchy to reductions to completeness to the isomorphism conjecture to immunity and bi - immunity to oracles and much more is inspired by recursive function theory .the present article focuses on presenting the beautiful structures themselves , rather than surveying everything known about them .but in this case it is important to note that although the p - selective sets have been extensively studied since alan s original series of papers ( and indeed , there is even a monograph completely devoted to selectivity theory ) , alan s original series of papers already went breathtakingly far in exploring this concept .for example ,alan s original papers already proved that there are p - selective sets that are not in p. indeed , he ( see also ko regarding a more flexible type of left cut ) showed that the left cut set ( do nt worry if you do nt know what that term means ) of any real number is p - selective .it follows from this that there are arbitrarily hard p - selective sets , e.g. 
, there are p - selective sets that are so wildly undecidable that they are nt even in the arithmetical hierarchy .alan also showed that if even one np - hard set is p - selective , then .the proof is a lovely application of the self - reducibility of satisfiability .going back to the parallel with wise search , this basically says that for np - like search spaces that can be naturally cut in half repeatedly , having a p - time wise search algorithm is just too much to hope for .the proof is so crisp that it is worth sketching here .suppose that some np - hard set is p - selective . here is a polynomial - time algorithm for satisfiability .given a boolean formula whose satisfiability we want to test , set its lexicographically first variable to true , get the resulting formula , and by s np - hardness transform that into a question about .do the same for our original formula with its first variable set to false .take the outputs of these two processes , and use them as the inputs to the hypothesized polynomial - time p - selector function for .that function will output one of them , and based on that choice , we ll know either `` if the formula is satisfiable then there is a satisfying assignment in which the first variable is set to true , '' or `` if the formula is satisfiable then there is a satisfying assignment in which the first variable is set to false . ''so we take whichever assignment was just suggested to us by the selector , and we stick with it and similarly assign the second variable , again using the selector to fix it as being true or false , and so on until all the variables are assigned . at the end , we get to a single assignment such that if there is any assignment that satisfies the original formula , then _ that _ assignment satisfies the original formula .so we see if that one assignment satisfies the formula .if it does the formula is satisfiable , and otherwise the formula is not satisfiable . lovely .the results mentioned above are just a few examples of the broad investigation alan carried out . the facts that p - selective sets can by arbitrarily hard , and ca nt be np - hard unless , might lead one to say that this class is terribly far beyond p. however , using a tournament - inspired divide - and - conquer framework, ko proved that each p - selective set is information - wise very close to p. in particular , for each p - selective set there is a polynomial - time algorithm that given a polynomial amount of extra information based only on the _ length _ of the input string can correctly accept the set . in the jargon ,each p - selective set is in the complexity classp / poly , or equivalently , each p - selective set has small circuits .thus the p - selective sets have two faces : they can be so hard as to be undecidable , yet only a hair s breadth of information keeps them from being in .beautiful structures often attract much attention , and that has certainly been the case with the p - selective sets .much of that attention has been devoted to generalizing and extending concepts and results from alan s seminal papers .for example , although alan showed that no np - hard set ( i.e. 
, no set np - hard with respect to many - one polynomial - time reductions ) can be p - selective unless , there followed an intense and productive research line to see whether that claim could be extended beyond many - one reductions to more flexible types of reductions such as bounded - truth - table and truth - table reductions , see , and whether alan s just - stated result analogously extends to other complexity classes , and what holds for nondeterministic variants of selectivity .see for a survey of work along those lines and more generally for a survey of the broad research stream including important later work by alan launched by alan s beautiful structure known as p - selectivity . as a final comment , in section [ s : hnos ] of this articlewe ll soon see how selectivity theory surprisingly resolved an important , yet seemingly unrelated , question about removing the ambiguity of nondeterministic functions .as everyone knows , theoretical computer science is largely built around the complexity of decision problems .after all , np is ( at least if one stays away from certain textbooks ! ) a class of decision problems .how could there be anything wrong with this approach ?after all , satisfiability for example clearly has the property that its search and its decision versions are polynomial - time turing interreducible .alan is one of the people who from the very beginning recognized the tremendous importance of studying functions directly .there are many compelling reasons to study functions directly .historically , alan was probably primarily motivated by the extremely interesting role and richness of nondeterministic function classes and his work there was seminal and perhaps also by his expertise in one - way functions .additionally , experts such as alan have always known that even for deterministic classes , the justification given above for focusing on decision problems has weaknesses .satisfiability as used above is not as canonical as it often is .indeed , it has since been shown that unless some np - complete sets are not self - reducible , which is relevant since self - reducibility is central in reducing search problems to their natural associated decision problems . but more crucially , turing reductions are quite powerful , and so may give a blurred view of the actual complexity of the objects involved .early on , alan and his collaborators book and long ( , see also ) introduced a rich range of polynomial - time nondeterministic reductions and studied their properties and relationships .alan also accessibly spread the word about the importance of taking a function - based view , through his papers on `` a taxonomy of complexity classes of functions '' and `` much ado about functions '' ( see also his survey paper on one - way functions ) .as he was writing those papers , alan was also obtaining new insights into search ( a type of function version of problems ) versus decision , e.g. , his paper `` p - selective sets and reducing search to decision vs.self-reducibility '' ( and yes , there is p - selectivity playing a role again ! 
) , and he also interestingly studied the role of functions as outputs of oracles .these days , function versions of problems play such a large role that it is easy to forget that thirty years ago it was nt yet clear that that would ever be the case .alan s pioneering stress on functions , quirky for its time , was quite prescient .is finding all solutions to a problem easier than finding one solution ?one is tempted to answer : never !after all , if one can find all solutions , then one can simply take ( for example ) the lexicographically smallest solution , and one has found one solution ( if one exists ) . surprisingly , the reasoning just used falls apart in a nondeterministic context .let us see this . and then let us see what alan did about this through bringing together the theory of functions with a generalization of the theory of p - selectivity .consider a nondeterministic polynomial - time turing machine that on each path either rejects or accepts .we will consider the machine to be computing a partial , multivalued function ( which in mathematics one would probably call a relation , but in complexity theory the term function is often used even for such multivalued objects ) .namely , each rejecting path is nt considered to contribute anything to the function .but whatever string is on the first work tape of the machine on nondeterministic paths that accept is considered to be an output of the function on the given input .this notion is what is called an npmv function ( a nondeterministic polynomial - time multivalued function ) .the class npmv was introduced by book , long , and selman in their seminal work on nondeterministic functions .let us make the nature of npmv clear by giving a very important example . consider the nondeterministic polynomial - time turing machine ( nptm ) that on input , interprets as a boolean formula , nondeterministically guesses an assignment for the variables of and writes that assignment on its first work tape , and then deterministically uses its other tapes to check whether the assignment satisfies , and if it does the machine ( on that nondeterministic computation path ) accepts and if not the machine ( on that nondeterministic computation path ) rejects .notice that this machine on input outputs all satisfying assignments of . that set could be exponentially large , but that is nt a problem here the output set is in effect distributed among all the paths , and so it is nt ever stuffed into a single , giant output string .this function is called .an npsv function ( a nondeterministic polynomial - time single - valued function ) is exactly the same notion as an npmv function , except npsv functions must in addition satisfy the property that on each input the cardinality of the output set is at most one .so if no path accepts , that is fine , as the cardinality of the output set is zero . andif one or more paths accept , that is also fine , as long as every one of those accepting paths has precisely the same string on its first work tape , since then that string will be the one and only output .a central concept in the study of multivalued functions is the notion of a refinement . refines exactly if on each input , 1 . 
outputs at least one value if and only if outputs at least one value , and 2 .every output of is an output of .we saw above that is an npmv function .but does it have an npsv refinement ?put another way : as mentioned above , it is easy for np function - computing machines to find all solutions of an input boolean formula .but can an np function - computing machine find _ one _ solution of an input boolean formula ( when one exists ) ?this question is asking whether there is an nptm that for unsatisfiable formulas has all its paths reject , and for each satisfiable formula has at least one path that computes a satisfying assignment ( and accepts ) and every accepting path must compute the same satisfying assignment as every other accepting path .it will now be clear why the trick that works in the deterministic case seems unhelpful here . in the deterministic case we took all the solutions and output the lexicographically smallest .but for an npmv function to do that , at least in the most obvious way , a path would have to be able to figure out whether the value it would like to output is such that every other path that would like to output a value would in fact like to output a lexicographically equal - or - larger value ( and in that case our path would go ahead and output its value , and otherwise would kill itself off by rejecting ) . but figuring _ that _ out seems to take an extra quantification that our machine does nt have in its arsenal . big picture , npsv is in some sense asking for a strong degree of coordination among paths that have no way of communicating with each other .of course , the previous paragraph is just an intuitive handwave , not a proof . proving that npmv functions do nt all have npsv refinements ( unless the polynomial hierarchy collapses ) required a surprising twist .alan and his collaborators had developed nondeterministic analogues of p - selectivity .although at first one might think that selectivity has little to do with npmv and npsv functions , nondeterministic selectivity and its connection with nonuniform classes such as turned out to be exactly the tool alan and his collaborators needed to prove that if has an npsv refinement ( equivalently , if every npmv function has an npsv refinement ) , then the polynomial hierarchy collapses to .a collapse of the polynomial hierarchy to still can be obtained even if one merely assumes that one can refine at - most-2-valued npmv functions to npsv functions .later work about nonuniform classes let one conclude from alan s work a slightly more extensive collapse in both these cases , in particular collapsing the polynomial hierarchy to .alan and his collaborators also explored the question of , for , what collapses would occur if one could reduce at - most--valued npmv functions to at - most--valued npmv functions , and obtained polynomial - hierarchy collapses to for each such case .strengthening those collapses to a collapse to remains open to this day and is a quite interesting challenge that has stymied many a graduate student .see also for some cases where refinements in fact are possible . in summary , alan did nt define nondeterministic functions and selectivity theory for the purpose of shedding light on whether one could reduce many solutions to one solution . however , his notions were so beautiful and flexible that they were central in the resolution of that issue .informally put , relativizing by an oracle means giving all machines unit - cost access to , i.e. 
, machines can as often as they like write a string onto a query tape and immediately the machine will be told whether that string is a member of . in some sense, relativization changes the universe s `` ground rules '' about what information is available to computing machines , but does so fairly all machines now have access to . experienced complexity theorists may be a bit surprised by what we ve chosen as our example of a beautiful contribution by alan to the theory of relativization .after all , one of alan s earliest works is the famous 1979 paper `` a second step toward the polynomial hierarchy '' with baker , which showed that there is an oracle relative to which the second and third levels of the polynomial hierarchy separate .the proof is extremely clever and powerful employing what one might dub a nested double contradiction architecture .how amazing that proof was is made clear by how long it took to take the `` third step '' ; that did nt occur until the work many years later of yao and hstad , which drew on different techniques and separated the entire polynomial hierarchy .structural complexity theorists will remember well the important paper by homer and selman that built a relativized world in which all -complete sets were polynomial - time isomorphic .not too many years later , fenner , fortnow , and kurtz , surely encouraged by that paper , obtained the same result for np itself , thus directly speaking to the relativized version of berman and hartmanis s isomorphism conjecture .rogers even put a great cherry on top of that , by obtaining isomorphism while not killing off one - way functions and we ll speak more about those in section [ s : one ] , since alan s work on them is seminal .but as beautiful and important as those results of alan s are , at least as beautiful , if perhaps less well known these days , is the pioneering work by alan and his collaborators on positive relativization . what is the biggest perceived weakness of relativization theory ?it probably is that it often seems that one can do almost anything in relativized worlds , and that doing so has little connection with the real world .for example , baker , gill , and solovay famously showed that there are oracles making p equal to np , and that there are oracles making p not equal to .does this resolve the p versus np problem in the real world ?fat chance .those oracles actually do tell us something quite important , namely , they tell us that no proof technique that relativizes can prove either or .and so there has been quite a bit of research regarding proof techniques such as arithmetization ( see ) and results such as that seem not to relativize .an interesting perspective on this is given by hartmanis et al . , who argue that there have been nonrelativizing proof techniques for a long time ; see also . 
in any case, the fact that relativization results such as those of baker , gill , and solovay fail to resolve the analogous real - world problems is often presented as a clear weakness of studying oracles .positive relativization provides an utterly lovely response to this weakness : _ we should find cases where obtaining an oracle result would imply real - world results ._ positive relativization as such was introduced and explored by alan , ron book , and their collaborators ( , see also ) .let us illustrate positive relativization by one of its most striking examples .( note : what we here speak of as `` positive relativization '' also sweeps in what book distinguishes as `` negative relativization . '' ) suppose we claim that there is sparse oracle relative to which the polynomial hierarchy collapses .are you thrilled ?perhaps not , since that might not say anything about the real world .suppose we claim that there is sparse oracle relative to which the polynomial hierarchy is infinite .now are you thrilled ?perhaps again not , since that might not say anything about the real world .you re seeming pretty hard to thrill , my friend . butthese things _ would _ say something about the real world . that is because balczar , book , long , schning , and selman proved the following result . 1 .the polynomial hierarchy collapses if and only if there is a sparse oracle relative to which the polynomial hierarchy collapses .the polynomial hierarchy is infinite if and only if there is a sparse oracle relative to which the polynomial hierarchy is infinite .this theorem tells us we should care , and indeed be thrilled , if someone has a sparse oracle that makes the polynomial hierarchy infinite , or has a sparse oracle that makes the polynomial hierarchy collapse . indeed ,that person thanks to the groundwork of alan just presented will have so changed the real - world landscape of complexity that he or she is sure to win a turing award .so positive relativization links oracle results to real - world collapses and separations . andthat is a lovely thing to do . by now , a broad range of positive relativizations are known , involving issues ranging from sparse sets ( as above ) to tally sets ( historically the earliest form of what now is called positive relativization ) , to number of queries , to aspects of the form and structure of the querying .a cynic might say that such results just tell us which oracle results are too hard to hope to ever get , since they would resolve major real - world issues .but an optimist might say that these offer an extra potential path to resolve those major real - world issues .the path is nt a universal one .for example , alan s student roy rubinstein writing `` on the limitations of positive relativization '' showed that for `` semantic '' classes such as up , , and , positive - relativization attempts with tally oracles fail even though the analogous positive - relativization results are well known to succeed for np and the polynomial hierarchy s levels .but even if not universal , the path of positive relativization is certainly a beautiful insight , especially since it has been broadly applied to the centrally important class . 
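before moving on to the last of the five structures , it may help to see the selector - based satisfiability procedure from the p - selectivity section written out . the sketch below is purely illustrative and not taken from alan s papers : `reduce_to_L` stands for a hypothetical polynomial - time many - one reduction from satisfiability to the assumed p - selective np - hard set , `selector` for that set s hypothetical p - selector , and , to make the sketch runnable , the demo at the bottom cheats by letting the set be satisfiability itself and implementing the selector by brute force ; the only point is the control flow of fixing one variable at a time .

```python
# illustrative sketch only; reduce_to_L and selector are hypothetical placeholders
from itertools import product

def evaluate(clauses, assignment):
    # clauses: list of clauses, each a list of (variable, wanted_truth_value) literals
    return all(any(assignment[v] == want for v, want in clause) for clause in clauses)

def brute_force_satisfiable(clauses, variables, fixed):
    free = [v for v in variables if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        if evaluate(clauses, {**fixed, **dict(zip(free, bits))}):
            return True
    return False

def search_via_selector(clauses, variables, reduce_to_L, selector):
    """fix variables one at a time, always keeping the branch the selector
    deems 'no less likely' to land in the assumed p-selective np-hard set."""
    assignment = {}
    for var in variables:
        x_true = reduce_to_L(clauses, {**assignment, var: True})
        x_false = reduce_to_L(clauses, {**assignment, var: False})
        chosen = selector(x_true, x_false)        # the selector returns one of its two inputs
        assignment[var] = (chosen == x_true)
    # if any assignment satisfies the formula, the one built here does
    return assignment if evaluate(clauses, assignment) else None

if __name__ == "__main__":
    variables = ["a", "b", "c"]
    clauses = [[("a", True), ("b", True)], [("b", False), ("c", True)], [("a", False)]]
    # toy stand-ins: the "reduction" just packages the instance, and the "selector"
    # for L = SAT is simulated by brute force -- only the control flow matters here
    reduce_to_L = lambda cls, fixed: (tuple(map(tuple, cls)), tuple(sorted(fixed.items())))
    def selector(x, y):
        cls, fixed = x
        sat_x = brute_force_satisfiable([list(c) for c in cls], variables, dict(fixed))
        return x if sat_x else y
    print(search_via_selector(clauses, variables, reduce_to_L, selector))
```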
for the final of our five beautiful structures sculpted by alan, we take the rigorous definition of the notion of a ( complexity - theoretic ) one - way function , and the complete characterization ( in terms of the collapse and noncollapse of important complexity classes ) of whether one - way functions exist .alan did this work with his first ph.d .student , joachim grollmann , and this was also achieved independently by ko ( see also ) .many readers have probably already seen the definition and theorem alluded to above , since they can be found in such excellent complexity texts as for example papadimitriou and du and ko .but since it is well worth everyone knowing , let us quickly give the definition and the theorem.separate p from and win the prover a million - dollar clay mathematics institute prize .so achieving the `` easier '' step is quite important although surely not easy . ] a ( total ) function is a ( complexity - theoretic ) one - way function if and only if 1 . is polynomial - time computable , 2 . is polynomially honest ( i.e. , there is a polynomial such that for each , ; informally put , does nt shrink its inputs more than polynomially much ) , 3 . is an injective ( i.e. , one - to - one ) function , and 4 . is not polynomial - time invertible ( since we re concerned here only with total , injective functions , we may use as our definition of this that for any polynomial - time function , there exists an such that ) .valiant s class up is the class of all np sets that are accepted by some nptm that on no input has more than one accepting path .the characterization theorem that brings together complexity classes and the existence of one - way functions is the following : one - way functions exist if and only if .the proof of this theorem elegantly goes back and forth between the world of one - way functions and the world of nondeterministic turing machines .this concept and characterization naturally inspired much related work .( see ( * ? ? ?* chapter 2 : the one - way function technique ) for a survey - like treatment , including proofs , of the characterization mentioned above and the related results mentioned in this paragraph . ) for example , researchers have looked at ( what in effect is the study of ) -bounded - ambiguity one - way functions ( which interestingly enough stand or fall together with alan s notion of one - way functions , thanks to a nice induction proof of watanabe ) , and for multi - argument one - way functions one can study algebraic properties such as associativity and commutativity ( but again such functions turn out to stand or fall together with alan s notion of one - way functions ) .all such work is clearly indebted to the beautiful , seminal work of alan , his student grollmann , and ko .anyone who has read alan s articles knows that alan always knows the exact right number of words to use to motivate , explain , and develop a concept .so to conclude , let us try to take a page ( or a quarter of a page ) from alan and keep things as focused as possible , so that the points this article has been trying to make speak clearly : alan s work stands out as extraordinary in that it has introduced and powerfully explored a tremendous number of utterly beautiful structures .we spoke at the start of the article about the elusive tapestry of coherent beauty that binds together the structure of computation .alan s career has been very successfully devoted to revealing parts of that tapestry . 
beyond that , alan has been boundlessly generous and inspirational to his many younger collaborators and has been a leader in service to the field .so with warmest thanks for so very much , and comforted by the knowledge that alan intends to keep his hand in the field , let us wish alan the most wonderful of retirements .j. balczar , r. book , t. long ,u. schning , and a. selman . sparse oracles and uniform complexity classes . in _ proceedings of the 25th ieee symposium on foundations of computer science _ , pages 308313 .ieee computer society press , october 1984 .r. book .towards a theory of relativizations : positive relativizations . in _ proceedings of the 4th annual symposium on theoretical aspects of computer science _ , pages 121 .springer - verlag _ lecture notes in computer science # 247 _ , 1987 .r. book .restricted relativizations of complexity classes . in j.hartmanis , editor , _ computational complexity theory _ , pages 4774 .american mathematical society , 1989 .proceedings of symposia in applied mathematics # 38 .p. faliszewski and m. ogihara .separating the notions of self- and autoreducibility . in _ proceedings of the 30th international symposium on mathematical foundations of computer science _ , pages 308315 .springer - verlag _ lecture notes in computer science # 3618 _ ,august / september 2005 . c. glaer , d. nguyen , c. reitwiener , a. selman , and m. witek .autoreducibility of complete sets for log - space and polynomial - time reductions . in _ proceedings of the 40th international colloquium on automata , languages , and programming , part i _ , pages 473484 , july 2013 .a. hughes , a. pavan , n. russell , and a. selman . a thirty year old conjecture about promise problems . in_ proceedings of the 39th international colloquium on automata , languages , and programming , part i _ , pages 473484 .springer - verlag _ lecture notes in computer science # 7391 _ , july 2012 .r. karp and r. lipton .some connections between nonuniform and uniform complexity classes . in _ proceedings of the 12th acm symposium on theory of computing _ , pages 302309 .acm press , april 1980 .an extended version has also appeared as : turing machines that take advice , _lenseignement mathmatique _ , 2nd series , 28:191209 , 1982 .a. meyer and l. stockmeyer .the equivalence problem for regular expressions with squaring requires exponential space . in _ proceedings of the 13th ieee symposium on switching and automata theory _ , pages 125129 .ieee press , october 1972 .
professor alan selman has been a giant in the field of computational complexity for the past forty years . this article is an appreciation , on the occasion of his retirement , of some of the most lovely concepts and results that alan has contributed to the field .
in the last few years the physics community has paid a lot of attention to the field of complex networks .a considerable amount of research has been done on different real world networks , complex network theory and mathematical models .many real world systems can be described as complex networks : www , internet routers , proteins and scientific collaborations , among others .complex network theory benefitted from the study of such networks both from the motivational aspect as well as from the new problems that arise with every newly analyzed system . in this paper we will present an analysis of wikipedias in different languages as complex networks .wikipedia is a web - based encyclopedia with an unusual editorial policy that anybody can freely edit and crosslink articles as long as one follows a simple set of rules .although there has been a lot of debate on the quality of wikipedia articles , recent findings reported in suggest that the factographic accuracy of the english wikipedia is not much worse than that of the editorially compiled encyclopedias such as _ encyclopaedia britannica_. the important facts for this paper are : 1 . that authors are encouraged to link out of their articles , and 2 . that each wikipedia is a product of a cooperative community .the former comes in part from the need for lexicographic links providing context for the topic at hand , and in part from the fact that the official wikipedia article count , serving as the main criterion for comparing encyclopedia sizes , includes only articles that contain an out - link .a community arises initially from the need to follow the central wikipedia policy of the neutral point of view ( npov ) : if there is a dispute regarding the content of an article , effectively all the opposing views and arguments regarding the topic should be addressed .although there are many occasional contributors , the bulk of the work is done by a minority : roughly 10% of contributors edit 80% of the articles , and the differing degree of authors involvement serves as a rough criterion for a meritocracy .hence , there is no central structure that governs the writing of a wikipedia , but the process is not entirely haphazard .we view each wikipedia as a network with nodes corresponding to articles and directed links corresponding to hyperlinks between them .there are over 200 wikipedias in different languages , with different number of nodes and links , which are continuously growing by the addition of new nodes and creation of new links .the model of wikipedia growth based on the preferential attachment " has been recently tested against the empirical data .although different wikipedias are developed mostly independently , a number of people have contributed in two or more different languages , and thus participated in creating different wikipedia networks .a certain number of articles have been simply translated from one language wikipedia into another . 
also , larger wikipedias set precedents for smaller ones on issues of both structure and governance .there is thus a degree of interdependence between wikipedias in different languages .however , each language community has its unique characteristics and idiosyncrasies , and it can be assumed that the growth of each wikipedia is an autonomous process , governed by the function affects structure " maxim .namely , despite being produced by independent communities , all wikipedias ( both in their content and in their structure ) aim to reflect the received knowledge " , which in general should be universal and inter - linguistic .it is expected that community - specific deviations of structure occur in cases where the content is less universal than e.g. in natural science , but it is also expected that such deviations plague each wikipedia at some stage of its development .we thus assume we are looking at real network realizations of different stages of essentially the same process of growth , implemented by different communities . by showing which network characteristics are more general and which more particular to individual wikipedias and the process of wikipedia growth , we hope to provide insight into generality and/or particularity of the network growth processes .the main focus of our study is to compare networks of lexicographic articles between different languages .however , the wikipedia dataset is very rich , and it is not easily reducible to a simple network in which each wiki page is a node , as various kinds of wiki pages play different roles . in particular , the dataset contains : * _ articles , _ normal " wiki pages with lexicographic topics ; * _ categories , _ wiki pages that serve to categorize articles ; * _ images and multimedia _ as pages in their own right ; * _ user , help _ and _ talk _ pages ; * _ redirects , _ quasi - topics that simply redirect the user to another page ; * _ templates , _ standardized insets of wiki text that may add links and categories to a page they are included in ; and * _ broken links , _ links to articles that have no text and do not exist in the database , but may be created at some future time .we studied 30 largest language wikipedias with the data from january 7 , 2005 .especially we focused on eleven largest languages as measured by the number of undirected links . in order of size ,as measured by the number of nodes , these are : english ( en ) , german ( de ) , japanese ( ja ) , french ( fr ) , swedish ( sv ) , polish ( pl ) , dutch ( nl ) , spanish ( es ) , italian ( it ) , portuguese ( pt ) and chinese ( zh ) . based on different possible approaches to the study we analyzed six different datasets for each language with varying policies concerning the selection of data .we present our results for the smallest subset we studied for each language , designed to match the knowledge network of actual lexicographic topics most closely .it excludes categories , images , multimedia , user , help and talk pages , as well as broken links , and replaces redirects and templates with direct links between articles . 
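as a rough illustration of the filtering just described , the following fragment assembles such an article network with networkx . the record format , the `redirect_targets` mapping and the set of excluded page types are assumptions made for the example ; template handling is not shown , and this is not the authors actual processing pipeline .

```python
# minimal sketch: build the filtered article graph from pre-parsed link records
import networkx as nx

EXCLUDED_TYPES = {"category", "image", "media", "user", "help", "talk", "broken"}

def build_article_graph(link_records, redirect_targets):
    """link_records: iterable of (src, src_type, dst, dst_type) tuples;
    redirect_targets: dict mapping redirect titles to real article titles."""
    g = nx.DiGraph()
    for src, src_type, dst, dst_type in link_records:
        if src_type in EXCLUDED_TYPES or dst_type in EXCLUDED_TYPES:
            continue                                  # drop non-article pages and broken links
        src = redirect_targets.get(src, src)          # replace redirects with their targets
        dst = redirect_targets.get(dst, dst)
        if src != dst:                                # avoid self-loops created by redirect resolution
            g.add_edge(src, dst)                      # nodes are articles, edges are hyperlinks
    return g
```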
for a detailed explanation of the dataset selection issues, please see our webpage .an interesting measurement of the wikipedia dataset statistical properties is given in , and a nice visualization of the wikipedia data can be found in ..[tab : tabela ] the table of power - law exponents for in , out and undirected degree distributions for the eleven largest languages .the exponents for all languages except polish follow the pattern .it is not a surprise that the polish language exhibits uncommon behavior having in mind its unusual degree distribution depicted in fig .[ fig : kumulativne ] .the average values and corresponding errors of the universal exponents are calculated in two ways .the upper one is calculated as a mean value and a standard deviation of different exponents in the sample .the lower is calculated with the assumption that all exponents are the same and differences are related to exponent estimation i.e. the error is calculated as the standard error of the mean .it is important to stress that exponents are not estimated from the degree , but from for which the estimated exponent is stable . [cols="<,^,^,^,^,^,^ " , ] the path analysis of the wikipedia networks reveals interesting results , as shown in table [ tab : path ] for the eleven largest languages .the studied quantities are the average path length of the undirected paths in wcc ( calculated as an arithmetic mean ) and the average path length of the directed paths in wcc ( calculated as a harmonic mean ) . for both of these quantities ,the largest wikipedias show no evidence of scaling of the average path lengths with the network size .however , the values of for all examined networks are close to the expected average path length for a random network , so the wikipedia networks exhibit small - world behavior in the original sense .in addition , the shortest average path values for the eleven largest languages are very close to one another , with very small scattering around the average value of the sample ( see table [ tab : path ] ) .this scattering is considerably smaller than that of .the last quantity we present in this paper are the triad significance profiles ( tsp ) , introduced in , which describe the local structure of the networks .counts of specific triads ( directed three - node subgraphs , shown in fig . [ fig : motifs2 ] along the abscissa ) in the original network are compared to counts of triads in randomly generated networks with the same degree distribution .the significance profile is the normalized vector of statistical significance scores for each triad , here is the count of appearances of the triad in the original network , while and are the average and the standard deviation of the counts of the triad over a sample of randomly generated networks . in ,milo et al .identify superfamilies of networks for which triad significance profiles closely resemble each other . assuming that one can look at the wikipedia as a representation of the knowledge network created by many contributors, one could expect a possible new superfamily of networks .the triad significance superfamily from one would expect to be closest to the wikipedia is the one that includes www and social contacts .the triad significance profile of the largest seven wikipedias is depicted in the fig .[ fig : motifs2 ] , and shows common features found in all examined wikipedias .these tsps indeed belong to the same superfamily as the tsps of www and social contacts reported in , see fig .[ fig : motifcorr ] . 
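the normalization behind the profiles is the standard one of milo et al . , presumably the formula the sentence above referred to : z_i = ( N_i - <N_i^rand> ) / std( N_i^rand ) , with the profile being the z vector rescaled to unit length . as a hedged sketch ( not the authors code , and with a possibly simpler degree - preserving randomization than the one they used ) , the profile of a directed graph can be estimated as follows with networkx and numpy .

```python
import numpy as np
import networkx as nx

# the 13 connected three-node subgraph types used in triad significance profiles
CONNECTED_TRIADS = ["021D", "021U", "021C", "111D", "111U", "030T", "030C",
                    "201", "120D", "120U", "120C", "210", "300"]

def triad_counts(g):
    census = nx.triadic_census(g)
    return np.array([census[t] for t in CONNECTED_TRIADS], dtype=float)

def triad_significance_profile(g, n_random=100, seed=0):
    rng = np.random.default_rng(seed)
    real = triad_counts(g)
    din = [d for _, d in g.in_degree()]
    dout = [d for _, d in g.out_degree()]
    rand = []
    for _ in range(n_random):
        # degree-preserving randomization via the directed configuration model
        r = nx.directed_configuration_model(din, dout, seed=int(rng.integers(1 << 31)))
        r = nx.DiGraph(r)                       # collapse parallel edges
        r.remove_edges_from(nx.selfloop_edges(r))
        rand.append(triad_counts(r))
    rand = np.array(rand)
    z = (real - rand.mean(axis=0)) / (rand.std(axis=0) + 1e-12)
    return z / np.linalg.norm(z)                # normalized profile, comparable across sizes
```

the resulting unit vectors can then be compared across networks of different sizes , which is what the pairwise correlations in fig . [ fig : motifcorr ] do .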
within this superfamily ,the www of nd.edu exhibits higher correlation with the wikipedias than the social networks do .since the tsp takes into account the reciprocity of directed links , one could naively expect that wikipedia reciprocity would also be very similar to the www s reciprocity , but we found this is not the case .the scaling of the triads which are the most represented in the wikipedia networks ( denoted as 10 and 13 ) with the network size is given in fig .[ fig : motifsscale ] . since both of these triads represent triangles ( see fig .[ fig : motifs2 ] ) they contribute to increasing the clustering coefficient .the wikipedia tsp thus sheds additional light on the large clustering of wikipedia networks , fig .[ fig : clustering2 ] .( color online ) the triad significance profiles of wikipedias are very similar .the x - axis depicts all possible triads of a directed network , while the y - axis represents the normalized z score for a given triad , given by eq .( [ tspeq ] ) .. tsp shapes resemble the tsp of www reported in .,scaledwidth=40.0% ] ( color online ) the correlations between tsps of the eleven largest languages , the www of the nd.edu domain and the social networks of positive sentiment between prisoners ( soc1 ) and leadership class students ( soc2 ) .wikipedias except for polish and italian shown in order of size .all wikipedia profiles and the www profile are pairwise very similar . with the exception of polish and italian , profiles of languages of similar sizestend to be more closely correlated .also , smaller wikipedias resemble the social networks better than the larger ones do.,scaledwidth=40.0% ] ( color online ) the scaling of the normalized z score for the most represented triads with the size of the network .the plot demonstrates that the representation of the triad 13 ( circles ) grows , whereas the representation of the triad 10 ( squares ) falls with the growth of the network .this effectively means that wikipedia has a tendency of creating strong ( bidirectional ) links for the well connected cliques.,scaledwidth=40.0% ]we have examined the following characteristics of different language wikipedia article networks : degree distribution properties , growth , topology , reciprocity , clustering , assortativity , average shortest path lengths , and triad significance profiles . based on our results , it is very likely that the growth process of wikipedias is universal .the similarities between wikipedias in all the measured characteristics suggest that we have observed the same kind of a complex network in different stages of development .we have also found that certain individual wikipedias , such as polish or italian , significantly differ from the other members of the observed set .this difference can be seen most easily in their degree distributions , but also shows in assortativity , clustering and the triad significance profile . in the case of the polish wikipedia , where the discrepancies are the greatest , we have found that they were caused by an editorial decision involving calendar pages .this shows that the common growth process we have observed is very sensitive to community - driven decisions .we have shown further that wikipedia article networks on the whole resemble the www networks . 
specifically , they belong to the tsp superfamily described in that includes www and social networks , and exhibit small - world behavior , with average shortest path lengths close to those of a random network .in some characteristics , however , large wikipedias seem to diverge from the www . their reciprocity is lower than that of the www reported in , andtheir average shortest path lengths seem to tend to a stable value .it is possible that the specific properties of wikipedias are related to the underlying structure of knowledge , but also that their shared features stem from growth dynamics driven by free contributions , common policies and community decision making .whichever the case , the regularities we have found point to the existence of a unique growth process . these findings in turnsupport the method of using statistical ensembles in network research , and , finally , affirm the role of statistical physics in modeling complex social interaction systems such as wikipedia .* acknowledgment .* the work of v. zlati and h. tefani was supported by the ministry of science , education and sport of the republic of croatia .v. zlati would like to thank m. martinis and the members of his project for the support during the last 4 years .the authors would like to thank d. vinkovi and p. lazi for important help in computation .we would like to thank r. milo , n. kashtan and u. alon for making the data and the algorithms from available on the weizmann institute of science web site .we also thank g. caldarelli , l. adamic , p. stubbs , f. milievi , and e. heder for valuable suggestions and discussions , and k. brner and the information visualization laboratory , indiana university , bloomington , for support and cooperation .the work of m. boievi was partly supported by a national science foundation grant under iis-0238261 .t. holloway , m. boievi , and k. brner , analyzing and visualizing the semantic coverage of wikipedia and its authors .submitted to complexity , special issue on understanding complex systems . also available as _http://xxx.arxiv.org/abs/cs.ir/0512085 _j. wales and m. rani , private communication with m. boievi .a. z. broder , r. kumar , f. maghoul , p. raghavan , s. rajagopalan , s. stata , a. tomkins , j. weiner , computer networks , * 33 * 309 ( 2000 ) .
wikipedia is a popular web - based encyclopedia edited freely and collaboratively by its users . in this paper we present an analysis of wikipedias in several languages as complex networks . the hyperlinks pointing from one wikipedia article to another are treated as directed links while the articles represent the nodes of the network . we show that many network characteristics are common to different language versions of wikipedia , such as their degree distributions , growth , topology , reciprocity , clustering , assortativity , path lengths and triad significance profiles . these regularities , found in the ensemble of wikipedias in different languages and of different sizes , point to the existence of a unique growth process . we also compare wikipedias to other previously studied networks .
the interference channel ( ic ) models the situation where several independent transmitters communicate with their corresponding receivers over a common channel . due to the shared medium, each receiver suffers from interferences caused by the transmissions of other transceiver pairs .the research of ic was initiated by shannon and the channel was first thoroughly studied by ahlswede .later , carleial established an improved achievable rate region by applying the superposition coding scheme . in , han andkobayashi obtained the best achievable rate region known to date for the general ic by utilizing simultaneous decoding at the receivers .recently , this rate region has been re - characterized with superposition encoding for the sub - messages . however , the capacity region of the general ic is still an open problem except for several special cases .many variations of the interference channel have also been studied , including the ic with feedback and the ic with conferencing encoders / decoders . in this paper , we study another variation of the ic : the state - dependent two - user ic with state information non - causally known at both transmitters .this situation may arise in a multi - cell downlink communication problem , where two interested cells are interfering with each other and the mobiles suffer from some common interference ( which can be from other cells and viewed as state ) non - causally known at both base - stations . notably , communication over state - dependent channelshas drawn lots of attentions due to its wide applications such as information embedding and computer memories with defects .the corresponding framework was also initiated by shannon in , which established the capacity of a state - dependent discrete memoryless ( dm ) point - to - point channel with causal state information at the transmitter . in , gelfand and pinsker obtained the capacity for such a point - to - point case with the state information non - causally known at the transmitter .subsequently , costa extended gelfand - pinsker coding to the state - dependent additive white gaussian noise ( awgn ) channel , where the state is an additive zero - mean gaussian interference .this result is known as the dirty - paper coding technique , which achieves the capacity as if there is no such an interference . for the multi - user case ,extensions of the afore - mentioned schemes were provided in for the multiple access channel , the broadcast channel , and the degraded gaussian relay channel , respectively . in this paper , we study the dm state - dependent ic with state information non - causally known at the transmitters and develop two coding schemes , both of which jointly apply rate splitting and gelfand - pinsker coding . in the first coding scheme ,we deploy simultaneous encoding for the sub - messages and in the second one , we deploy superposition encoding for the sub - messages .the associated achievable rate regions are derived based on the respective coding schemes .the rest of the paper is organized as follows . the channel model and the definition of achievable rate region are presented in section [ sec_2 ] . in section [ sec_3 ] ,we provide two achievable rate regions based on the two different coding schemes , respectively .finally , we conclude the paper in section [ sec_5 ] .consider the interference channel as shown in fig .1 , where two transmitters communicate with the corresponding receivers through a common channel dependent on state . 
the transmitters do not cooperate with each other ; however , they both know the state information non - causally , which is unknown to either of the receivers . each receiver needs to decode the information from the respective transmitter .we use the following notations throughout this paper .the random variable is defined as with value in a finite set .let be the probability mass function of on .the corresponding sequences are denoted by with length .the state - dependent two - user interference channel is defined by , where are two input alphabet sets , are the corresponding output alphabet sets , is the state alphabet set , and is the conditional probability of given .the channel is assumed to be memoryless , i.e. , where is the element index for each sequence .a code for the above channel consists of two independent message sets and , two encoders that assign a codeword to each message and based on the non - causally known state information , and two decoders that determine the estimated messages and or declare an error from the received sequences .the average probability of error is defined as : where is assumed to be uniformly distributed in .a rate pair of non - negative real values is achievable if there exists a sequence of codes with as .the set of all achievable rate pairs is defined as the capacity region .in this section , we propose two new coding schemes for the dm interference channel with state information non - causally known at both transmitters and present the associated achievable rate regions . for both coding schemes , we jointly deploy rate splitting and gelfand - pinsker coding . in the first coding scheme , we use simultaneous encoding on the sub - messages , while in the second one we apply superposition encoding .now we introduce the following rate region achieved by the first coding scheme , which combines rate splitting and gelfand - pinsker coding .[ theorem_1 ] for a fixed probability distribution , let be the set of all non - negative rate tuple satisfying then for any , the rate pair is achievable for the dm interference channel with state information non - causally known at both transmitters . in the achievable coding scheme for theorem [ theorem_1 ] ,the message at the transmitter is splitted into two parts : the public message and the private message .subsequently , the decoder tries to decode the corresponding messages from the intending transmitter and the public message of the interfering transmitter .furthermore , gelfand - pinsker coding is utilized to help both transmitters send the messages with the non - causal knowledge of the state information . herewe presume that the message pairs are chosen uniformly on the message sets for both transmitters .codebook generation : fix the probability distribution .also define the following function for the user that maps to : where is the element index of each sequence .generate the time - sharing sequence . for the user , is randomly and conditionally independently generated according to , for and .similarly , is randomly and conditionally independently generated according to , for and .encoding : to send the message , the encoder first tries to find the pair such that the following joint typicality holds : and . if successful , is also jointly typical with high probability , and the encoder sends where the element is .if not , the encoder transmits where the element is . 
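as a rough numerical aside ( not part of the proof , and not the rate region of either theorem ) , the gelfand - pinsker quantity I(U;Y) - I(U;S) that the binning in the encoding step is built around can be evaluated for a toy binary state - dependent channel as follows ; every distribution in the snippet is an arbitrary choice made only for illustration .

```python
import itertools
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint distribution given as a 2-d array."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# toy binary model, all distributions chosen arbitrarily for illustration
p_s = np.array([0.5, 0.5])                      # state distribution, known at the encoder
p_ux_s = np.zeros((2, 2, 2))                    # p(u, x | s), indexed [u, x, s]
p_ux_s[:, :, 0] = [[0.45, 0.05], [0.05, 0.45]]
p_ux_s[:, :, 1] = [[0.05, 0.45], [0.45, 0.05]]
p_y_xs = np.zeros((2, 2, 2))                    # p(y | x, s), indexed [y, x, s]
for x, s, y in itertools.product(range(2), repeat=3):
    p_y_xs[y, x, s] = 0.9 if y == (x ^ s) else 0.1   # noisy "output = input xor state"

p_suxy = np.einsum('s,uxs,yxs->suxy', p_s, p_ux_s, p_y_xs)   # joint over (s, u, x, y)
p_su = p_suxy.sum(axis=(2, 3))                  # marginal over (s, u)
p_uy = p_suxy.sum(axis=(0, 2))                  # marginal over (u, y)

# for this particular choice u is independent of s, so the penalty term I(U;S) vanishes
gp_rate = mutual_information(p_uy) - mutual_information(p_su)   # I(U;Y) - I(U;S)
print(round(gp_rate, 4))
```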
decoding : decoder finds the unique message pair such that for some , , , and .if no such unique pair exists , the decoder declares an error .decoder determines the unique message pair in a similar way .analysis of probability of error : here the probability of error is the same for each message pair since the transmitted message pair is chosen with a uniform distribution on the message set . without loss of generality , we assume for user and for user are sent over the channel .first we consider the encoding error probability at transmitter .define the following error events : the probability of the error event can be bounded as follows : where as .therefore , the probability of goes to as if similarly , the probability of can also be upper bounded by an arbitrarily small number as if the encoding error probability at transmitter can be calculated as : which goes to as if and are satisfied .now we consider the error analysis at the decoder .denote the right gelfand - pinsker coding indices chosen by the encoders as and .define the following error events : the probability of can be bounded as follows : where as .obviously , the probability that happens goes to if similarly , the error probability corresponding to the left error events goes to , respectively , if that there are some redundant inequalities in - : is implied by ; is implied by ; is implied by ; is implied by ; , , , , and are implied by . by combining with the error analysis at the encoder , we can recast the rate constraints - as : the error analysis for transmitter and decoder is similar to user and is omitted here .correspondingly , to show the rate constraints for user . in addition , the right hand sides of the inequalities to are guaranteed to be non - negative when choosing the probability distribution . as long as to are satisfied, the probability of error can be bounded by the sum of the error probability at the encoders and the decoders , which goes to as .an explicit description of the achievable rate region can be obtained by applying fourier - motzkin algorithm on our implicit description - .we omit it here due to its high complexity and the space limitation .we now present another coding scheme , which applies superposition encoding for the sub - messages .the achievable rate region is given in the following theorem .[ theorem_2 ] for a fixed probability distribution , let be the set of all non - negative rate tuple satisfying for any , the rate pair is achievable for the dm interference channel defined in section [ sec_2 ] . compared with the first coding scheme , the rate splitting structure is also applied in the achievable scheme of theorem [ theorem_2 ] .the main difference here is that instead of simultaneous encoding , now the private message is superimposed on the public message for the transmitter .gelfand - pinsker coding is also utilized to help the transmitters send both public and private messages .codebook generation : fix the probability distribution .first generate the time - sharing sequence . for the user , is randomly and conditionally independently generated according to , for and .for each , is randomly and conditionally independently generated according to , for and .encoding : to send the message , the encoder first tries to find such that holds .then for this specific , find such that holds .if successful , the encoder sends .if not , the encoder transmits . 
decoding : decoder finds the unique message pair such that for some , , , and .if no such unique pair exists , the decoder declares an error .decoder determines the unique message pair similarly .analysis of probability of error : similar to the proof in theorem [ theorem_1 ] , we assume message and are sent for both transmitters .first we consider the encoding error probability at transmitter .define the following error events : the probability of the error event can be bounded as follows : where as .therefore , the probability of goes to as if similarly , for the previously found typical , the probability of can be upper bounded as follows : where as .therefore , the probability of goes to as if the encoding error probability at transmitter can be calculated as : which goes to as if and are satisfied .now we consider the error analysis at the decoder .denote the right gelfand - pinsker coding indices chosen by the encoders as and .define the following error events : the probability of can be bounded as follows : where as .obviously , the probability that happens goes to if similarly , the error probability corresponding to the left error events goes to , respectively , if that there are some redundant inequalities in - : is implied by ; is implied by ; is implied by ; is implied by ; is implied by ; , , , , , and are implied by . by combining with the error analysis at the encoder , we can recast the rate constraints - as : the error analysis for transmitter and decoder is similar to user and is omitted here . correspondingly , to show the rate constraints for user .furthermore , the right hand sides of the inequalities to are guaranteed to be non - negative when choosing the probability distribution .as long as to are satisfied , the probability of error can be bounded by the sum of the error probability at the encoders and the decoders , which goes to as .the achievable regions in the above theorems are being further studied in several special cases by only deploying gelfand - pinsker coding for the public message or only for the private message at the transmitters .in addition , the application of special coding schemes to the strong ( or weak ) state - dependent ic is also under investigation .it can be easily seen that the achievable rate region in theorem [ theorem_1 ] is a subset of , i.e. , .however , whether these two regions are equivalent is still under investigation .we considered the interference channel with state information non - causally known at both transmitters .two achievable rate regions are established based on two coding schemes with simultaneous encoding and superposition encoding , respectively .
in this paper , we study the state - dependent two - user interference channel , where the state information is non - causally known at both transmitters but unknown to either of the receivers . we propose two coding schemes for the discrete memoryless case : simultaneous encoding for the sub - messages in the first one and superposition encoding in the second one , both with rate splitting and gelfand - pinsker coding . the corresponding achievable rate regions are established .
quantum searching was invented to speed up the searching process in databases .it was realized by hoyer et al and independently by me that this searching was a special case of amplitude amplification whereby the amplitude in a target state could be amplified linearly with the number of operations .this realization considerably increased the power of the algorithm , no longer was it limited to database searching but was applicable to a host of physics and computer science problems .in fact it gave a square - root speedup for almost any classical probabilistic algorithm .the idea behind this speedup was realized later on to be a two dimensional rotation through which the state - vector got driven from the source to a target state through a sequence of small rotations in the two dimensional space defined by the source and the target state .this is easily seen by considering the basic transformation : say then if we calculate , by definition of the operations , it easily follows that [ 0] note that this is true for any unitary it stays true if we replace by which yields : substituting for as in ( [ recursion ] ) and from ( [ amptamp ] ) , it follows that: similarly by recursing multiple times , we can prove the transformation : to be true for large paper gives a new kind of amplitude amplification in which the amplitude in the target grows ; quadratically with the number of iterations . instead of the basic transformationto be we choose to be it follows by using the definitions of & that in case and then the recursion equation of the previous transformation which only depended on , this equation depends on both and and even so we need to investigate how and vary in successive recursions .consider . again assuming and that if we denote , and by assuming all terms to be real and neglecting on the rhs , the above equation may be written as : therefore stays close to 1 for approximately recursions .in i recursions , provided , rises by a factor of approximately therefore in recursions rises by approximately a factor of the number of queries is approximately which is as expected the amplification of is quadratic in the region when is approximately 1 .consider the situation when s , the starting state is an arbitrary basis state and u is the inversion about average transformation .then , assuming there to be n states to be searched , is and is then analyzing the sequence of operations for a few steps- * * * looks something like the search algorithm , which is : however , any similarity is superficial , as we discuss in the following section , this algorithm is _ not _ a rotation of the state vector in two - dimensional hilbert space . 
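as a point of reference for the comparison just made , here is a minimal classical simulation of the standard search iteration ( selective phase inversion of the target followed by inversion about average ) , tracking only the target and non - target amplitudes . this is the baseline algorithm , not the new transformation of this paper ; the value of n and the number of iterations are arbitrary .

```python
import math

def standard_search_target_amplitude(N, iterations):
    """track the target amplitude of the standard search algorithm on N states."""
    target = 1.0 / math.sqrt(N)          # uniform superposition amplitude
    rest = 1.0 / math.sqrt(N)            # amplitude of each non-target state
    history = [target]
    for _ in range(iterations):
        target = -target                 # selective inversion of the target state
        avg = (target + (N - 1) * rest) / N
        target, rest = 2 * avg - target, 2 * avg - rest   # inversion about average
        history.append(target)
    return history

if __name__ == "__main__":
    amps = standard_search_target_amplitude(N=1 << 10, iterations=25)
    # roughly linear growth of the amplitude, ~ (2k+1)/sqrt(N) in the early stages
    print([round(a, 3) for a in amps[:6]])
```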
nevertheless , the dynamics of the algorithm are fairly simple to understand and analyze iteratively : the state just before an inversion about average is described by 3 parameters , the amplitudes in the target state , source state and that in the other states .[ ptb ] fig1.ps the evolution is obtained by the following equations ( a(t ) denotes the average amplitude over all states ) : going to the continuous limit and solving this system of differential equations gives the amplitude in the target state as therefore in iterations , the amplitude in the target state becomes unity .the number of iterations required for searching with certainty is times more than required by the search algorithm .the variation of the amplitude in the target state is in the initial stages ( when is close to 0 ) , the amplitude varies as .as expected , the rate of increase is quadratic .however , once the probability in the target state become significant ( also affecting , the quadratic nature of the increase is destroyed .the algorithm of this paper may be useful in applications where the basic that needs to be amplified is small ( in the above example where the u transform was the inversion about average , was only - whereas in the search algorithm it is about it is possible that there would exist applications where a few applications of this algorithm provided the driving transform for amplitude amplification algorithms .that way , we would get the quadratic speedup plus the flexibility of the amplitude amplification algorithms .one might be tempted to conclude that the above algorithm was a variant of the search algorithm because , overall , it gave a square - root speedup ; also it consists of similar sequences of unitary transformations ( [ seq ] ) . however , that is not the case .the chief characteristic of the search algorithm and all its variants ( amplitude amplification algorithms ) was a rotation of the state vector in appropriately defined two dimensional space .the algorithm of this paper needs more than two dimensions to operate in . to see thisconsider the basic recursion equation ( [ recursion ] ) used to develop the algorithm : given large & and we had argued that was amplified significantly in each recursion . in order to satisfy this condition needs more than two dimensions .this is because if there were only two dimensions it would follow from unitarity of that =0 ( any two columns of a unitary matrix are orthogonal ) - therefore , additional dimensions are necessary .the above algorithm gives a quadratic amplification under certain conditions.the quadratic amplification offers something new beyond the search algorithm , even though it is not as universally applicable . to borrow a term from analog amplifiers: this only has a limited dynamic range - outside of this range it has to be supplemented by other more robust algorithms . just as ( , ), this algorithm provides yet another tool in the quantum algorithm designer s toolkit . whereas , ( ) is independently useful to design quantum algorithms , the fixed point algorithms & the algorithm of this paper may be useful in combination with other algorithms - ( ( , ) ) to improve the robustness and the algorithm of this paper to increase the amplification in selected ranges . 
as described in the _ observations _ section , the algorithm of this paper may be useful in conjunction with the standard quantum search algorithm . this is somewhat similar in spirit to applications where the search algorithm is combined with a classical algorithm . one such example is the counting algorithm , where one gets around the cyclical nature of the search algorithm by making appropriately timed observations ( which play the role of the classical algorithm ) . the counting algorithm is not usually looked at this way , but in the context of robustness versus speed it is insightful to view it as a combination of classical and quantum search algorithms .
quantum search / amplitude amplification algorithms are designed to amplify the amplitude in a target state linearly with the number of operations . since the probability is the square of the amplitude , this results in the success probability rising quadratically with the number of operations . this paper presents a new kind of quantum search algorithm in which the amplitude of the target state itself increases quadratically with the number of operations . however , the domain of application of this algorithm is much more limited than that of standard amplitude amplification .
lyapunov characteristic exponents ( lce ) measure the rate of exponential divergence between neighbouring trajectories in the phase space .the standard method of calculation of lce for dynamical systems is based on the variational equations of the system .however , solving these equations is very difficult or impossible so the determination of lce also needs to be carried out numerically rather than analytically .the most popular methods which are used as an effective numerical tool to calculate the lyapunov spectrum for smooth systems relies on periodic gram - schmidt orthonormalisation of lyapunov vectors ( solutions of the variational equation ) to avoid misalignment of all the vectors along the direction of maximal expansion ( , ) . in some approaches , usually involving a new differential equation instead of the variational one , the procedure of re - orthonormalisation is not used these are usually called the continuous methods .they are usually found to be slower than the standard ones due to the underlying equation being more complex than the variational one .a comparison of various methods with and without orthogonalisation can be found in and a recent general review in .the main goal of this paper is to present a new algorithm for obtaining the lce spectrum without the rescaling and realigning .this application is a consequence of the equation satisfied by the lyapunov matrix or operator ( see below ) which was discovered in one of the authors phd thesis .the particular numerical technique introduced here is the first attempt and is open to further development so it still bears the disadvantages of the usual continuous methods .however , in our opinion , the main advantages of the approach lie in its founding equations and are as follows : * the whole description of the lce is embedded in differential geometry from the very beginning , so that and it is straightforward to assign any metric to the phase space including one with non - trivial curvature .* as the rate of growth is described by an operator ( endomorphism ) on the tangents space , and the equation it satisfies is readily expressed with the absolute derivative , the approach is explicitly covariant .( the exponents are obviously invariants then , although their transformation properties still seem to be a live issue , see e.g. . ) * there is no need for rescaling and realignment , as the main matrix is at most linear with time , and encodes the full spectrum of lce . *since we make no assumptions on the eigenvalues , there are no problems with the degenerate case encountered in some other methods .* we rely on a single coordinate - free matrix equation , which reduces the method s overall complexity . 
*the fundamental equation is not an approximation but rather the differential equation satisfied by the so - called time - dependent lyapunov exponents .this opens potential way to analytic studies of the exponents .it should be noted that the last points imply a hidden cost ( in the current implementation ) of diagonalising instead of reorthonormalising , due to the complex matrix functions involved .fortunately , this procedure needs to be carried out on symmetric matrices for which it is stable .the natural domain of application of this method might be the general relativity and dynamical systems of cosmological origin already formulated in differential geometric language .of course this still requires the resolution of the question of the time parameter , and natural metric in the whole phase space ( not just the configuration space which corresponds to the physical space - time ) .regardless of that choice , however , the fundamental equation of our method will remain the same whether one chooses to consider the proper interval as the time parameter , or find some external time for an eight dimensional phase space associated to the four dimensional space - time .this stems from the fact that our approach works on any manifold . here, we wish to focus on the numerical aspect of the method , providing the rough first estimates of its effectiveness .this is a natural question , after the theoretical motivation for a given method has been established , namely how well it performs numerically .there are obviously many ways of translating the method into code , and we hope for future improvements , nevertheless , the presented implementation can be considered a complete , ready - to - use tool . in the next sections we review the derivation of the main equation andthen proceed to the simple mechanical examples for testing and results .for a given system of ordinary differential equations the variational equations along a particular solution are defined as and the largest lyapunov exponent can be defined as for almost all initial conditions of . from now on we take the norm to be where is treated as a column vector , and denotes transposition . that is to say the metric in the tangent space is euclidean , as is usually assumed for a given physical systems .this needs not be the case , and a fully covariant derivation of the main equation can be found in .the above definitions are intuitively based on the fact that for a constant , the solution of is of the form and is the greatest real part of the eigenvalues of . in the simplest case of a symmetric ,the largest exponent is exactly the largest eigenvalue . to extend this to the whole spectrum, we note that any solution of is given in terms of the fundamental matrix so that then ( if the limit exists ) , the exponents are since is a symmetric matrix with non - negative eigenvalues , the logarithm is well defined .the additional factor of 2 in the denominator results from the square root in the definition of the norm above . as we expect to diverge exponentially, there is no point in integrating the variational equation in itself , but rather to look at the logarithm . to this endwe introduce the two matrices and : clearly has the same eigenvalues as to which it is connected by a similarity transformation , and the eigenvalues of behave as for large times that is why we call the lyapunov matrix . 
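as a quick numerical illustration of this definition , the sketch below takes a constant coefficient matrix ( so the fundamental matrix is a plain matrix exponential ) , forms the symmetric product of the fundamental matrix with its transpose , and checks that half the logarithms of its eigenvalues , divided by t , approach the real parts of the eigenvalues of the coefficient matrix . the particular 3x3 matrix and the times are arbitrary choices for the demonstration .

```python
import numpy as np
from scipy.linalg import expm

# linear test system x' = B x; its Lyapunov exponents are the real parts of eig(B)
B = np.array([[ 0.25, 1.00, 0.00],
              [-1.00, 0.25, 0.00],
              [ 0.30, 0.00, -0.40]])   # illustrative choice: exponents 0.25, 0.25, -0.4

for t in (5.0, 10.0, 20.0):
    A = expm(B * t)                    # fundamental matrix of the variational equation
    w = np.linalg.eigvalsh(A @ A.T)    # A A^T is symmetric positive definite
    lce_t = 0.5 * np.log(w) / t        # eigenvalues of the Lyapunov matrix, divided by t
    print(t, np.round(np.sort(lce_t)[::-1], 4))
# the printed triples approach (0.25, 0.25, -0.4) as t grows
```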
to derive the differential equation satisfied by we start with the derivative of and use the property of the matrix ( operator ) exponential where we have introduced a concise notation for the the adjoint of acting on any matrix as ,\ ] ] and used its property next , the integralis evaluated taking the integrand as a formal power series in where the fraction is understood as a power series also , so that there are in fact no negative powers of .alternatively one could justify the above by stating that the function is well behaved on the spectrum of which is contained in . as is never zero for a real argument , we can invert the operator on the left - hand side of to get where the symmetric and antisymmetric parts of are this allows for the final simplification to \label{main_eq}\ ] ] the function should be understood as the appropriate limit at , so that it is well behaved for all real arguments . as was proven in ,the above equation is essentially the same in general coordinates : ,\ ] ] where is the vector field associated with , and is the covariant derivative .note that in this form it is especially easy to obtain the known result for the sum of the exponents .since trace of any commutator is zero , the only term left is the `` constant '' term of which is 1 ( or rather the identity operator ) so that where is the volume of the parallelopiped formed by independent variation vectors .another simple consequence occurs when the matrix is zero , the whole equation becomes a lax equation ,\ ] ] which preserves the spectrum over time , so that tends to zero at infinity .another way of looking at it is that it is a linear equation in and the matrix of coefficients is antisymmetric in the adjoint representation , so that the evolution is orthogonal and the matrix norm ( frobenius norm to be exact ) of is constant which means tends to the zero matrix .the simplest example of this is the harmonic oscillator or any critical point of the centre type .the variations are then vectors of constant length and the evolution becomes a pure rotation .the authors are not aware of any complex or non - linear system that would exhibit such simple behaviour . already for the mathematical pendulumsuch picture is achieved only asymptotically for solutions around its stable critical point .one could expect that a system with identically zero exponents might not be `` interesting '' enough to incur this kind of research .we have thus arrived at a dynamical system determining , with right - hand side being given as operations of the adjoint of on time dependent ( through the particular solution ) matrices and .the next section deals with the practical application of the above equation .the main difficulty in using is the evaluation of the function of the adjoint operator . since we will be integrating the equation to obtain the elements of the matrix , it would be best to have the right - hand side as an explicit expression in those elements .this can be done for the case , but already for one has dozens of terms on the right , and for higher dimensions the number of terms is simply too large for such an approach to be of practical value . 
an alternative( although equivalent ) dynamical system formulation for the mentioned low dimensionalities have been studied in , but again the complexity of the equations increases so fast with the dimension that the practical value is questionable .our method , on the other hand , can be made to rely on the same equation for all dimensions , and the only complexity encountered will be the diagonalisation of a symmetric matrix of increasing size .another problem lies in the properties of the function which , although finite for real arguments , has poles at .this means that a series approximation is useless , as it would converge only for eigenvalues smaller than in absolute value , whereas we expect them to grow linearly with time and need the results for . on the other hand , for large but , unfortunately , the adjoint operator always has eigenvalues equal to zero , and for hamiltonian systems it is also expected that two eigenvalues tend to zero .thus , as we require the knowledge of for virtually any symmetric matrix , and we are going to integrate the equation numerically anyway , we will resort to numerical method for this problem . because the matrix is symmetric ( hermitian in an appropriate setting )so is its adjoint , and the best numerical procedure to evaluate its functions is by direct diagonalisation .obviously this is the main disadvantage of the implementation method as even for symmetric matrices , finding the eigenvalues and all the eigenvectors is time - consuming .so far the authors have only been able to find one alternative routine which is to numerically integrate not itself but rather the diagonal matrix of its eigenvalues and the accompanying transformation matrix of eigenvectors .however , due to the increased number of matrix multiplications the latter method does not seem any faster than the former . with this in mind, let us see how the diagonal form of simplifies the equation .first , we need to regard as an operator , and since it is acting on matrices we will adopt a representation where any matrix becomes a matrix , i.e. a element vector constructed by writing all the elements of successive rows as one column . is then a matrix .fortunately , one does not need to diagonalise but only itself . as can be found by direct calculation, the eigenvalues of the adjoin are all the differences of the eigenvalues of .for example where the subscript denotes `` diagonal ''. now let us assume we also have the transformation matrix such that then , instead of bringing the whole equation to the eigenbasis of , one can only deal with the matrix in the following way of course , the other term of equation can be evaluated as the standard commutator . for the above examplethe part would be and converting to a vector we get which corresponds to the usual 2 by 2 matrix of where are the differences of the eigenvalues of . 
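this eigenbasis route can be verified directly on a small random example . in the sketch below the scalar function is only a placeholder with a finite limit at zero , standing in for the function that enters the main equation , and the symmetric matrix and the matrix it acts on are random stand - ins ; the fast route ( diagonalise once , multiply elementwise by the function of the eigenvalue differences , transform back ) is compared against a brute - force construction of the adjoint as an n^2 x n^2 matrix .

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
S = rng.normal(size=(n, n)); S = 0.5 * (S + S.T)     # random symmetric matrix (stand-in)
Theta = rng.normal(size=(n, n))                      # matrix the adjoint function acts on (stand-in)

def psi(x):
    """Placeholder scalar function with a finite limit at 0 (here x / (exp(x) - 1))."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    mask = np.abs(x) > 1e-12
    out[mask] = x[mask] / np.expm1(x[mask])
    return out

# fast route: diagonalise S once, apply psi to the eigenvalue differences elementwise
d, T = np.linalg.eigh(S)
Theta_eig = T.T @ Theta @ T
F_fast = T @ (psi(d[:, None] - d[None, :]) * Theta_eig) @ T.T

# brute force: build ad_S as an n^2 x n^2 matrix (row-major vec convention), apply psi to it
AD = np.kron(S, np.eye(n)) - np.kron(np.eye(n), S)   # ad_S[X] = S X - X S
w, V = np.linalg.eigh(AD)                            # AD is symmetric, so eigh applies
F_direct = (V @ (psi(w) * (V.T @ Theta.reshape(-1)))).reshape(n, n)

print(np.allclose(F_fast, F_direct))                 # True: both routes agree
```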
in general the appropriate ( ) matrix elements read {ij } = \psi_2(\delta_{ij})\tilde\theta_{ij},\ ] ] and this matrix needs to be transformed back to the original basis according to before being used in the main equation .the matrix elements of will , in general , grow linearly with time .this is of course a huge reduction when compared with the exponential growth of the perturbations , but one might want to make them behave even better by taking time into account with the reason for the additional 1 is that we will specify the initial conditions at and we want to avoid dividing by zero in the numeric procedure and the limit at infinity ( if it exists ) is not affected by this change . of course ,in the case of the autonomous systems any value of can be chosen as initial ( at the level of the particular solution ) but the non - autonomous case might require a particular value , which can be dealt with in a similar manner .we now have .\ ] ] as for the initial conditions , the fundamental matrix is equal to the identity matrix at , so that which in turn implies . we are now ready to state the general steps of the proposed implementation . choosing a specific numerical routine to obtain the solution for a time step at each these are 1 .obtain the particular solution , calculate the jacobian matrix at and , from it , the two matrices and .start with 2 .find the eigenvalues and eigenvectors of 3 .transform to .4 . compute the auxiliary matrix {ij } = \psi_2((t+1)\delta_{ij})\tilde\theta_{ij} ] , and use it to integrate the solution to the next step .repeat steps 25 until large enough time is reached . 7 .the `` time - dependent '' lyapunov exponents at time are simply .the relation in the last point stems from the rescaling and , as can be seen , is only important for small values of .as suggested in the preceding section , one expects this method of obtaining the lyapunov spectrum to be relatively slow . in order to see that , we decided to compare it with the standard algorithm based on direct integration of the variational equation and gram - schmidt rescaling at each step .although usually the rescaling is required after times of order 1 , we particularly wish to study a system with increasing speed of oscillations and frequent renormalisation will become a necessity . in other words , we want to give the standard method a `` head start '' when it comes to precision .for numerical integration we chose the modified midpoint method ( with 4 divisions of the whole timestep ) , which has the advantage of evaluating the derivative fewer times than the standard runge - kutta routine with the same accuracy .since we intended a simple comparison on equal footing , we did not try to optimise either of the algorithms and wrote the whole code in wolfram s mathematica due to the ease of manipulation of the involved quantities and operations ( e.g. matrices and outer products ) .the most straightforward comparison is for the simplest , i.e. linear dynamical system , for which the main equation is the same as the variational one , with constant matrix .the exponents are then known to be the real parts of the eigenvalues of . 
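for reference , the standard algorithm used in this comparison can be sketched in a few lines . the constant matrix below is again an arbitrary example ( not the block matrix of the next paragraph ) , so the estimated exponents must approach the real parts of its eigenvalues ; the step size and the integration time are illustrative .

```python
import numpy as np

# Benettin-type reference: integrate the variational equation for an orthonormal frame and
# re-orthonormalise (QR) at every step, accumulating log|diag(R)|.
# B is an arbitrary constant example, so the exponents must equal the real parts of eig(B).
B = np.array([[ 0.1, 1.0, 0.0],
              [-1.0, 0.1, 0.0],
              [ 0.2, 0.0, -0.3]])      # real parts of the eigenvalues: 0.1, 0.1, -0.3
dt, steps = 0.02, 100_000

Q = np.eye(3)
lsum = np.zeros(3)
for _ in range(steps):
    k1 = B @ Q                         # one RK4 step of V' = B V applied to the frame Q
    k2 = B @ (Q + 0.5 * dt * k1)
    k3 = B @ (Q + 0.5 * dt * k2)
    k4 = B @ (Q + dt * k3)
    Q, R = np.linalg.qr(Q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    lsum += np.log(np.abs(np.diag(R)))

print(np.round(lsum / (steps * dt), 4))   # approaches (0.1, 0.1, -0.3)
```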
to include all kinds of behaviour, we took a matrix which has a block form whose eigenvalues are , so that the lce are approximately equal to .the basic timestep was taken to be and the time to run from 0 to 1000 .the results are depicted in figures [ const_1 ] and [ const_2 ] with the horizontal axis representing the inverse time so that the sought for limit at infinity becomes the value at zero which is often clearly seen from the trend of the curves .we note that our method required 59 seconds , whereas the standard one only took 17 seconds .the final values of lce ( ) were and , respectively .the shape of the curves is different , which is to be expected because the matrix measures the true growth of the variation vectors at each point of time , and the other method provides more and more accurate approximations to the limit values of the spectrum .before we go on to the central example , which explores a system with accelerating oscillations , we will present numerical results for the system which is synonymous with chaos , namely the lorenz system .the equations read where we took , and , and integrated the equation for the initial conditions of from to 1000 .next we integrated the respective methods for the exponents with the timestep of .our method took about 591 seconds and the result is shown in figure [ lorenz_1 ] with the final value of the spectrum .note that we have shifted the lowest exponent by , so that all three could be presented on the same plot with enough detail .the standard method took about 152 seconds and its outcome is shown in figure [ lorenz_2 ] with the final values of ( we have shifted the graph in the same manner as before ) .one could note that there is less overall variation of the time - dependent exponents in our method similarly to the previous example .a good estimate of precision would be to calculate the sum of the exponents , which , in this system , should be exactly equal to -41/3 .the difference between that and the numerical estimates were : for the standard method ( at ) , and for our method a much better result .this seems to be the usual picture for the continuous methods which trade computing time for precision . a presentation of some more complicated , including both integrable and chaotic, examples can be found in , and for such systems also , we observe the concordance of final results and the speed discrepancy . in order to see how one could benefit from the new method we have to turn to another class of models , ones for which the exponents present oscillatory behaviour .we found that for artificial systems with accelerated oscillations our method performs better and present here a simple physical model which exhibits such property .consider a ball moving between two walls which are moving towards each other and assume that the ball bounces off with perfect elasticity .as the distance between the walls decreases and the speed of the ball increases it takes less and less time for each bounce cycle . 
in order to model thisanalytically , without resorting to infinite square potential well we will take the following hamiltonian system which depends on time explicitly via the function whose meaning is seen as follows : the area hyperbolic tangent is infinite for , so that the potential becomes infinite for the position variable , so that is simply half the distance between the walls .in particular we take it to be so that it decreases from 1 to slowing down but never stopping .the reason for this is that we want the system not to end in a finite time , and also that the worse behaved is the faster the numerical integration of the main system itself will fail .the initial shape of the potential and the systems setup is depicted in figure [ potential ] .( the vertical position of the ball has no meaning . ) ] the slowing down of the walls , and the particular shape of allows us to find rigorous bounds on the lyapunov exponents .first we note , that the vector tangent to a trajectory in the phase space , i.e. a vector whose components are simply the components of from , is always ( for any dynamical system ) a solution of the variational equation .that means that just by measuring its length we can estimate the largest exponent , since for almost all initial conditions the resulting evolution is dominated by the largest exponent .second , the system is hamiltonian and it must have two exponents of the same magnitude but different signs , so analysing this particular vector will give us all the information regardless of the initial condition .thus we have to find the following quantity as a function of time , which boils down to finding bounds on the velocity and acceleration of the bouncing ball . as the hamiltonian depends on time explicitly ,the energy is not conserved , but instead we have the velocity has its local maxima at when all the energy is in the kinetic term , and we are lead to define a virtual maximal velocity by equating the energy at any given time to a kinetic term we will take the positive sign of , and assume it is non - decreasing as the physical setup suggest . 
differentiating the above weget let us go back to the equation of motion for the momentum variable which reads and substitute that into the previous equation to get as mentioned above decreases very slowly at late times , which is when we estimate the exponents anyway .we thus assume , that is small enough for the fraction on the right - hand side to be considered constant over one cycle that is over the time in which the ball moves from the centre up the potential wall and back to the centre .this time will get shorter and shorter , but also will get closer and closer to zero .the standard problem of the elastic ball and infinitely hard walls shows that the speed transfer at each bounce is of the order of so the ( virtual ) maximal velocity will change as slowly as and we are entitled to average the equations over one cycle : ^{t+t } - \int_t^{t+t } p^2\mathrm{d}t\right ) \leq -\frac{\dot{f}}{f}v_m^2,\ ] ] where we integrated by parts and used the fact that at the beginning and end of the cycle , and also that the momentum is never greater than the maximal velocity .we are thus left with a bound in the form of a differential equation , and since all the quantities involved are positive and non - decreasing ( with the `` initial '' value of taken at sufficiently large so that is small .similar considerations can be carried out for the acceleration , only this time we have to introduce a virtual points of return , that is the point at which all the energy is in the potential term since at the real turning points the acceleration reaches its local maxima .note that this is not the same as which describes the slow growth of the consecutive maxima of and not the maxima of its slope .this definition allows us to express as a function of ( via energy ) and the acceleration is bringing the two results together we see that because the function does not tend to zero , and by both lyapunov exponents must in turn be zero themselves .we also recognise that the hyperbolic cosine factor in could produce a nonzero exponents if were to decrease to zero as .let us now turn to the numerical results of both methods for this system .as initial conditions we take and .for the same time step we see in figure [ wall_1 ] that the new method predicts the values correctly , integrating for 59 seconds from to .however for the standard one , as shown in figure [ wall_2a ] , we see the exponents diverging from zero , for the same time limits , and integrating time of 17 seconds .it turns out also gives divergent results and the correct behaviour is recovered for , shown in figure [ wall_2b ] , for which the routine takes 171 seconds . ) . ] ) . ] ) . ]we have presented a new algorithm for evaluation of the lyapunov spectrum , emerging in the context of differential geometric description of complex dynamical systems .this description seems especially suitable for systems found in general relativity like , e.g. 
, chaotic geodesic motion . the main advantage of the base method is its covariant nature and its concise , albeit explicit , matrix equations , which promise more analytic results in the future . it also allows for the study of curved phase spaces and of general dynamical systems , not only autonomous or hamiltonian ones . the main differential equation can be numerically integrated , giving a simple , immediate algorithm for the computation of the lyapunov characteristic exponents . it is in general slower than the standard algorithm ( based on gram - schmidt orthogonalisation ) , but the first numerical tests suggest it works better in systems with increasing frequency of ( pseudo-)oscillations . we show this on the example of a simple mechanical system : a ball bouncing between two contracting walls . although in low dimensions the main equation can be cast into an explicit form ( with respect to the unknown variables ) , in general the numerical integration requires diagonalisation at each step , which is the main disadvantage of the method and the reason for its low speed . we hope to present a more developed algorithm without this problem in the future . this paper was supported by grant no . n n202 2126 33 of the ministry of science and higher education of poland . the authors would also like to thank j. jurkiewicz and p. perlikowski for valuable discussions and remarks . g. benettin , l. galgani , a. giorgilli and j. m. strelcyn , `` lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems ; a method for computing all of them . part 1 : theory , '' meccanica * 15 * , 9 - 20 ( 1980 ) .
we present a new algorithm for computing the spectrum of lyapunov exponents based on a matrix differential equation . the approach belongs to the so - called continuous type , where the rate of expansion of perturbations is obtained for all times and the exponents are reached as the limit at infinity . it does not involve exponentially divergent quantities , so there is no need for rescaling or realigning the solution . we show the algorithm s advantages and drawbacks using mainly the example of a particle moving between two contracting walls .
newtonian mechanics originates only from those trajectories in spacetime , which make the action an extremum .when we free ourselves from this restriction and allow all trajectories , we arrive at quantum mechanics provided we introduce another selection criterion : although all trajectories contribute in a democratic way , that is with equal weight , each trajectory carries a phase determined by the action along this path . in this way we obtain the path - integral formulation of quantum mechanics pioneered by richard p. feynman . more freedom combined with a new rule in this enlarged realm gives birth to a new theory , which transcends the original one . on a much more elementary level, the factorization of numbers with the help of gauss sums displays a similar phenomenon .so far , we have restricted our analysis of these sums to integer arguments .this requirement has provided us with a tool for the unique identification of the factors . in the present paperwe allow more freedom and consider these sums at rational numbers .needless to say , we immediately loose the possibility of identifying factors . however , when we introduce new rules for extracting factors the enlarged space of rational numbers enables us to factor numbers more efficiently .factorization of large integers is an important problem in network and security systems .the most promising development in this domain is the shor algorithm , which uses the entanglement of quantum systems to obtain an exponential speed up .recently a different route towards factorization was proposed .based solely on interference , an analogue computer evaluates a gauss sum .several experimental implementations have been suggested . in the meantimegauss sum factorization has been demonstrated experimentally in various systems ranging from nmr techniques , cold atoms , ultra short laser pulses to classical light interferometry .the largest number factored so far had 17 digits .all experiments performed so far implement the truncated gauss sum where is the number to be factored and the integer represents a trial factor .the general idea of this approach is to find an observable of a physical system such as a spin , or the internal states of an atom , which is given by such a gauss sum .obviously a requirement for the successful application of gauss sum factorization is that the parameters , and can be chosen at will . in this way, we implement an analogue computer , that is the system calculates for us the truncated gauss sum . unfortunately the experiment does not provide us directly with the factors but only with a `` yes '' or `` no '' answer to the question if is a factor .indeed , the result of the experiment is : `` is a factor of '' or `` is not a factor of '' as a consequence , we have to check every prime number up to .most of the experiments performed so far in the realm of gauss sum factorization had to check every trial factor individually .however , there already exists an experiment with classical light , where the gauss sum is estimated simultaneously for every trial factor . here the number is encoded in the wavelength of the light and we deduce the factor of from the spectrum of the light . in this way, we have the opportunity to obtain all factors simultaneously .however , as often in life , every opportunity is accompanied by new challenges .indeed , the spectrum in the experiment of ref. 
is not represented by the gauss sum at integer values of only .it is determined by the gauss sum evaluated along the real axis .so far , we have only used the gauss sum of integers to factor . as a consequencewe throw away a lot of information contained in the full spectrum .the goal of the present paper is to make the first step towards factorization with gauss sums at real numbers : we discuss factorization with truncated gauss sum at rational arguments .our paper is organized as follows . in section [ sec : overview ]we briefly summarize the theoretical and experimental work on gauss sums .then we analyse in section [ sec : congauss ] the behavior of the truncated gauss sum for rational arguments .our main interest is to obtain an answer to the following question : is it possible to extract information about the factors of from peaks of the gauss sum located at rational arguments .we show , that the answer is an emphatical `` yes ! '' by providing two methods .we conclude in section [ sec : summary ] by a brief summary of our results .gauss sums come in a large variety of forms and have already been analyzed in their possibilities to factor numbers . in this section ,we give a brief introduction into the different types of gauss sums and how to obtain factors from them .we conclude by summarizing the main theoretical and experimental results obtained so far .the gauss sum \label{eq : s(xi)}\ ] ] with the weight factors depends on the continuous argument and contains an infinite number of terms .we obtain the factors of by testing if a maximum of this function is located at an integer value of as shown in fig.[fig : s(xi ) ] .the sum emerges in the context of wave packet dynamics in the form of the autocorrelation function , or the excitation of a multi - level atom with chirped pulses . using the continuous gauss sum defined by eq.([eq : s(xi ) ] ) . on the top we present in its dependence on for . on the bottomwe magnify in the vicinity of candidate primes .the pronounced maxima at the factors are clearly visible .in contrast , at non - factors the signal does not show any peculiarities . here , we have used gaussian weight factors centered at of width . , scaledwidth=100.0% ] the gauss sum \label{eq : st_gauss}\ ] ] follows from the continuous gauss sum eq.([eq : s(xi ) ] ) , when we restrict the argument to integer values .since =1 ] are periodic in with period , that is . for the factorization of numbers ,it is useful to introduce the scaled square of the standard gauss sum the standard gauss sum can be calculated analytically . due to the division of by find the result which gives us three possibilities to find a factor of : \(i ) if , that is is an integer multiple of a factor , is larger than , whereas for all other the function is equal to unity .( ii)the value of is a factor of , if is an integer multiple of this factor , that is .\(iii ) the function is periodic with the period .+ in fig.[fig : st_gauss ] we illustrate this technique for .we note that the function displays maxima at integer multiples of and with the values and . using the scaled square of the standard gauss sum defined by eqs.([eq : g(l , n ) ] ) and ( [ eq : g_n ] ) .in contrast to fig.[fig : s(xi ) ] , were we had the continuous variable we now restrict ourselves to discrete arguments .we recognize dominant maxima at integer multiples of the factors and .moreover , we note the relation for integer and the factor of .indeed , the value of at an integer multiple of a factor is the factor . 
]the gauss sum \label{eq : trunc_gauss}\ ] ] which is most popular in the context of experiments is again defined for integer arguments but uses the fraction , that is the reciprocate of the ratio appearing in .morover , in the summation extends only over terms , which is not necessarily identical to . for this reason ,this gauss sum carries the name `` truncated gauss sum '' .the truncated gauss sum is of special interest , because it needs only a few summands to distinguish between factors and nonfactors .provided the minimum number of terms grows with {n} ] .( ii)the second method uses randomly chosen phases instead of m consecutive phases . with this methodthe minimum number of terms necessary to factor a number grows only logarithmically . in this way it was possible to factor a 17-digits number with only 10 pulses .so far , we have only considered the truncated gauss sum at integer arguments .this restriction was motivated by the fact , that is equal to unity , if and only if the integer trial factor is a factor of . in the present sectionwe show that we can also gain information about the factors of from gauss sums at rational numbers .such an extension of the theory has been made necessary by a recent experiment where all trial factors are checked simultaneously .this experiment involves the truncated gauss sum not only for integer but also for rational numbers .indeed , the generalized truncated gauss sum for the continuous variable assumes the value for rational numbers of the form and with s integer and factor of .the resulting peaks have the information about the factors encoded either in their location , or in their frequency of appearance , or do not contain any information at all . obviously the maxima at contain information since their locations are proportional to the factors .since the peaks at are independent of , they can not give any hints about factors of .unfortunately , the analysis of the peaks at is not that straight forward. nevertheless , we show in the next section , that it is possible to obtain information about the factors . the problem with the truncated gauss sum at rational arguments as a tool to factor numbers originates from the existence of additional peaks located at and .on first sight , these maxima do not seem to give us any information about the factors of .however , in the present section , we outline two methods , that allow us to extract the factors from these additional peaks .if the gauss sum would not contain the peaks at or , we could determine the factors immediately from the maxima at .all we have to do in this case , is to search for a datapoint , where .the value will then give us a factor of .hence , we have to develop a strategy how to eliminate unwanted peaks and/or identify those peaks which contain useful information . from the peaks at can not learn anything , since does not contain any information about the factors .fortunately , it is easy to ignore those peaks , since they are all located in the domain . for this reason ,we restrict our domain of interest to arguments .we now have to select from the forrest of peaks arising for the ones , which carry information about the factors . 
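a compact numerical sketch makes both the integer criterion of the previous section and these extra peaks explicit . the conventions for the sums , the example number n = 21 and the truncation are assumptions chosen for the illustration .

```python
import numpy as np

N = 21                           # example number to factor, 3 * 7 (illustrative choice)

# (a) complete quadratic Gauss sum, one common convention, with a possible normalisation:
#     for this odd N the scaled square reproduces gcd(l, N), i.e. it equals 1 at
#     non-factors and jumps to the factor at multiples of 3 or 7
m_full = np.arange(N)
scaled = np.array([abs(np.exp(2j * np.pi * l * m_full**2 / N).sum())**2 / N
                   for l in range(1, N + 1)])
print(np.round(scaled, 3))
print(np.allclose(scaled, np.gcd(np.arange(1, N + 1), N)))   # True for this example

# (b) truncated Gauss sum with M + 1 terms (convention and truncation are assumptions)
M = 6
m = np.arange(M + 1)

def trunc_gauss(x):
    x = np.atleast_1d(x).astype(float)
    return np.abs(np.exp(2j * np.pi * np.outer(1.0 / x, m**2) * N).mean(axis=1))

# at integer trial factors the modulus reaches 1 only at the divisors 3 and 7 ...
ints = np.arange(2, 11)
print([int(k) for k, a in zip(ints, trunc_gauss(ints)) if a > 0.99])

# ... but at the rational points N / k it equals 1 as well: the extra peaks discussed above
ks = np.arange(2, 7)
print(np.round(trunc_gauss(N / ks), 6))
```

selecting which of these unit - modulus points actually point to factors is the task addressed by the two methods that follow .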
for this purpose, we discretize the curve obtained from the gauss sum with continuous argument by considering values of , which are integer multiples of the minimal step size .moreover , we confine the measurement range to .this domain must contain peaks at , but there will be no peaks at .the following consideration explains this observation .we assume , that at a given multiple integer of the step size ,there exists a peak , which can be attributed to the ration , where and do not share a commen factor . as a result , we have the identity since does not share a factor with and is an integer , has to be a divisor of .however , this fact implies the inequality and from eq.([eq:11 ] ) we find .this inequality is in contradiction with our assumption of the domain which provides us with the inequality , that is . at rational arguments , with and integer , illustrated by the example .here we have confined ourselves to the range and have chosen a step size . despite of the fact that there are no factors in this domain , the gauss sum assumes the value unity at the positions and , corresponding to the maxima at where is a factor of .this analysis immediately suggests the factors and . due to the reduced domain of interest and the discreteness of the observation points, we do not observe the peaks from arguments or except the one located at .,scaledwidth=80.0% ] the reduction of the interval together with the discretization of the arguments of the truncated gauss sum ensure that all peaks appearing in the so - obtained diagram carry the information about the factors of .figure [ fig : gauss_kont ] illustrates this method .we conclude this discussion by recalling that in the standard approach of factoring numbers using gauss sums at integer values , we have to search for factors at prime numbers smaller than . in the worst case, we have to test all prime numbers up to . in the present approach, we search for factors at integer multiples of in an interval up to . as a result , the present method scales in the same way as the standard one .however , the interval in which we perform the search is much smaller .it is in this sense that factorization with gauss sums at rationa arguments might be more efficient than the corresponding one at integers .moreover , it is appealing to use the full information contained in the gauss sum .we now propose a second method to take advantage of the additional peaks in the gauss sum at rational numbers .for this purpose we analyse the series and and note , that is contained in . in other words , because is factor of , some numbers can be expressed by . the number of different representations of the fraction depends of course on the factors of . in figure[ fig : haeufigkeit ] we show for the example of the degree of degeneracy of these different representations of the same number , providing information about the factors .indeed , the periods contained in are the factors .we conclude by briefly addressing the question of how to implement the degree of degeneracy of ratios in a quantum system .many ideas offer themselves . in this contextit suffices to name at least one . for this purposewe recall the phenomenon of quantum carpets . 
herestructures appear in the spacetime representation of a schrdinger particle moving between two hard walls .indeed , the design of a quantum carpet has its origin in the degeneracy of the so - called intermode traces .it is only a small additional step to conjecture that the degeneracy of the ratios manifests itself in different steepnesses of the canals .however , a more detailed analysis goes beyond the scope of the present paper and we refer to a future publication . with the help of the number of degeneracies of the truncated gauss sum at rational arguments . for function assumes the value unity at the rational numbers and , marked on the corresponding axes by rhombs , squares and full dots , respectively .fractions , which are connected with factors allow several representations . indeed ,the ratios and display the degree of degeneracy as indicated in the top by the full dot in the square .only the ratio is identical to and , giving rise to .the degree of degeneracies displays a double periodicity determined by the two factors . for example , the squares with the dots have the period , whereas the rhombs with the dots show the period .,scaledwidth=100.0% ]the idea of factoring numbers using truncated gauss sums relies on the fact , that these sums assume the value unity if and only if the test factor is indeed a factor .however , this one - to - one relation is only true when we restrict the arguments of the truncated gauss sum to integer values .when we give up this restriction , we loose the possibility of uniquely identifying factors . in this casethe truncated gauss sum takes on the value of unity at many non - integer values , which obviously can not be factors .however , this additional wealth of maxima also opens up new possibilities of factoring numbers . in the present paperwe have introduced two methods , that allow us to obtain with the help of these on first sight useless peaks additional information about the factors .we thank m. jakob , m. tefak and m. s.zubairy for many fruitful discussions on this topic .this research was partially supported by the max planck prize of wps awarded by the humboldt foundation and the max planck society .
factorization of numbers with the help of gauss sums relies on an intimate relationship between the maxima of these functions and the factors . indeed , when we restrict ourselves to integer arguments of the gauss sum we profit from a one - to - one relationship . as a result , the identification of factors by the maxima is unique . however , for non - integer arguments such as rational numbers this powerful instrument for finding factors breaks down . we develop new strategies for factoring numbers using gauss sums at rational arguments . this approach may find application in a recent suggestion to factor numbers using a light interferometer [ v. tamma et al . , j. mod . opt . , in this volume ] . gauss sum ; factorization ; rational arguments
mathematical modelling of the phenomena of disease spreading has a long history , the first such attempts being made in the early twentieth century .typically , an individual is assumed to be in either in one of the three possible states : susceptible , infected and removed ( or recovered ) denoted by s , i , and r respectively in the simplest models .diseases which can be contracted only once are believed to be described by the sir model in which a susceptible individual gets infected by an infected agent who is subsequently removed ( dead or recovered ) .a removed person no longer takes part in the dynamics . in sis model, an infected person may become susceptible again .plenty of variations and modifications of the sir and sis models have been considered over the last few decades .resurgence of interest in these models has taken place following the discovery that social networks do not behave like random or regular networks .the recent emphasis has been to study these models on complex networks like small world and scale free networks .a few surprising results have been derived theoretically in the recent past . in mathematical models ,one quantifies the infection probability . in most theoretical models ,the epidemic has a threshold behaviour as the infection probability is varied .however , an estimate of this quantity from real data is difficult as it is related to biological features like nature of the pathogen etc .the test of a model lies in its ability to match real data .not appreciable success has been made so far although some qualitative consistency has been achieved .the available data is usually in the form of number of newly infected patients and total ( cumulative ) number of cases . in the sir model, the newly infected fraction shows an initial growth followed by a peak and a subsequent decay .this matches with the overall structure of the real data ( e.g. for severe acute respiratory syndrome ( sars ) ) , which however , show local oscillatory behaviour in addition .such a behaviour may be due to demographic non uniformity .it is meaningful to study the epidemic spreading by considering that the agents are embedded on an euclidean space .a few models on euclidean networks have been studied earlier which show that the geographical factor plays an important role in the spreading process . in particular , the sir model on an euclidean network where the agents may be connected to a few randomly chosen long range neighbours with a probability decaying with the euclidean distance has been considered is some detail . in 2014 ,the ebola virus caused large scale outbreaks mainly in three west african countries and only recently it has been declared as over ( june 2016 ) .ebola virus is transmitted through body fluids and blood and it is also believed that a person can contract the disease only once .a few attempts have been made to analyse the data so far .different factors like demographic effect , hospitalization , vaccination and treatment plans have been incorporated in the traditional and well - known sir model to understand the dynamics of ebola disease . however , in these models , a mean field approximation has been used which is rather unphysical . using the results of an agent based sir model on euclidean network mentioned in the last paragraph , we have analysed the ebola data for the three countries guinea , liberia and sierra leone in west africa where the outbreak extended over approximately two years . 
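as a rough illustration of the kind of agent - based dynamics involved , the sketch below implements a minimal discrete - time sir process on a euclidean ring : nearest - neighbour contacts plus one long range link per agent whose length is drawn with a probability decaying with distance . the network construction , the update rule and every parameter value are assumptions made for the demonstration , not the precise model or parameters used in the analysis .

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2000             # agents on a ring (illustrative size)
delta = 1.2          # decay exponent of the long-range link length distribution (assumption)
q = 0.3              # infection probability per contact and per time step (assumption)
gamma = 0.1          # removal probability per time step (assumption)
T = 400

# each agent is linked to its two nearest neighbours plus one long-range neighbour whose
# distance l is drawn with probability proportional to l**(-delta)
lengths = np.arange(2, L // 2)
p = lengths ** (-delta); p /= p.sum()
nbrs = [set(((i - 1) % L, (i + 1) % L)) for i in range(L)]
for i in range(L):
    j = (i + rng.choice(lengths, p=p)) % L
    nbrs[i].add(j); nbrs[j].add(i)

S, I, R = 0, 1, 2
state = np.zeros(L, dtype=int)
state[0] = I                          # a single infected agent starts the outbreak
cum = []
for _ in range(T):
    infected = np.flatnonzero(state == I)
    if infected.size == 0:
        break
    newly = set()
    for i in infected:
        for j in nbrs[i]:
            if state[j] == S and rng.random() < q:
                newly.add(j)
    state[infected[rng.random(infected.size) < gamma]] = R
    for j in newly:
        state[j] = I
    cum.append(np.count_nonzero(state != S) / L)

# cumulative infected fraction vs. time; with these illustrative parameters the outbreak
# sweeps a finite fraction of the ring, while lowering q enough makes it die out instead
print([round(c, 3) for c in cum[::40]])
```

the rest of the paper works with the reported case counts rather than such simulated curves .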
in sectionii , we discuss the details of the available data and the method of the analysis . analysis of the data and simulation results are presented in section iii and in the last section summary and discussions are made .we consulted the ebola data for the number of cases detected in the three countries guinea , liberia and sierra leone in west africa ( the centers for disease control and prevention ( cdc ) ) .the data is available from 25th march 2014 to 13th april 2016 at the time interval of a few days .the data is noisy and contains obvious errors as often the cumulative data is shown to decrease .the first available data is from march 2014 when guinea was already struck with the disease for some time ( first case in guinea reported in december 2013 ) such that the data for the initial period is missing . for liberia and sierra leone , the data for initial stage are available , however these are sparse and unreliable ; often the data for number of death exceeds the number of cases .for this reason , the data has been analysed from the date when the number of cases detected is at least for each country .even then the errors can not be fully avoided as for very late stages , the data being rare , also become somewhat unreliable .hence , the entire data set has to be handled carefully . in table[ tab : statistics ] , a summary of the statistics of the ebola data is presented and one can immediately note that all cases could not have been confirmed in the laboratory in the case of liberia where number of deaths exceeds the laboratory confirmed cases . obviously many cases were unreported . for guinea, these two figures are closest and the data for guinea is in fact the cleanest one .we have studied the available data for total ( cumulative ) number of cases as a function time and extracted the data for number of new cases from these .another point needs to be mentioned .the disease has been officially declared over on 1st june 2016 for guinea , 9th june 2016 for liberia and 17th march 2016 for sierra leone .but one can see from fig .[ fig : gls ] that the cumulative data shows a saturation over fairly long period of time .apparently a few stray cases delayed the declaration of the disease being over . for liberia , for example, the disease was originally declared to be over as early as in may 2015 but two small flare - ups were reported later .however the cumulative data is hardly affected by the later cases ..statistics of ebola data for three different countries . [ cols="<,<,<,<,<",options="header " , ] [ tab : expo_model ]we have analysed the data for ebola outbreak in west african countries which are available for 2014 - 16 and also reproduced qualitatively the data using a model of epidemic spreading . in this sectionwe justify the choice of the parameters used in the model to obtain the results consistent with the real data .we have already justified the choice of between and in the last section .we have used a larger value of for sierra leone and a smaller value of for guinea to get the consistency . to justify why should be larger for sierra leonewe note the following .sierra leone and liberia are comparable in size but the density of population is much higher in the former .the density of population is /km and /km respectively for the two countries .hence the number of neighbours within the same distance is larger for sierra leone which implies a larger value of effectively . 
on the other hand ,the population densities of guinea ( /km ) and liberia are quite close so that one should use the same value .however , we need to justify why a smaller value of is able to reproduce the data for guinea . a smaller value of indicates less infection probability which is possible if proper medical care and control measurements are taken .this is indeed true as we find from several documents that the disease was tackled most effectively in guinea .table [ tab : statistics ] clearly shows that the maximum percentage of cases for guinea were laboratory - tested which indicates that the process of contact tracing and treatment were more efficient .this is supported by the fact that in guinea , about contacts per infected person were traced compared to in case of sierra leone .we find from that msf treated the largest number of reported cases in guinea , in sierra leone the minimum out of reported cases .thus most cases in sierra leone , even when reported , had received less attention while in liberia , a large number is not confirmed or reported at all . apparently , medical centers by international organisations have also been set up much earlier in guinea as it was the epicenter of the disease and the disease started as early as in 2013 december .however , later activities could control the disease in liberia and sierra leone as well , and the final number of deaths had been far less than initially anticipated .we also note a curious fact - though guinea may have recorded the minimum number of cases , yet the disease spanned a longer duration compared to liberia .further analysis , beyond the scope of the present paper , may be able to explain this . we add here a few more relevant comments .we note that while qualitative features of the data obtained from the model are quite similar to the real data , quantitatively they are much larger . this may be due to the fact that for the real data , the entire population has been taken to obtain the fractions while the disease might not have prevailed in such totality due to geographical or other factors .we have also made simple assumptions like homogeneity , i.e. , uniform number of contacts for all agents .the initial condition has been taken to be identical : the disease commences with only one infected people .our assumption that agents are immobile is supported by in which it is argued that migration does not play a role in the spreading .even so , this simple model is able to yield data which is consistent with real data .the effect of the ebola outbreak has been devastating in the west african countries .apart from the human losses , economic loss has also been considerable .the present study shows that the euclidean model can be treated as a basic starting point and can be further developed by adding other features. this will make it very useful and important for making accurate predictions .work is in progress towards that direction .a. barrat , m. barthelemy and a. vespignani , _ dynamical processes on complex networks _ ( cambridge university press , cambridge , u.k . , 2008 ) .p. sen and b. k. chakrabarti , _ sociophysics : an introduction _ , oxford university press , 2013 .h. w. hethcote , siam review * 42 * , 599 ( 2000 ) .w. hethcote , mathematical problems in biology , lecture notes in biomathematics , springer , berlin * 2 * , 83 ( 1974 ) . h. k. janssen and k. oerding and f. van wijland and h. j. hilhorst , the european physical journal b * 7 * , 137 ( 1999 ) . f. linder , j. tran - gia , s. r. dahmen and h. 
hinrichsen , journal of physics a * 41 * , 185005 ( 2008 ) .s. n. bennett , a. j. drummond , d. d. kapan , m. a. suchard , j. l. munoz - jordan , o. g. pybus , e. c. holmes and d. j. gubler , molecular biology and evolution * 27 * , 811 ( 2010 ) .z. wu , k. rou and h. cui , aids education and prevention * 16 * , 7 ( 2004 ) .x. xu , h. peng , x. wang and y. wang , physica a * 367 * , 525 ( 2006 ) .j. wang , z. liu and j. xu , physica a * 382 * , 715 ( 2007 ) .z. zhao , y. liu and m. tang , chaos * 22 * , 023150 ( 2012 ) .a. khaleque and p. sen , j. phys .a : math . theor . * 46 * , 095007 ( 2013 ) .p. grassberger , journal of statistical mechanics : theory and experiment * 2013 * , p04004 ( 2013 ). j. a. lewnard , m. l. n. mbah , j. a. alfaro - murillo , f. l. altice , l. bawo , t. g. nyenswah and a. p.galvani , the lancet infectious diseases * 14 * , 1189 ( 2014 ) .d. chowell , c. castillo - chavez , s. krishna , x. qiu and k. s. anderson , the lancet infectious diseases * 15 * , 148 ( 2015 ) .a. camacho , a. j. kucharski , s. funk , p. piot and w. j. edmunds , epidemics * 9 * , 70 ( 2014 ) .a. rachah and d. f. m. torres , discrete dynamics in nature and society * 2015 * , 842792 ( 2015 ) .a. radulescu , j. herron , arxiv:1512.06305 , ( 2015 ) .k. burghardt , c. verzijl , j. huang , m. ingram , b. song and m. hasne arxiv:1606.07497 , 842792 ( 2016 ) .https://en.wikipedia.org/wiki/liberia , https://en.wikipedia.org/wiki/sierra. https://en.wikipedia.org/wiki/guinea .http://ebolaresponse.un.org/guinea , http://ebolaresponse.un.org/sierra-leone .+ msf.pdf .
the data for the ebola outbreak that occurred in 2014 - 2016 in three countries of west africa are analysed within a common framework . the analysis is carried out using the results of an agent - based susceptible - infected - removed ( sir ) model on a euclidean network , where nodes at a distance are connected with a probability decaying with that distance , in addition to nearest neighbors . the cumulative density of the infected population has a functional form whose parameters depend on the underlying model parameters and the infection probability ; this form is seen to fit the data well . using the best - fit parameters , the time at which the peak is reached is estimated and is shown to be consistent with the data . we also show that in the euclidean model one can choose parameter values which reproduce the data for the three countries qualitatively . these choices are correlated with population density , control schemes and other factors .
in a typical longitudinal study , a number of variables are measured on a group of individuals and the goal is to analyze the relationships between the trajectories of the variables . in recent years , functional data analysis has provided efficient ways to analyze longitudinal data . in many casesthe variable trajectories are discretized continuous curves that can be reconstructed by smoothing , and functional linear regression methods can be applied to study the relationship between the variables ( ramsay and silverman , 2005 ) .but in other situations the data is observed at sparse and irregular time points , which makes smoothing difficult or even unfeasible .therefore , functional regression methods that can be applied directly to the raw measurements become very useful .methods for functional data analysis of irregularly sampled curves have been proposed by a number of authors , for the one - sample problem as well as for the functional regression problem ( chiou et al . , 2004 ;james et al . , 2000 ; mller et al . , 2008 ; yao et al . , 2005a , 2005b ) .outlier - resistant techniques for the functional one - sample problem have also been proposed ( cuevas et al ., 2007 ; gervini , 2008 , 2009 ; fraiman and muniz , 2001 ; locantore et al . , 1999 ) , and two recent papers deal with robust functional regression for pre - smoothed curves ( zhu et al . 2011 ;maronna and yohai , 2012 ) .however , outlier - resistant functional regression methods for raw functional data have not yet been proposed in the literature . in this paperwe address this problem and present a computationally simple approach based on random - effect models .our simulations show that this method attains the desired outlier resistance against atypical curves , and that the asymptotic distribution of the test statistic is approximately valid for small samples . 
as an example of application, we will analyze the daily trajectories of oxides of nitrogen and ozone levels in the city of sacramento , california , during the summer of 2005 .the data is shown in figure fig : sample_curves .the goal is to predict ozone concentration from oxides of nitrogen .both types of curves follow regular patterns , but some atypical curves can be discerned in the sample .we will show in section sec : example that to a large extend it is indeed possible to predict ozone levels from oxides - of - nitrogen levels , but that the outlying curves distort the classical regression estimators and that the proposed robust method gives more reliable results .the paper is organized as follows .section [ sec : methods ] presents a brief overview of functional linear regression and introduces the new method .section [ sec : simulations ] reports the results of a comparative simulation study , and section [ sec : example ] presents a detailed analysis of the above mentioned ozone dataset .technical derivations and proofs are left to the appendix .matlab programs implementing these procedures are available on the author s webpage .the functional approach to longitudinal data analysis assumes that the observations are discrete measurements of underlying continuous curves , so and are the trajectories of interest , and are random measurement errors , and and are the time points where the data is observed .the and the are random functions that we assume independent and identically distributed realizations of a pair .suppose and are square - integrable functions on an interval ] .the raw observations were generated following ( [ eq : raw - x ] ) and ( [ eq : raw - y ] ) , with random uniformly distributed in ] of the pairs by , with and for , and .note that the contaminated data follows model ( [ eq : reduced_linear_model ] ) with and high - leverage , so the effect of this type of contamination is an underestimation of that tends to pull towards 0 .the estimation of requires two steps : first , to estimate and from the raw data , and then to compute from the and the .so we compared two procedures : a non - robust procedure , using reduced - rank normal models ( james et al . , 2000 ) to estimate the component scores , followed by the ordinary least - squares regression estimator ( [ eq : lse ] ) ; and a robust procedure , using reduced - rank -models ( gervini , 2009 ) to estimate the component scores , followed by the gmt regression estimator ( [ eq : wlne ] ) .for the robust procedure , we considered the two types of weights discussed in section [ sec : robust - estim ] , with trimming proportions and ; degrees of freedom and were used for the -models .four levels of contamination were considered : 0 ( clean data ) , , and .we took as sample size , as grid size , and as model dimensions . each casewas replicated 1000 times . 
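since several simulation settings are elided in this rendering (the sample size, grid sizes and contamination levels appear as blanks), the following python sketch only illustrates the structure of the design described above: component scores linked by a linear slope, a contaminated fraction of high-leverage pairs that pulls the slope towards zero, and sparse irregular time grids. all numeric values (slope, noise level, outlier location) are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_contaminated_scores(n=100, contam=0.10, beta_true=1.5):
    """simulate predictor/response component scores with a linear relation
    and replace a fraction `contam` of the pairs by high-leverage outliers
    that pull the slope towards zero (all numbers are illustrative)."""
    x = rng.normal(0.0, 1.0, n)
    y = beta_true * x + rng.normal(0.0, 0.3, n)
    n_out = int(round(contam * n))
    if n_out > 0:
        x[-n_out:] = rng.normal(6.0, 0.1, n_out)   # far out in the predictor
        y[-n_out:] = rng.normal(0.0, 0.1, n_out)   # but flat in the response
    return x, y

def sparse_time_grids(n=100, max_points=10):
    """one irregular time grid per curve, with a random number of points
    uniformly distributed on [0, 1], as in sparse longitudinal designs."""
    return [np.sort(rng.uniform(0.0, 1.0, rng.integers(3, max_points + 1)))
            for _ in range(n)]
```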
as measure of the estimation error we used the expected rootintegrated squared error , where .the results are reported in table [ tab : simulations_1 ] , along with monte carlo standard errors .we see that for non - contaminated data ( ) , there is no significant difference between metric and rank trimming for a given pair .the trimming proportion has a larger impact on the estimator s behavior than the degrees of freedom .for this reason we recommend choosing adaptively , so as not to cut off too much good data .when , we see that metric trimming tends to outperform rank trimming for a given pair .somewhat counterintuitively , estimators with tend to be more robust than those with for a given ; the reason is that for this type of contamination , which affects but not the or the , models with provide more accurate estimators of and than models with ( for other types of contamination this is no longer true , although models with are still very robust ; see gervini ( 2009 ) . ) in general , then , the recommendation is to use -model estimators with metrically trimmed weights and a trimming proportion chosen adaptively . .simulation results .mean root integrated squared errors of under various contamination proportions ( monte carlo standard errors in parenthesis ) . [cols="<,<,^,^,^,^ " , ]ground - level ozone is an air pollutant known to cause serious health problems . unlike other pollutants, ozone is not emitted directly into the air but forms as a result of complex chemical reactions , including volatile organic compounds and oxides of nitrogen among other factors .modeling ground - level ozone formation has been an active topic of air - quality studies for many years .the california environmental protection agency database , available at http://www.arb.ca.gov/aqd/aqdcd/aqdcddld.htm , has collected data on hourly concentrations of pollutants at different locations in california for the years 1980 to 2009 . herewe will focus on the trajectories of oxides of nitrogen ( nox ) and ozone ( o3 ) in the city of sacramento ( site 3011 in the database ) between june 6 and august 26 of 2005 , which make a total of 82 days ( shown in figure [ fig : sample_curves ] ) .there are a few days with some missing observations ( 9 in total ) , but since the method can handle unequal time grids , imputation of the missing data was not necessary .the first step in the analysis is to fit reduced - rank models to the sample curves .we used cubic b - splines with 7 equally spaced knots every 5 years , and fitted normal and ( cauchy ) reduced - rank models with up to 10 principal components . for both the response and the explanatory curves ,the leading three components explain at least 85% of the total variability , so we retained these models .the means and the principal components are plotted in figure [ fig : mean - pc ] . there is no substantial difference between the estimators obtained by these models , except perhaps for the mean and the third component of log - nox ( figures [ fig : mean - pc ] ( a ) and ( g ) ) . 
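to make the comparison between the two weighting schemes above concrete, here is a small python sketch of one plausible implementation of rank and metric trimming from residual mahalanobis distances; the specific cutoffs (an empirical quantile for rank trimming, a chi-square quantile for metric trimming) are assumptions rather than the paper's exact definitions.

```python
import numpy as np
from scipy.stats import chi2

def trimming_weights(residuals, scatter, trim=0.10, kind="metric"):
    """0/1 trimming weights from residual mahalanobis distances.

    one plausible reading of the two schemes discussed above: "rank"
    trimming always discards the fraction `trim` of observations with the
    largest distances, while "metric" trimming discards those whose squared
    distance exceeds a fixed chi-square(1 - trim) cutoff, so it may discard
    fewer points when the data contain no outliers."""
    q = residuals.shape[1]
    d2 = np.einsum('ij,jk,ik->i', residuals, np.linalg.inv(scatter), residuals)
    if kind == "rank":
        cutoff = np.quantile(d2, 1.0 - trim)
    else:  # metric trimming
        cutoff = chi2.ppf(1.0 - trim, df=q)
    return (d2 <= cutoff).astype(float)
```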
with the normal component scores we computed the least squares estimator , obtaining the cauchy component scores we computed the gmt estimator with 1 degree of freedom and 10% metric trimming , obtaining latter cut off 5 observations out of the 82 .there are some noticeable differences between these two estimators , even leaving aside the third row ( which are not easily comparable , since and are rather different ) .the differences are more striking in the slope estimators and , shown in figure [ fig : betas ] .there is a bump in around that does not appear in .this means that the robust slope estimator assigns positive weight to nox values around 8 am in the prediction of o3 levels around 4 pm , showing that there is a persistent effect of oxides - of - nitrogen level in ozone formation . of course, none of this would be meaningful if the regression model was not statistically significant .but the estimated response curves , shown in figure [ fig : pred - y ] , clearly show that the model does predict the response curves to a large extent .the robust estimator provides a better fit overall , with a root median squared error of compared to the root median squared error of for the least squares estimator .the author was partly supported by nsf grants dms 0604396 and 1006281 .the method proposed by gervini ( 2009 ) to estimate the mean and the principal components of a stochastic process works as follows . the mean function and the principal components are modeled as spline functions ; that is , given a set of spline basis functions , chosen by the user , it is assumed that and .the observed vector can then be expressed as {(j , l)} ] and .note that in this notation . by assuming has a standard multivariate distribution , robust maximum likelihood estimators of , , and are obtained .the estimators are computed via a standard em algorithm .the optimal number of components can be chosen via aic or bic criteria .see gervini ( 2009 ) for details .in addition to parameter estimates , the em algorithm yields predictors of the random effects , so one obtains as a by - product .the estimators of , , and are obtained in a similar way from the sample .the estimators and defined by ( [ eq : wlne ] ) are m - type estimators ( van der vaart , 1998 , ch . 5 ) , since they minimize a function of the form . specifically , and solve the equations and . to compute matrix derivatives we use the method of differentials ( magnus and neudecker , 1999 ) . differentiating with respect to we obtain .then can be rearranged in matrix form as ( [ eq : fixed - point - theta ] ) follows .differentiating with respect to we obtain , this can be expressed in matrix form as which ( [ eq : fixed - point - sigma ] ) follows .we will simplify the derivation of the asymptotic distribution of by assuming that the true component scores are used , instead of the estimated scores , and by assuming that is fixed and known . in that casewe can apply theorem 5.23 of van der vaart ( 1998 ) directly , and obtain that is asymptotically with expectations are taken with respect to the true parameters . without loss of generality we can eliminate the factor in ( [ eq : der - wrt - theta ] ) ; then it is easy to see that ( [ eq : b ] ) holds . to derive ( [ eq : a ] ) we use differentials again : which ( [ eq : a ] ) follows .ash , r. b. and gardner , m. f. ( 1975 ) . _ topics in stochastic processes_. probability and mathematical statistics ( vol .new york : academic press .
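the gmt estimator itself is defined through the estimating equations sketched in the appendix above; as a rough, self-contained stand-in, the following python sketch runs a generic m-type, iteratively reweighted least-squares regression of the response scores on the predictor scores, with t-type downweighting and optional 0/1 trimming weights. it assumes the component scores have already been predicted (e.g. by the reduced-rank models), and it is not the exact gmt algorithm.

```python
import numpy as np

def m_type_regression(U, V, nu=1.0, trim_weights=None, n_iter=50, tol=1e-8):
    """iteratively reweighted least squares for regressing the response
    scores V (n x q) on the predictor scores U (n x p), returning a slope
    matrix theta (p x q) and an error scatter sigma (q x q)."""
    n, p = U.shape
    q = V.shape[1]
    theta = np.linalg.lstsq(U, V, rcond=None)[0]              # start from ols
    sigma = np.atleast_2d(np.cov((V - U @ theta).T)) + 1e-9 * np.eye(q)
    w_trim = np.ones(n) if trim_weights is None else trim_weights
    for _ in range(n_iter):
        R = V - U @ theta
        d2 = np.einsum('ij,jk,ik->i', R, np.linalg.inv(sigma), R)
        w = w_trim * (nu + q) / (nu + d2)                     # t-type weights
        Uw = U * w[:, None]
        theta_new = np.linalg.solve(U.T @ Uw, Uw.T @ V)
        sigma = (R * w[:, None]).T @ R / w.sum()
        if np.linalg.norm(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    return theta, sigma
```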
we present a robust regression estimator for longitudinal data, which is especially suited for functional data that has been observed on sparse or irregular time grids. we show by simulation that the proposed estimators possess good outlier-resistance properties compared with the traditional functional least-squares estimator. as an example of application, we study the relationship between levels of oxides of nitrogen and ozone in the city of sacramento, california. _ key words: _ functional data analysis; longitudinal data analysis; mixed effects models; robust statistics; spline smoothing.
shannon ( * ? ? ?* sec . 25 ) showed that the capacity of an additive white gaussian noise ( awgn ) channel with bandwidth and average transmit power constraint is where is the noise power spectral density .the capacity ( [ eq : awgneq ] ) is achieved by a sinc pulse and the spectral efficiency is mazo introduced faster - than - nyquist ( ftn ) signaling for sinc pulses where the pulses are modulated faster than the nyquist rate . the resulting intersymbol interference ( isi ) can be interpreted as a type of coding and is the same as correlative or partial response signaling such as the duobinary technique .mazo showed that increasing the modulation rate by up to does not affect the minimum euclidean distance between the closest two signals when using binary antipodal modulation .thus , the coding induced by ftn signaling increases the spectral efficiency at high signal - to - noise ratio ( snr ) .non - orthogonal transmission schemes such as ftn , are receiving renewed attention for their potential to increase capacity .ftn may also be interesting for applications that need low cost transmitters and flexible rate adaptation . in practice , it is difficult to approximate sinc pulses .one instead often analyzes square root raised cosine ( rrc ) pulses that decay more quickly than sinc pulses and can be approximated more accurately .rusek , anderson and wall show in and that ftn signaling achieves a substantially higher spectral efficiency than ( [ eq : spec_awgn ] ) if the comparison is based on the 3-db power bandwidth of rrc pulses with independent and identically distributed ( i.i.d . )gaussian symbols .the calculations are performed for a single channel , i.e. , there is no interference or spectral sharing .we revisit this comparison by viewing bandwidth as a shared resource where the spectral efficiency is computed by normalizing the sum rate of users ( or systems ) by an overall bandwidth of approximately hz , where is the bandwidth assigned to each user .for example , for shannon s sinc pulse , every user receives hz of non - overlapping bandwidth and the spectral efficiency is given by ( [ eq : spec_awgn ] ) .one may try to improve by using non - orthogonal signaling such as ftn .of course , now the users experience interference .our main goal is to explore the spectral efficiency of ftn signaling from the shared resource perspective .this paper is organized as follows : section [ sec2 ] analyzes the capacity of ftn signaling for a single user , section [ sec3 ] defines and analyzes spectral efficiency for a multiaccess channel with spectrum sharing , and section [ sec4 ] discusses the results further for low and high snr .the nyquist _ rate _ usually refers to twice the bandwidth of a bandlimited signal .the nyquist intersymbol interference ( isi ) _ criterion _ refers to the requirement that sampling at regular intervals incurs no isi . for sinc pulses ,nyquist - rate sampling satisfies the nyquist isi criterion , so that ftn refers to sampling faster than both the nyquist rate _ and _ a nyquist isi criterion rate .however , in terms of capacity there is no need to sample faster than the nyquist rate for linear awgn channels ( * ? ? ?19 ) . for pulses other than sinc pulses , on the other hand, it might be interesting to sample faster than the fastest nyquist isi criterion rate .a ftn signal with complex pulse shape is given by (t- k\tau t).\ ] ] where the complex random symbols ] with variance , let snr be the snr at frequency , i.e. 
, define }{n_{0 } } = \frac{p|h(f)|^{2}}{n_{0}}\ ] ] where (t - k\tau t ) e^{-j2\pi ft } dt\right| ^{2}. \nonumber\ ] ] note that snr does not depend on the ftn rate .the capacity is achieved with proper complex gaussian ] : the cosine portions overlap the flat portions of the rrc pulses but the flat portions do not overlap ; 4 . : requires and that the flat portions of the pulses overlap .we treat the first two cases here and the next two cases in appendix [ app : a ] . for is no interference and we have ( see ( [ eq : c_ftn_sec2 ] ) ) for , we have \right ) df\end{aligned}\ ] ] }{2wn_{0}+p\left[1 + \cos\left(\frac{(f- b + ( 1-\alpha)w ) \pi}{\alpha w}\right)\right]}\bigg)df .\label{eq : spec_ftn}\end{aligned}\ ] ] for the special case , we compute ( see appendix [ app : b ] ) : to compute the spectral efficiency , we divide ( [ eq : spec_ftn_general ] ) by the total bandwidth . as gets large , the bandwidth per user is approximately and the spectral efficiency is for , we plot the corresponding spectral efficiencies as the dashed curves in fig .[ ftn_musu ] . observe that the spectral efficiency _decreases _ with increasing which is the opposite as in ( * ? ? ?2 ) where the interference is not accounted for .the spectral efficiencies optimized over satisfying are shown in fig .[ ftn_b_1 ] for the roll - off factor .the reader may find it strange that choosing beats shannon s curve at low snr .we discuss the low and high snr effects next .at low snr or , the spectral efficiency ( [ eq : spec_ftn_b ] ) can be approximated as and it is best to choose as small as possible .in fact as the approximation ( [ eq : lowsnr ] ) remains valid and we can achieve an arbitrarily large spectral efficiency .this perhaps unexpected behavior is because the transmit power per hertz for large is , i.e. , the power per hertz increases as decreases . in comparison ,the transmit power per hertz for orthogonal transmission with shannon s sinc pulses is . we should thus normalize ( [ eq : lowsnr ] ) by multiplying by , and we arrive at the same spectral efficiency for all positive .the result ( [ eq : lowsnr ] ) remains valid for also , and this relates to the optimality of bursty signaling at low snr .this observation also explains the low - snr behavior of the curves in fig .[ ftn_b_1 ] : the gains and losses for as compared to are because the transmit power per hertz depends on .if we normalize to watts / hz and then optimize over in the range we arrive at the curves shown in fig .[ ftn_b_1_normalized_power ] .now any choice for gives the same spectral efficiency at low snr , as expected . at high snr or , the spectral efficiency ( [ eq : spec_ftn_b ] ) based on ( [ eq : spec_ftn ] )can be approximated as and it is best to choose as large as possible , i.e. 
, which corresponds to no interefernce .the resulting spectral efficiency pre - log is 1 .the spectral efficiency pre - log for and high snr is , which is the high - snr slope of the dashed curves in fig .[ ftn_musu ] .finally , a more precise version of ( [ eq : spec_ftn_b ] ) for and high snr gives note that there is an additive gap as compared to ( [ eq : highsnr ] ) and this gap increases monotonically with .for example , the gap for is 4/3 bit / sec / hz which corresponds to a 4.01 db loss in energy efficiency .this gap can be seen at high snr in fig .[ ftn_b_1 ] .the gap for is 2 bit / sec / hz which corresponds to a 6 db loss in energy efficiency .after normalizing the transmit power per hertz to , the gap reduces to which we plot in fig .[ alpha_loss ] . for gap is 0.75 bit / sec / hz , i.e. , the loss is 2.25 db which can be seen at high snr in fig .[ ftn_b_1_normalized_power ] .the gap for is 1 bit / sec / hz , i.e. , the loss is 3 db .spectral efficiency is usually considered in the context of spectrum sharing .we showed that the spectral efficiency of rrc pulses with ftn decreases monotonically with the roll - off factor .this means that shannon s sinc pulses are the best rrc pulses , and they are in fact the best pulses in general . at low snr ,ftn neither improves nor degrades the spectral efficiency . at high snr , it is best to avoid interference for the models considered here .[ [ app : a ] ] for \leq b\leq w$ ] , we have \log_{2 } \left ( 1 + \frac{p}{wn_{0}}\right ) \nonumber\\ & + 2 \int_{0}^{w - b}\log_{2 }\bigg(1 + \nonumber\\ & \quad\frac{2p}{2wn_{0}+p\left[1+\cos\left(\frac{(f-\alpha w)\pi}{\alpha w}\right)\right]}\bigg ) df\nonumber\\ & + 2 \int_{w - b}^{\alpha w}\log_{2}\bigg ( 1+\nonumber\\ & \quad \frac{p\left [ 1 + \cos \left ( \frac{(f+b - w)\pi}{\alpha w } \right)\right]}{2wn_{0}+p\left [ 1 + \cos\left ( \frac{(f-\alpha w)\pi}{\alpha w } \right)\right ] } \bigg ) df\nonumber\\ & + 2 \int_{\alpha w}^{(1+\alpha)w - b}\log_{2}\bigg(1 + \frac{p\left [ 1 + \cos\left ( \frac{(f+b - w)\pi}{\alpha w}\right ) \right]}{2wn_{0}+2p } \bigg ) df.\end{aligned}\ ] ] for , we have \log_{2 } \left ( 1 + \frac{p}{wn_{0}}\right ) \nonumber\\ & + 2 \int_{0}^{\alpha w}\log_{2 }\bigg(1 + \frac{2p}{2wn_{0}+p\left[1+\cos\left(\frac{(f-\alpha w)\pi}{\alpha w}\right)\right]}\bigg ) df\nonumber\end{aligned}\ ] ] }{2wn_{0}+2p } \bigg ) df.\end{aligned}\ ] ] [ [ app : b ] ] we compute ( [ eq : b=2w ] ) as follows : where is }{2wn_{0}+p\left[1 + \cos\left(\frac{(f-\alpha w)\pi}{\alpha w}\right)\right]}\right ) df \\ & = 2\int_{0}^{\alpha w } \log_{2 } \left ( \frac{wn_{0}+p}{wn_{0 } + \frac{p}{2}\left [ 1- \cos \left ( \frac{f\pi}{\alpha w}\right)\right]}\right ) df\\ & \overset{\text{(a)}}= 2\alpha w \log_{2 } \left ( 1 + \frac{p}{wn_{0}}\right ) \\&\quad-\frac{2\alpha w}{\pi } \int_{0}^{\pi } \log_{2 } \left ( \left ( 1 + \frac{p}{2wn_{0 } } \right ) - \frac{p\cos(x)}{2wn_{0 } } \right ) dx \\ & \overset{\text{(b)}}= 2\alpha w \log_{2 } \left ( 1 + \frac{p}{wn_{0}}\right ) \\ & \quad- 2\alpha w \log_{2 } \left ( \frac{1 + \frac{p}{2wn_{0 } } + \sqrt{1 + \frac{p}{wn_{0}}}}{2 } \right)\end{aligned}\ ] ] where ( a ) follows by subsitiuting and ( b ) follows by ( [ eq : ab ] ) .this work was performed in the framework of the fp7 project ict-317669 metis .g. kramer was supported by an alexander von humboldt professorship endowed by the german federal ministry of education and research .
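as a worked complement to the case-by-case integrals above, the following python sketch evaluates the shared-spectrum efficiency numerically: the raised-cosine power spectrum of an rrc pulse, the leakage of the neighbouring carriers treated as additional gaussian noise, and the achievable rate normalized by the per-user band. the snr normalization and the number of interfering neighbours are assumptions; the sketch reproduces the qualitative behaviour (efficiency decreasing with overlap and with the roll-off factor), not the closed-form expressions of the appendix.

```python
import numpy as np

def rrc_power_spectrum(f, W, alpha):
    """raised-cosine power spectrum |h(f)|^2 of a unit-peak rrc pulse:
    flat up to (1 - alpha) * W and rolling off to zero at (1 + alpha) * W."""
    af = np.abs(f)
    H2 = np.zeros_like(af, dtype=float)
    H2[af <= (1 - alpha) * W] = 1.0
    roll = (af > (1 - alpha) * W) & (af <= (1 + alpha) * W)
    if alpha > 0:
        H2[roll] = 0.5 * (1.0 + np.cos(np.pi * (af[roll] - (1 - alpha) * W)
                                       / (2.0 * alpha * W)))
    return H2

def shared_spectrum_efficiency(snr, alpha, b_over_w, W=1.0, n_nb=6, n_f=8001):
    """spectral efficiency (bit/s/hz) of one user in a bank of rrc carriers
    spaced b = b_over_w * W apart, treating neighbouring-carrier leakage as
    gaussian noise and normalizing the achievable rate by b."""
    b = b_over_w * W
    f = np.linspace(-(1 + alpha) * W, (1 + alpha) * W, n_f)
    sig = snr * rrc_power_spectrum(f, W, alpha)
    intf = sum(snr * rrc_power_spectrum(f - k * b, W, alpha)
               for k in range(-n_nb, n_nb + 1) if k != 0)
    df = f[1] - f[0]
    rate = np.sum(np.log2(1.0 + sig / (1.0 + intf))) * df
    return rate / b

# e.g. compare shared_spectrum_efficiency(10, 0.3, 1.3)  (no overlap)
# with shared_spectrum_efficiency(10, 0.3, 1.0)          (overlapping bands)
```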
capacity computations are presented for faster - than - nyquist ( ftn ) signaling in the presence of interference from neighboring frequency bands . it is shown that shannon s sinc pulses maximize the spectral efficiency for a multi - access channel , where spectral efficiency is defined as the sum rate in bits per second per hertz . comparisons using root raised cosine pulses show that the spectral efficiency decreases monotonically with the roll - off factor . at high signal - to - noise ratio , these pulses have an additive gap to capacity that increases monotonically with the roll - off factor .
in the past few years , the research community has put significant efforts in designing feature representations for acoustic sound recognition .good features should improve the performance of various audio analytic tasks such as classification and detection .traditionally , features are heuristically designed , based on the understanding of spectral characteristics of natural sounds . meanwhile , since this process is separate from the classification process , heuristically designed features do not always contain enough information to obtain a high classification accuracy .thanks to the development of deep learning methods and rich dataset for sound , deep learning is increasingly becoming a popular candidate for acoustic recognition tasks .recently , cnn has shown the superior performance in feature extraction and classification in visual and acoustic domain , especially in speech recognition .it could not only reduce the dimension of data and but also could extract features as well .however , training a cnn model requires huge computational effort .therefore , we leverage human experience ( i.e. domain knowledge ) , to design the deep learning model , understand the features from the model , and finally use the learned features to improve the audio recognition tasks performance . currently , most works in sound recognition area use log - mel filter banks as features .these features are not optimized for a particular audio recognition task at hand , and thus might not lead to high accuracy . in this paper , we design our feature extractor by the studying the procedure of designing log - mel filter banks . we build a special filter bank learning layer and concatenate it with a cnn architecture . after training , the weight of the filter bank learning layer is post - processed with human experience. then , the filter bank learning layer is re - initialized with the processed weight .the weight could be iteratively improved for feature extraction .this process is shown in fig .[ fig1:framework ] .we call it as experience guided learning . to our knowledge, this is the first attempt to infuse domain knowledge of feature design to a deep learning pipeline for acoustic recognition tasks .[ fig1:framework ] by using this method , the accuracy of recognition for the _ urbansound8k _ sound increases at least 1.5% accuracy based on the human designed filter bank under different settings , such as triangular window for mfcc .the rest of the paper is structured as follows : section 2 introduces the related work by using various methods to improve sound recognition tasks .section 3 describes the special layer , a layer that could extract log - mel features and the cnn architecture in our work in detail . the experimental setup and resultare shown in section 4 .finally , conclusion to our work can be found in section 5 .there is a wide range of studies related with sound recognition , especially in speech recognition . provided a detailed implementation of the hidden markov model ( hmm ) on speech recognition by using the linear predictive coding ( lpc ) features . applied the hmm model on the mfcc features for speech recognition . with the advancement of deep learning , people applied different deep learning techniques , cnn in particular , for recognition . 
applied convolutional deep belief networks to audio data and evaluated them on various audio classification tasks by using the mfcc feature .their feature representations trained from unlabeled audio data showed very good performance .however , the mfcc feature is not generalized and not learned for improving different task objectives . thus proposed a filter learning layer to adaptively learn filter banks from the spectrum , and obtained good result in speech recognition .however , this learning layer is complex ( multiple non - linear operations ) and requires pre - estimation of the spectrum features mean and standard deviation . therefore , in this study , we propose a new filter learning layer based on the procedure of designing log - mel filter banks .the mechanism of the filter bank learning layer is similar to the design of log - mel spectrogram , which has been widely used in automatic speech recognition . in general , there are several steps to calculate this feature : 1 .perform fourier transform to calculate power spectrogram 2 .apply the mel filter banks to all power spectrogram 3 .take the logarithm of all filter banks energy similar to this process , we design the network layer as following : the filter bank learning layer takes power spectrogram of a waveform as input .the layer generates the mel - features by multiplying the filters and individual spectrum .the number of filters is a hyper - parameter that represents the number of features to be learned .after that , we take the logarithm of these features and input into a cnn architecture that has high performance in sound recognition . the filter bank learning layer s weightis not randomly initialized .similar to triangular window or gamma - tone filter window , each row of the weight is activated once ( non - zeros value ) within a localized frequency range . mathematically , the filter bank learning layer is described by the following equation : where is the individual power spectrum of the acoustic clip at time , is the weight of filter bank . represents each individual element .this operation s output is the energy of the filter bank .then , we take the logarithm of to get the log - mel filter coefficient for filter bank here , to prevent the non positive number error , the equation is further developed as : where linear rectified units and is a small constant ( e.g. ) . in order to optimize the objective function , the filter bank learning layer s weightis gradually updated by taking the derivative of the objective function with respect to the weight .the update equation is : here , is the learning rate and is the loss . 
by taking the derivative of the weight .the derivative function could be calculated through chain rule : and here , is the loss gradient from previous layers .the filter bank learning layer could adaptively extract features from the power spectrogram .combining domain knowledge , the learned filter bank s weight could be further developed into generic filters .different from s work , our filter bank learning layer does not require estimating the mean and standard deviation of the input beforehand .also , our method incurs less computation cost .in this study , the training of the cnn model is performed on the natural sounds dataset , the _ urbansound8k_ .this dataset contains 8732 labeled sound excerpts ( ) of urban sounds from 10 classes : air conditioner , car horn , children playing , dog bark , drilling , engine idling , gun shot , jackhammer , siren , and street music .they are evenly divided into 10 folds .the original sound is at 44.1khz , we down sample it to 22.5khz and 8khz . for the 22.5khz sound , due to the dimension of raw waveform, we divide it into 1 second each clip ( in this case , we use majority voting method to obtain the output ) . after that, we take the power spectrogram of the sound by using librosa (nfft equals to sampling rate , default hop length ) .the weight of the filter bank learning layer is initialized by triangular filter banks of mfcc .we build two cnn architectures , one is deep vgg architecture while the other one is shallow as shown in fig .[ fig2:arch ] .the parameters are as following .the optimizer is default adam optimizer with learning rate 0.001 .the learning rate decays every three epochs with the decay rate 0.006 .the update function is : where , is the learning rate .[ fig2:arch ] after each layer , we apply leakyrelu with parameter 0.33 . the baseline is around 70% by using svm with rbf kernel and 73.7% .we also test the s filter bank learning layer for comparison .the result is shown in table 2 .the proposed method could provide a modest 0.4% of improvement in the classification accuracy .we take out the weight of the filter bank learning layer and use the savitzky - golay function to smooth it .we then re - initialize the filter bank layer with the smoothed weight .after retraining the model , the accuracy is improved by 1.5% .we did nt concanate the 4 second clip for the 22.5khz sound , but we expect improvement compared to the 8khz result .we also test the filter bank learning layer proposed in , but the accuracy is lower than other baselines .this might be caused by the complex non - linearity of this layer and our estimation of input s mean and standard deviation might be too rough . to our knowledge , our method obtains the highest accuracy of _urbansound8k _ dataset .we also notice that the sampling rate of the sound affect the detection accuracy . for natural sound ,different events happen at different frequency levels .therefore , a relatively high sampling rate is essential for natural sound recognition tasks .lllllll & & & & & & + + t & 1 & 128 & fix & 1 & 22.5 & 71.88 + t & 1 & 128 & trained & 1 & 22.5 & 72.21 + t & 1 & 128 & improved & 1 & 22.5 & 73.63 + t & 1 & 128 & fix & 4(mv ) & 22.5 & 78.34 + t & 2 & 40 & fix & 4 & 8 & 69.03 + t & 2 & 40 & trained & 4 & 8 & 69.43 + t & 2 & 40 & improved & 4 & 8 & 71.41 + the purpose of this work is to understand the mechanism of filter bank and further facilitate the design of filter banks to generate better feature extractors . 
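to make the layer and the smoothing step concrete, here is a minimal pytorch/scipy sketch of a filter bank learning layer initialized from triangular mel filters, together with the savitzky-golay smoothing applied to the trained weight before re-initialization. the class and function names, the window length and the polynomial order of the smoother are illustrative assumptions.

```python
import torch
import torch.nn as nn
from scipy.signal import savgol_filter

class FilterBankLayer(nn.Module):
    """learnable filter bank applied to a power spectrogram, mirroring the
    log-mel computation described above: filter-bank energies followed by a
    clipped logarithm. `init_filters` is an (n_filters, n_bins) array, meant
    to be taken from triangular mel filters (e.g. librosa.filters.mel)."""

    def __init__(self, init_filters, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.as_tensor(init_filters, dtype=torch.float32))
        self.eps = eps

    def forward(self, power_spec):
        # power_spec: (batch, n_bins, n_frames) -> (batch, n_filters, n_frames)
        energies = torch.matmul(self.weight, power_spec)
        return torch.log(torch.clamp(energies, min=0.0) + self.eps)

def smooth_filter_bank(weight_np, window=11, polyorder=3):
    """savitzky-golay smoothing of each learned filter along the frequency
    axis; `weight_np` is the trained weight as a numpy array, e.g. obtained
    with layer.weight.detach().cpu().numpy()."""
    return savgol_filter(weight_np, window_length=window, polyorder=polyorder, axis=-1)
```

in use, the layer is placed in front of one of the cnn architectures, and after training the smoothed array can be written back with something like layer.weight.data.copy_(torch.as_tensor(smoothed)) before retraining, following the procedure described above.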
here , we visualize the filter banks from the triangular window , and smoothed weight from the trained filter bank learning layer that is trained by fold 1 - 9 in the following picture .as we can notice , the first few learned filter banks ( 1st row ) conform with the triangle filter banks , which means these triangular filter banks capture most information in low frequency range .however , in the second row , we notice that the learned filter banks are activated around 0.4khz to 0.5khz and 0.75khz to 0.9khz , while frequency between 0.6 to 0.7 khz is less interested .[ fig1:filter ] in triangular windows , the bandwidth of filters increases as the frequency level increases .contrary to this , the learned filters show smaller bandwidth at relatively high frequency area .fig.3 also shows there are several new peaks within the original single window , which means more filter banks are required .for instance , in the last row , the third picture shows that there are three different frequency ranges that are activated and their bandwidths are relatively small .this information could provide more intuition for audio experts to design new filters .one problem with these learned filter banks is that they have a lot of serration along the the shape .this is primarily due to the bias of the model . by smoothing the learned filter banks ,the model could be generalized , however , more expert experience would be beneficial to improve the recognition accuracy . here, we apply the savitzky - golay function , however , different smooth function might result to different performance .also , adding some regularization on model s parameters would smooth these filters as well .in this paper , we explore the possibility of using the deep learning methods to facilitate the design of filter banks by incorporating human expert knowledge .we first design a filter bank learning layer that takes in frequency features .the output of the layer is fed to two different cnn architectures .this layer is designed according to the design procedure of the log - mel - spectrogram . by taking the weight of the filter bank learning layer, we apply a smooth function on the weight .this gives us at least 1.5% accuracy improvement on the _ urbansound8k _ dataset .we further investigate the learned filter banks , and they provide us some intuitions to facilitate the feature design for the recognition task .justin salamon , christopher jacoby , and juan pablo bello , `` a dataset and taxonomy for urban sound research , '' in _ proceedings of the 22nd acm international conference on multimedia_. acm , 2014 , pp .10411044 .tara n sainath , brian kingsbury , abdel - rahman mohamed , and bhuvana ramabhadran , `` learning filter banks within a deep neural network framework , '' in _ automatic speech recognition and understanding ( asru ) , 2013 ieee workshop on_. ieee , 2013 , pp .297302 .geoffrey hinton , li deng , dong yu , george e dahl , abdel - rahman mohamed , navdeep jaitly , andrew senior , vincent vanhoucke , patrick nguyen , tara n sainath , et al . , `` deep neural networks for acoustic modeling in speech recognition : the shared views of four research groups , '' , vol .6 , pp . 8297 , 2012 .george e dahl , tara n sainath , and geoffrey e hinton , `` improving deep neural networks for lvcsr using rectified linear units and dropout , '' in _ 2013 ieee international conference on acoustics , speech and signal processing_. 
ieee , 2013 , pp . 8609 - 8613 . hans - günter hirsch and david pearce , `` the aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions , '' in _ asr2000 - automatic speech recognition : challenges for the new millennium , isca tutorial and research workshop ( itrw ) _ , 2000 . honglak lee , peter pham , yan largman , and andrew y ng , `` unsupervised feature learning for audio classification using convolutional deep belief networks , '' in _ advances in neural information processing systems _ , 2009 , pp . 1096 - 1104 . brian mcfee , matt mcvicar , colin raffel , dawen liang , oriol nieto , eric battenberg , josh moore , dan ellis , ryuichi yamamoto , rachel bittner , douglas repetto , petr viktorin , joão felipe santos , and adrian holovaty , `` librosa : 0.4.1 , '' oct . 2015 . karol j piczak , `` environmental sound classification with convolutional neural networks , '' in _ 2015 ieee 25th international workshop on machine learning for signal processing ( mlsp)_. ieee , 2015 , pp .
designing appropriate features for acoustic event recognition tasks is an active field of research. expressive features should both improve the performance of the tasks and be interpretable. currently, heuristically designed features based on domain knowledge require tremendous hand-crafting effort, while features extracted through deep networks are difficult for humans to interpret. in this work, we explore an experience guided learning method for designing acoustic features. this is a novel hybrid approach combining domain knowledge with purely data-driven feature design. based on the procedure for computing log mel-filter banks, we design a filter bank learning layer and concatenate it with a convolutional neural network (cnn) model. after training the network, the weight of the filter bank learning layer is extracted to facilitate the design of acoustic features. we smooth the trained weight of the learning layer and re-initialize the filter bank learning layer with it as an audio feature extractor. for the environmental sound recognition task based on the _ urbansound8k _ dataset, experience guided learning leads to a 2% accuracy improvement compared with the fixed feature extractor (the log mel-filter bank). the shapes of the new filter banks are visualized and explained to demonstrate the effectiveness of the feature design process. filter bank, feature learning, experience guided learning, data driven, neural network
before addressing the regularization of the b - factor fit , we present the equations that describe it .we assume that the thermal displacements are the combination of internal plus rigid - body motions , .independence between rigid - body motions and internal motions results in the thermal averages , which lead to the fit we rename the parameters as and rewrite the fit as , with , for , and for ( and , with ) . here and in the following ,we denote by any of the atoms and any of the variables .the variable corresponds to the thermal fluctuations due to internal motions , predicted by the enm ( see methods ) .the force constant of the model is thus obtained directly from the fit , as . in the context of b - factors , it is natural to weight , by its mass , the contribution of each atom to the error of the fit .we use therefore the weighted variables and instead of and , respectively ( this does not make any difference if , as it is customary , only the alpha carbons are considered ) . moreover , it is convenient to express the fit in terms of the normalized and dimensionless variables and the normalized and dimensionless parameters . in matrix notations , the fit is then simply written . to limit the risk of overfitting, we adopt the tychonov regularization , also known as ridge regression .we first describe the usual approach for obtaining the non - scaled parameters , which are defined as the values of the fit parameters that minimize the quantity where denotes the scalar product and is the covariance matrix .the ordinary least square ( ols ) regression is recovered as the special case .the minimization of can be interpreted as a constrained minimization of the error of the fit , where the constraint is set on the norm of the parameters , , via a lagrange multiplier . in other words ,the error can not be minimized at the cost of having too large values of the parameters .the explicit solution of the above minimization problem is . since the covariance matrix is symmetric and positive definite , its eigenvalues are real and positive . to simplify the computation, we define the normalized projections of the fitted variable over the eigenvectors of the covariance matrix as , i.e. . the solution is then given by the following formula , which is convenient if we have to perform computations for several values of the tykhonov parameter : when increases , the parameters tend to zero and so does the fitted dependent variable .protocols for ridge regression typically address this problem by avoiding to penalize the offset of the fit , and choosing this offset in such a way that the fit is unbiased , i.e. that the average of the fit is equal to the average of .nevertheless , this procedure modifies the relationship between the explanatory variables .in particular , in the present case , the offset has to be interpreted as the component of the fit due to translations .increasing the offset would have the effect of artificially increasing the contribution of translations , which would then be treated differently from the other degrees of freedom . 
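a compact python sketch of this spectral form of the tychonov solution is given below; it assumes that the normalized predictor variables are collected column-wise in a matrix x and that the covariance matrix is x^t x / n, so the conventions may differ from the text's normalizations by constant factors.

```python
import numpy as np

def ridge_via_eigendecomposition(X, y, ridge_lambdas):
    """tychonov-regularized fit a(lambda) = (C + lambda I)^{-1} X^T y / n,
    computed once from the eigen-decomposition of the covariance matrix C so
    that many values of the ridge parameter can be scanned cheaply.

    X is the (n, p) matrix of normalized predictors and y the normalized
    dependent variable; returns one parameter vector per value in
    `ridge_lambdas` (lambda = 0 recovers ordinary least squares)."""
    n = X.shape[0]
    C = X.T @ X / n                      # covariance matrix of the predictors
    evals, evecs = np.linalg.eigh(C)     # real, positive eigenvalues
    proj = evecs.T @ (X.T @ y) / n       # projections on the eigenvectors
    return [evecs @ (proj / (evals + lam)) for lam in ridge_lambdas]
```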
to ensure that the fitted dependent variable is correctly scaled with respect to , while still considering translational motions similarly to other degrees of freedom , we modify the ridge regression protocol so as to optimize the scale of the fit parameters .more precisely , we multiply all parameters by a constant scalar , to obtain the rescaled parameters .this transformation does not modify the physical interpretation of the fit .it is easy to see that the optimally rescaled parameters have to satisfy the condition , with in order to keep the analytic treatibility , we impose this constraint on the scale via a new lagrange multiplier , so that the rescaled ridge regression is still formulated as the minimization of a quadratic function of the parameters : + \left(\mathbf{y},\mathbf{y}\right)\ , , \nonumber\end{aligned}\ ] ] with .the non - scaled fit can be obtained as a particular case by setting , which implies . since the term is constant ,minimizing the objective function is equivalent to minimizing , and the solution of this problem is thus given by , that is the proportionality factor must fulfill ( eq.([eq : scale ] ) ) .we find note that we defined the penalisation term as instead of .this adjustment is of course not necessary , but it is convenient as it allows the rescaled solution to be proportional to the non - scaled solution obtained at the same value of .the interplay of the constraints imposed by the two lagrange multipliers implies that , contrary to , the rescaled fit parameters do not tend to zero when increases . limit values of the parameters are independent of the correlation matrix , except for their scale that depends on the eigenvalues .therefore , when increases , the information on the correlations between the predictor variables is progressively lost , and their relative weights in the fit become more and more strongly determined by their individual correlations with the dependent variable .there are alternative ways to formulate the rescaled ridge regression problem .in particular , instead of the constraint on the norm of the parameters , we may choose to impose a constraint on the euclidian distance between the parameters and adequately chosen reference values of these parameters , : provided that the reference parameters are chosen as , the minimization with respect to of the new objective function + \left(\mathbf{y},\mathbf{y}\right)\ , , \nonumber\end{aligned}\ ] ] yields , which is exactly the same result as eq.([eq : asca ] ) above .note that the previous formulation of the objective function , in eq.([eq : scaledpr ] ) , is retrieved by setting .the parameter that enforces the scale is now note that the value of the scale parameter is still determined by eq.([eq : nu ] ) , but we are free to choose the multiplier and the scale of the reference parameters as it is most convenient .a simple and interesting choice is to set independent of . to recover the correct infinite limit , it must hold , so that , and the term vanishes in the limit of infinite .this gives a more straightforward interpretation to the penalisation term in rescaled ridge regression , i.e. 
the error can not be minimized at the cost of having parameters values too different from those that would be obtained if the predictor variables were not correlated to each other .another possibility is to set as constant .again , the infinite limit requires and .since , we do not have to impose any constraint on the scale , but the optimal scale is automatically imposed by the chosen scale of the reference paraters . in that case , the reference parameters are not constant but depend on via .there is also a direct correspondence with the non - scaled ridge regression since , in that case , both and , and . in view of addressing the problem of the optimal choice of , we note that there is a formal analogy between the function that has to be minimized and a free energy , where the ridge parameter plays the role of the temperature .consider a discrete physical system that can access microstates .the boltzmann distribution in statistical mechanics is given by , where is the probability of microstate and its energy , is the boltzmann constant , and the absolute temperature .it can be formally described as the probability distribution that maximizes the entropy for given average energy or , equivalently , minimizes the energy for given entropy .the constraint on the entropy is fixed by the lagrange multiplier , and the normalization condition on the probabilities , , is imposed through another lagrange multiplier . the objective function can be put in the form , where is the average energy and is the entropy . for the sake of the analogy , we adopt the equivalent objective function , where is the uniform distribution in the space of the microstates .the term does not modify the result of the constrained minimization .the term is proportional to the kullback - leibler divergence ( ) between the boltzmann distribution and the reference distribution that corresponds to the infinite temperature limit .thus , we can write , and interpret the boltzmann distibution as the distribution that minimizes the energy subject to the normalization constraint and the constraint on the kl divergence from the uniform distribution . now consider the ridge regression problem .it consists in determining the values of the parameters that minimize the error of the fit , which is analogous to an energy , under a constraint on the scale of the parameters ( eq.([eq : scale ] ) ) , which is analogous to the normalization condition on the probabilities , and a constraint on the divergence from reference parameters ( eq.([eq : f ] ) ) , which is analogous to the divergence from the uniform distribution .this formal analogy suggests that the ridge parameter ( or , with our definition of the rescaled ridge regression problem in eq.([eq : scaled ] ) ) plays a role analogous to that of the temperature in a statistical mechanical system .it is therefore interesting to compare the behaviour of the two systems in the zero temperature and in the infinite temperature limit . in the zero temperature limit, only the microstate of minimum energy contributes to the boltzmann distribution of a thermodynamic system .we can interpret this state as the state dominated by the correlations between degrees of freedom embodied in the energy function . in the ridge regression context , the contribution to the fit of each principal component ( i.e. 
each eigenvector of the correlation matrix ) is weighted by the factor ( eqs.([eq : a_spectral],[eq : asca ] ) ) , so that the relative contribution of the eigenvector corresponding to the minimum eigenvalue is maximal when .the similarity with a thermodynamic system is particularly striking if as , in that case , the eigenvector is the only one that significantly contributes to the regression at . on the other hand , in the infinite temperature limit , the probabilities tend to the reference values , i.e. the distribution no longer depends on the energies and all microstates are equally populated .similarly , when in ridge regression , the parameters tend to their reference values ( in non - scaled regression , and if we set or in rescaled regression , eq.([eq : ascinf ] ) ) , which no longer depend on the correlation matrix , except for the scaling factor .furthermore , the equipopulation of the thermodynamic microstates finds an interesting correspondance in the fact that the factors weighting the contributions of the eigenvectors to the fit are all equal when . to further analyze this analogy, we can compute the derivative of the error of the fit with respect to , which is equivalent to the specific heat at constant volume , >0\ , .\label{eq : cv}\ ] ] the cauchy - schwartz inequality allows to prove that is positive for , which means that the error of the fit increases with for , as is intuitive since imposes a constraint that is fulfilled at the cost of increasing the error .thus , the positivity of for provides additional support to the thermodynamic analogy .we now propose two criteria for choosing suitable tykhonov parameters inspired by the above statistical mechanical analogy .a good value of should provide a satisfactory trade - off between minimizing the error and reducing overfitting . in the small phase the error is small but is large andoverfitting is the main problem , while in the large phase the contrary holds .we conjecture that , if these phases are separated by a second order phase transition with diverging specific heat in the limit of infinite variables , we may observe a peak of the specific heat for the actual number of variables .thus , we define as the value of at which the specific heat has a maximum .there is only one such value , which can be numerically determined by maximizing the specific heat eq.([eq : cv ] ) .the second criterion that we propose is based on the `` entropic '' contribution to the free energy , .this quantity is equal to zero both at , where all the information of the correlation matrix is retained , and for , where and thus , and the information of the correlation matrix is lost ( except for the scaling factor ) . in between , the penalty reaches a maximum , and we hypothesize that this maximum corresponds to a possibly optimal choice of the ridge parameter .we call `` maximum penalty ( mp ) fit '' ridge regression with this choice of .we consider the case of the rescaled ridge regression , with the reference parameters chosen equal to the parameters obtained in the limit , i.e. 
and .using eqs.([eq : asca]-[eq : f ] ) , the mp ridge parameter is then defined as the value of that maximizes the term since , the factor can be expressed as follows : , with .note that the dependency of on actually has only a very minor effect on the determination of .therefore , a possible approximation is to simply maximize , considering as a constant .we have verified that it yields highly similar , although slightly reduced , values of .we tested two additional alternative definitions of , based on different choices of the reference parameters , by setting either or instead of .we found that these choices produce results that are generally similar , although a bit poorer on average , than those based on the above definition . more detail , and comparative plots presenting the performances of these alternative definitions are given in supp .text s1 and supp .the new criteria presented above are compared with two well - known schemes for selecting the tykhonov parameter , namely the range risk ( rr ) optimization and the generalized cross validation .the gcv criterion is based on the minimization of the error of the fit with a penalty on the effective number of fitted parameters , which results in minimizing the following quantity : ^ 2 } \label{eq : gcv}\ ] ] where is the error of the fit , is the number of samples and is the number of parameters .although gcv may be defined for negative as well , we only considered positive since the objective function diverges at , which causes numerical instabilities , and since we observed that the performances of the gcv fit generally worsen if negative are allowed .the rr criterion leads to a related formula .nevertheless , we found that these two schemes produce the same results up to numerical precision in the examined case , and we discuss only gcv . finally , we consider the traditional two - parameters fit of b - factors , i.e. .we call this fit the `` no - rotation '' ( norot ) fit , since it neglects all variables associated with rigid rotations , while the intercept of the fit can be interpreted as the contribution of rigid - body translations .we also consider the one - parameter fit , which neglects all rigid - body degrees of freedom ( hence we refer to it as the norigid fit ) .this fit has the advantage that the fitted force constant is guaranteed to be positive if the covariance between the experimental b - factors and those predicted through the enm is positive , which is always the case in our data sets .the fitting procedures described above were applied to 35 different protein datasets .the x - ray dataset consists of 376 monomeric proteins with experimentally determined b - factors , and corresponds to the real case application .the nmr dataset consists of 183 monomeric proteins for which pseudo b - factors were computed from the structural variability within the nmr ensemble . 
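before turning to the results, the following python sketch illustrates how the three selection rules discussed above (gcv, the specific-heat maximum and the maximum-penalty criterion) can be scanned over a grid of ridge parameters from a single eigen-decomposition. since the normalizations of the rescaling factor and of the penalty term are partly elided in this rendering, the rescaling condition (y, x a) = (x a, x a) and the penalty expression used below are assumptions that follow the text only qualitatively.

```python
import numpy as np

def select_ridge_parameter(X, y, lambdas):
    """scan the tykhonov parameter and evaluate three selection rules:
    gcv (standard ridge effective-dof formula), a 'specific heat' rule
    (maximum of d(error)/d(lambda)) and a 'maximum penalty' rule."""
    lambdas = np.asarray(lambdas, dtype=float)
    n, p = X.shape
    C = X.T @ X / n
    evals, evecs = np.linalg.eigh(C)
    proj = evecs.T @ (X.T @ y) / n
    # rescaled infinite-lambda reference parameters (direction of X^T y)
    a_dir_inf = evecs @ proj
    Xa_inf = X @ a_dir_inf
    a_inf = (y @ Xa_inf) / (Xa_inf @ Xa_inf) * a_dir_inf

    errors, gcv, penalty = [], [], []
    for lam in lambdas:
        a = evecs @ (proj / (evals + lam))
        Xa = X @ a
        nu = (y @ Xa) / (Xa @ Xa)                # assumed rescaling factor
        a_sc = nu * a
        resid = y - X @ a_sc
        err = float(resid @ resid / n)
        errors.append(err)
        dof = float(np.sum(evals / (evals + lam)))
        gcv.append(err / (1.0 - dof / n) ** 2)
        penalty.append(lam * float(np.sum((a_sc - a_inf) ** 2)))
    heat = np.gradient(np.asarray(errors), lambdas)   # proxy for d(error)/d(lambda)
    return {"gcv": lambdas[int(np.argmin(gcv))],
            "cv":  lambdas[int(np.argmax(heat))],
            "mp":  lambdas[int(np.argmax(penalty))]}
```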
in this case, the superposition of the structures in each ensemble ensures that all rigid - body motions are excluded from the thermal fluctuations , and that the pseudo b - factors are representative of internal motions only .in addition , we created sets of simulated data , by adding randomly generated rotations and translations to the internal fluctuations present in each nmr ensemble , in such a way that the average fraction of motion due to internal , rotational , and translational degrees of freedom is , and , respectively , in each set ( see methods ) .an example of the atomic fluctuations for a protein in the nmr dataset and in simulated sets with rigid - body motions of increasing amplitude is given in fig.[fig:1de1 ] .interestingly , even though the overall shape of the ( pseudo ) b - factor profiles remain very similar with low to medium amplitudes of added rigid - body motions , critical alterations such as the formation of additional peaks can be observed with larger amplitudes of added rotations ( e.g. with in fig.[fig:1de1]c ) . , and .for clarity , only five structures in each ensemble are shown .( c ) the pseudo b - factors for the same protein , as of function of the position in the sequence , in the nmr ensemble 1de1 ( ) , and in the simulated sets with different values of , with . ] for each protein , the predicted fluctuations due to internal motions were obtained with the tnm program , and were used to estimate the force constants and the fractions of degrees of freedom , , and , for each protein in each dataset ( see methods ) .besides the force constant , two parameters define the force field of the tnm , the exponent and the cutoff .we adopt here the parameters and , as these values produce optimal predictions of interatomic distance fluctuations ( unpublished ) .we briefly discuss later the impact of choosing different values of these parameters .the average values of the ridge parameter and of the error of the fit are reported on fig.[fig : lambda ] , for different datasets and fitting procedures .more precisely , we compare the ordinary least square regression ( ols ) with the rescaled ridge regression using different regularization criteria : generalized cross validation ( gcv ) , specific - heat maximum ( cv ) , and maximum penalty ( mp ) .furthermore , the two - parameters fit that neglects rigid - body rotations is referred to as norot , and the one - parameter fit that neglects both rigid - body rotations and translations as norigid . on selected sets of simulated data ( left ) and in the x - ray dataset( right ) .bottom : relative error of the fit ( eq . [ eq : errorfit ] ) .the error bars correspond to the standard deviation over all proteins in each set . ]overall , the comparison of the respective behaviour of these different fits is very consistent across all datasets , which does support the validity of the nmr data with added rigid - body motions as relevant simulated test sets .the gcv criterion generates very small values of the ridge parameter , e.g. on average in the x - ray dataset ( fig.[fig : lambda ] , top ) .this is due to the fact that the number of data points ( i.e. the number of atoms ) is much larger than the number of parameters , so that the effective number of parameters ( eq . [ eq : gcv ] ) depends very weakly on and is always approximately equal to .the results produced by the gcv fit are therefore very close to those obtained by minimizing the error of the fit without any regularization , as in ols regression ( ) . 
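returning for a moment to the construction of the simulated sets described at the beginning of this part: the actual construction relies on random rigid-body motions added to the nmr ensembles (see methods), but the bookkeeping of the fractions can be reproduced schematically by mixing per-atom fluctuation profiles as in the python sketch below, in which the rescaling of each profile by its mean is an assumption.

```python
import numpy as np

def mix_bfactor_profiles(b_int, b_trans, b_rot, f_i, f_t, f_r):
    """schematic construction of simulated per-atom b-factors in which the
    average fractions of motion due to internal, translational and
    rotational degrees of freedom are (f_i, f_t, f_r)."""
    assert abs(f_i + f_t + f_r - 1.0) < 1e-9
    def rescaled(profile, frac):
        profile = np.asarray(profile, dtype=float)
        return frac * profile / profile.mean() if frac > 0 else np.zeros_like(profile)
    return rescaled(b_int, f_i) + rescaled(b_trans, f_t) + rescaled(b_rot, f_r)
```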
the new criteria that we introduce here , cv and mp , both yield substantially larger values of , on all examined datasets .for example , in the x - ray dataset , the average is equal to with the cv criterion , and to with the mp criterion . with larger values of ,the constraint imposed on the parameters of the fit becomes stronger , resulting in somewhat larger fitting errors ( fig.[fig : lambda ] , bottom ) .however , even with the mp fit that imposes the strongest regularization , the error remains lower than with the common two - parameters fit ( norot ) . on averageover all simulated sets , the relative error of the fitted b - factors decreases from with the norot fit , to with the mp fit , and with the ols fit .the corresponding values in the x - ray dataset are with the norot fit , with the mp fit , and with the ols fit . in the simulated datasets ,the contribution of internal degrees of freedom to the fluctuations of the atomic coordinates is known exactly .these sets give thus the possibility to assess and compare the quality of the various fitting schemes and regularization criteria .the error on internal motions ( eq . [ eq : errorint ] ) reflects the ability to accurately retrieve the fluctuations due to internal degrees of freedom from a fit of b - factor data that may also include contributions from rigid - body degrees of freedom .this error arises in part from the imperfections of the elastic network model used to make the predictions , and in part from the presence of `` noise '' in the b - factor data , in the form of fluctuations due to rigid - body motions .since we are here interested in the latter source of error , we take as reference the lowest possible value of this error ( ) that is obtained with the norigid fit , which does not account for rigid - body motions , on the nmr dataset with , which does not contain fluctuations due to rigid - body motions .the performances of the various fits as a function of the internal fraction ( with ) are given in fig.[fig : errint ] .with the norigid fit , all fluctuations are interpreted as being due to internal motions , and the error thus rapidly increases as the internal fraction drops . the norot fit , which accounts for internal motions and translations , but not for rotations , presents a quite similar behaviour except that it is more robust to the addition of rigid - body fluctuations .it appears therefore as optimal in the presence of rigid - body motions of medium amplitude ( i.e. in the range ) .[ eq : errorint ] ) is given as a function of the fraction of internal motions , in the simulated sets with , for different types of fit .the lowest possible value of the error , , is obtained with the norigid fit on the nmr dataset ( ) . ]if there are rigid - body fluctuations of larger amplitude ( ) , the norot fit is no longer sufficient , and accounting for all degrees of freedom in the fit becomes a critical necessity . for that purpose ,the cv fit and the mp fit , based on the new criteria that we introduced , both appear clearly superior to the ols and the gcv fit .the cv fit outperforms the ols and gcv fits in all of the datasets that we examined , with an error on internal motions that remains relatively stable across the whole range of ( ) .the error of the mp fit is more dependent on the internal fraction , but it still outperforms the ols and gcv in most datasets , except for very low amplitudes of added rigid - body fluctuations ( ) . 
in the range , the mp criterion stands out as the optimal choice , yielding a significantly lower error than all other fits .in particular , at , the error on internal motions is as low as with the mp fit , which is quite impressive considering that the minimum error possibly achievable is , and that the addressed problem is quite challenging since the amplitude of the `` noise '' ( rigid - body fluctuations ) is here four times larger than that of the `` signal '' ( internal fluctuations ) . for comparison , at , without regularization ( ols ) , and with the gcv criterion .we also examined the error on the simulated sets with either or .the results are similar to those obtained with , except for the norot fit ( supp .as it could be expected , the norot fit performs much better when there are added translations but no rotations , and much worse when there are added rotations but no translations .in addition , a related performance measure can be obtained by considering as a proxy of the real force constant the force constant determined by the norigid fit on the nmr set without rigid - body motions , . in each simulated set , the comparison between and the force constants estimated from the various fitting procedures , leads to the definition of the error on the force constant , ( eq . [ eq : errorkappa ] ) .for all fits , the behaviour of as a function of is strongly related to that of , and leads to similar conclusions ( supp .s3 ) . furthermore , to investigate more systematically the impact of using different levels of regularization , we examined the parametrization , where is the maximum eigenvalue of the normalized covariance matrix , and is a factor that was exponentially increased from to .we can see in fig.[fig : errint_2 ] that for each simulated set , there is an optimal value of , corresponding to a minimum of the error . both the depth and the position of this minimum depend on the fractions of , , and that define each simulated set .yet , in all cases , either the mp or cv criterion , or both , yield an average value and error that are very similar to those obtained at the minimum .these results suggest that , for the current application , the new criteria that we introduced not only outperform the gcv criterion , but are also generally close to optimal in terms of selecting the right level of regularization .[ eq : errorint ] ) is given as a function of the average ridge parameter for different types of fit .each subplot corresponds to a different simulated dataset , with fractions of degree of freedom : and , , or . ] as detailed above , the analysis of the results obtained on the simulated datasets have shown that the performances of the different fitting procedures depend to some extent on the fractions of motion due to internal degrees of freedom ( ) , rigid - body translations ( ) , and rigid - body rotations ( ) .notably , if the contribution of rigid - body motions remains small or medium , accounting for all degrees of freedom may not be necessary , and the use of the norigid or norot fit can be appropriate . on the contrary ,if rigid - body motions account for a sufficiently high fraction of the fluctuations , then a full fit including contributions from both translations and rotations is necessary , and the regularization with the mp criterion yields the best performances .hence , an important question that arises concerns the actual values of the , , and fractions in the b - factor data from x - ray experiments . 
to answer that question, we used the simulated data to evaluate the ability of the different fits to accurately estimate the , , fractions .the left panel of fig.[fig : itr ] shows the average value ( over all proteins in a simulated set ) of the fitted internal fraction , , as a function of the actual value of in the set .a very strong linear correlation is observed between and , for all fits except the norigid fit , which only accounts for internal motions and thus always yields .the norot fit tends to systematically overestimate the importance of internal motions , i.e. , since fluctuations due to rigid - body rotations are not accounted for and mostly interpreted as being due to internal degrees of freedom . on the other hand , in fits with , the contribution of internal motions is underestimated when is large , and overestimated when is small ( with a threshold at for all types of fit ) .this bias increases with , and is thus most visible in the mp fit , and practically non - existent in the ols and gcv fits . as a function of the real internal fraction .the dashed line corresponds to .( right ) root mean square error of the fitted internal fraction , rmse( ) ( eq .[ eq : rmsei ] ) , as a function of . ] the right panel of fig.[fig : itr ] shows the root mean square error of the fitted internal fraction , rmse , as a function of .interestingly , despite the fact that the mp fit is subject to the strongest regularization , and thus affected by the strongest bias , it does yield the lowest error on the estimation of the internal fraction for individual proteins ( at least when , otherwise the norigid fit is superior ) .this is explained by the fact that the bias is accompanied by a sizeable reduction of the variance ( between proteins ) of the fitted fraction . on the contrary ,the ols and gcv fits are affected by a large variance of the fitted internal fraction .therefore , even though there is little to no systematic bias , the error on is actually much larger with these two fits .the norot and cv fits present intermediate performances .similar results hold for the other types of degrees of freedom , and ( supp .s4 ) . in a number of cases ,the parameters of the fit correspond to negative values of the fraction of motion due to either internal , translational or rotational degrees of freedom , which is unphysical ( in particular , means a negative force constant ) . as shown on fig.[fig : negitr ] , this problem is particularly serious when the internal fraction is large .for example , at , unphysical parameters are obtained for as many as 63% of the proteins , with the ols fit . here again , the benefits of regularization are very apparent : e.g. at , the number of proteins with negative , , or fractions is reduced from 10% with the ols fit , to 3% with the cv fit , and 0% with the mp fit .similar results are obtained on the dataset of crystallographic b - factors . in this case ,unphysical parameters are derived from the ols fit for about 15% of the proteins , but this number drops to 1% with the mp fit . , , or ratio is given for the different fits , on selected simulated datasets ( left ) , and on the x - ray dataset ( right ) . ] in summary , the analysis of the results obtained on the simulated data indicates that the mp fit produces the best estimation of the , , fractions , for individual proteins ( unless , in which case the norigid fit may be preferable ) .this is achieved at the price of a systematic bias , which is a consequence of the regularization of the parameters . 
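The bookkeeping behind the fitted fractions and the detection of unphysical parameter sets can be sketched as follows. The fraction definition used here (each group's summed contribution to the fitted B-factors divided by the total fit) and the grouping of the design columns are assumptions made for illustration; the paper's exact expressions are given in its methods. The hypothetical split into one internal column, one translational (constant) column and nine rotation-related columns is chosen to be consistent with the two-parameter norot fit mentioned above.

```python
import numpy as np

def fitted_fractions(X, beta, groups):
    """Split the fitted B-factors into contributions of predictor groups.

    groups maps a label ('internal', 'translation', 'rotation') to column
    indices of X. The fraction is computed as the group's summed fitted
    contribution divided by the total fit (an illustrative definition).
    """
    total = float(np.sum(X @ beta))
    return {label: float(np.sum(X[:, cols] @ beta[cols]) / total)
            for label, cols in groups.items()}

def is_unphysical(fractions):
    """A negative fraction (e.g. a negative force constant) is unphysical."""
    return any(v < 0.0 for v in fractions.values())

# hypothetical column grouping: 1 internal + 1 translational + 9 rotational columns
groups = {"internal": [0], "translation": [1], "rotation": list(range(2, 11))}
```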
on the contrary ,the ols and gcv fits are characterized by a large rmse on the fitted , , fractions , and moreover often generate unphysical values of the parameters .however , these fits are not biased , i.e. on average over a sufficiently large set of proteins , the estimated fractions of motion due to the different types of degrees of freedom , , , and , are remarkably accurate ( fig . [fig : itr ] , left ) .thus , although the ols and gcv fits may not be well suited to any real - case application , this absence of bias is an interesting feature that can be exploited to evaluate the average contributions of internal and rigid - body motions in crystallographic b - factors .the results for experimental b - factors measured in protein crystals are presented in fig.[fig : dof ] .the ols fit , which is expected to give the least biased estimates , yields average values of for internal , for translational , and for rotational degrees of freedom .the regularized fits based on the cv and mp criteria generate very similar values of the fraction of internal motions , which is consistent with the fact that the bias on is minimal when ( fig.[fig : itr ] , left ) .these fits do however somewhat underestimate , on average , the contribution of rigid - body translations and overestimate the contribution of rigid - body rotations ( e.g. and with the mp fit ) . in any case , these results strongly suggest that the contribution of rigid - body motions is quite important in crystallographic b - factors , with internal motions accounting for 20% or less of the measured atomic fluctuations . in this range, the commonly used two - parameters norot fit fails to provide satisfactory results , and the mp fit appears as a much preferable alternative ( see e.g. fig.[fig : errint ] ) .one of the main motivations of this work was to assess how the evaluation of the force constant from a fit of crystallographic b - factors can be improved by properly accounting for fluctuations due to rigid - body degrees of freedom .as detailed above , this assessment can be performed rigorously with the simulated datasets , in which the relative contributions of internal and rigid - body motions are known _ a priori_.such knowledge is not available in the x - ray dataset , but it is however possible to evaluate the variability of the force constants estimated by the different fitting procedures across all proteins in the set .we assess this variability by measuring the standard deviation of the logarithm of the force constant , , plotted in fig.[fig : kappa ] .we consider the logarithm because its fluctuations are better behaved , and it allows to eliminate the influence of multiplicative scale factors ., is given for selected sets of simulated data ( left ) and for the x - ray dataset ( right ) . in each set ,the proteins for which at least one of the fits yielded a negative value of were omitted , for all fits .the dashed line corresponds to the value of obtained with the norigid fit on the nmr dataset ( ) .a value of corresponds to a multiplicative spread of , i.e. for most proteins , the estimated force constant is smaller than times and larger that times the geometric mean . 
] in the simulated sets , an important part of the variability is due to the nature of the nmr data .typically , the extent of structural variability within an nmr ensemble is determined by the number and quality of the interatomic distance constraints extracted from the experiment , which may depend on a number of factors not directly related to the dynamical properties of the macromolecule . in consequence ,the norigid fit applied to the nmr dataset without added rigid - body motions ( ) generates values of the force constants that are already considerably spread out , with .the addition of rigid - body motions tends to increase this variability , although shows a relatively limited dependence on the internal fraction ( fig.[fig : kappa ] , left ) .this is not overly surprising since , in each simulated set , the amplitude of added rigid - body motions is identical for all proteins .the largest variability amongst force constants evaluated for different proteins occurs with the ols and gcv fits .these fits account for rigid - body motions but are not ( or only slightly ) regularized , and the collinearity between predictor variables leads thus , in a number of cases , to the determination of either negative or very large values of the force constant . increasing tends to mitigate this problem , and consequently reduces the variability ( supp .the mp fit is thus characterized by an intermediate level of variability of the force constants , similar to that of the norot fit .it is however important to recognize that a low variability does not imply a correct estimation of .for example , the norigid and norot fits yield a relatively low variability but systematically underestimate the force constant , since fluctuations due to rigid - body rotations are interpreted as resulting from internal degrees of freedom .the results are essentially similar in the x - ray dataset ( fig.[fig : kappa ] , right ) . here again , we expect some intrinsic variability of the estimated force constants , due to a variety of phenomena that may influence the b - factors measured in protein crystals , such as crystal packing and static disorder .it is therefore unlikely that any fitting procedure could achieve a complete elimination of the spread of the force constants estimated for different proteins . with , the mp fit does however produce a substantial reduction of the variability , in comparison with both the ols fit ( ) , in which case we can interpret the excess variability as due to overfitting , and the commonly used norot fit ( ) . in the elastic network model , the stiffness of the spring associated to a given pair of residues is typically defined as a function of the spatial distance separating these two residues .we adopted here a common expression of the force constant , if , and otherwise , where is the minimal interatomic distance between residues and , and ( ) is a reference distance ( see methods ) .the results presented above are related to the determination of the factor , with chosen values of the distance threshold , and of the exponent . in order to analyze the dependence of the force constant on and and to ensure the robustness of our conclusions , we investigated the influence of these two parameters of the enm on the estimated force constants . 
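Since the distance dependence of the springs is central to this discussion, the pairwise force-constant matrix can be sketched as follows. The sketch uses a single representative atom per residue and placeholder values of the reference distance, exponent and cutoff, whereas the paper uses the minimal heavy-atom distance between residues and its own optimized parameter values.

```python
import numpy as np

def spring_constants(coords, kappa, r0=3.5, exponent=6.0, cutoff=4.5):
    """Distance-dependent ENM springs: k_ij = kappa*(r0/d_ij)**exponent if d_ij < cutoff.

    coords: (n_residues, 3) positions of one representative atom per residue
    (a simplification; the paper uses the minimal interatomic distance between
    residues). r0, exponent and cutoff are placeholder values.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        k = kappa * (r0 / d) ** exponent
    k[d >= cutoff] = 0.0
    np.fill_diagonal(k, 0.0)   # no self-interaction
    return k

# toy usage on random coordinates (angstrom-like scale, purely illustrative)
rng = np.random.default_rng(2)
k = spring_constants(rng.uniform(0.0, 30.0, size=(50, 3)), kappa=1.0)
```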
in practice , we varied from to , and from 0 to 8 , and applied the resulting models to the x - ray dataset .the average and the standard deviation of the logarithm of the fitted force constants are given in fig.[fig : cut - off ] , for the various fits , with different values of and . , eliminating proteins for which is negative , are plotted versus the distance cut - off for exponent ( left ) and versus the exponent for distance cut - off (right ) . ]as could be expected , the average force constant tends to decrease when the distance cut - off increases ( fig.[fig : cut - off ] , top left ) .indeed , if the number of interacting pairs of residues increases due to a larger cut - off , the force constant associated with each pair must decrease to maintain a similar amplitude of the atomic fluctuations .we also measured the correlation coefficients between the force constants determined at different values of and , over all proteins of the x - ray data set , for each type of fit .the force constants obtained for the different proteins , with two different sets of parameters and , are always highly correlated to each other ( correlation coefficient larger than 0.98 ) as long as .drastic changes do however occur when the cut - off distance is decreased below 3.5 : forces constants determined with tend to present vanishing correlations with those determined using different values of and , irrespective of the type of fit .these observations are consistent with the fact that if the distance cut - off is too small , crucial interactions are ignored , which can result in a disruption of the overall integrity of the structure , and have dramatic effects on the predicted dynamics . in contrast , the force constant depends much more weakly on the exponent ( fig.[fig : cut - off ] , right ) .this is due to the choice of the reference distance . indeed ,when increases , increases for pairs of residues with , and decreases for pairs or residues with .these two effects appear to compensate each other fairly well , which explains the small impact of on the scaling factor .importantly , all types of fits show the same qualitative behavior , and the differences between the fits are mostly independent of the choice of the and parameters , which demonstrates the robustness of the results presented in the previous sections , where we used and .note however that , even if the parameters and have a relatively limited effect on the force constant , they may significantly affect the normal modes of motion predicted by the enm , and thus the overall performances of the model .crystallographic b - factors are commonly used to train and validate computational models of protein dynamics .there are however a number of possible shortcomings to this approach .in particular , rigid - body rotations are often neglected , even though several studies concluded that the occurrence of such motions is a major determinant of experimental b - factors .our results support these conclusions , and indicate that the contribution of internal motions is lower than 20% , on average .a systematic analysis of simulated sets of pseudo b - factors , characterized by variable amplitudes of rigid - body fluctuations , suggests that these estimations are subject to very little bias .these results are also consistent with a study by halle and a follow - up by li and brschweiler , who showed that b - factors in x - ray protein structures are well predicted by the number of contacts of each residue . 
in light of the importance of rigid - body motions to b- factors , this observation may be explained by the negative correlation between the number of contacts , which tends to be larger for buried residues , and the atomic fluctuations due to rigid - body rotations , which more severely affect surface residues as they are further away from the center of mass . on the other hand, the number of contacts also has a strong influence on the fluctuations due to internal degrees of freedom , predicted for example by the enm .this entails a high level of collinearity between explanatory variables and underlines the fact that , to determine the scale of the enm force constant via a fit of b - factors , it is not only important to properly account for all rigid - body degrees of freedom , but also to carefully regularize the fit in order to reduce the risk of overfitting .for that purpose , we introduced two novel criteria for determining the ridge parameter .the definition of these criteria was motivated by a strong formal analogy between ridge regression and statistical mechanics , where plays the role of the temperature .our results demonstrate that the mp criterion is an almost optimal way of choosing , at least for the problem of fitting force constants from b - factors . for simulated b - factors with an internal fraction close to ( the range that we expect to find in x - ray structures ), the mp fit was shown to yield the minimum root mean square error ( rmse ) on the estimation of the fluctuations due to internal degrees of freedom , the minimum rmse on the logarithm of the fitted force constant , and the minimum rmse on the estimated fractions of internal , translational and rotational motions . in contrast , the commonly used gcv criterion produces a 40% larger value of the rmse on internal fluctuations .furthermore , in the x - ray dataset , the gcv fit generates unphysical values of the parameters ( e.g. negative force constants ) for 47 out of 376 proteins , while this number is reduced to 5 with the mp fit .the gcv criterion also induces an increase of the across - protein variability of the logarithm of the force constants , from with the mp criterion to .the poor performances of the gcv fit indicate that the ridge parameter determined by this criterion is too small and does not provide a sufficient regularization of the fit of b - factors .this is most likely due to the fact that overfitting occurs here because of the high level of collinearity between explanatory variables , even though the number of fitted parameters ( ) remains small with respect to the number of data points ( number of residues ) .the cv criterion produces a level of regularization that is intermediate between the gcv and mp fits . despite yielding poorer performances than mp when the internal fraction is close to , and thus when applied to x - ray structures , the cv criterion is consistently superior to gcv over all sets of simulated data , and it outperforms mp when the internal fraction is either large or small . 
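Purely as an illustration of the selection loop, the sketch below scans the ridge parameter and keeps the value at which the penalty term is maximal. This is only one plausible reading of the "maximum penalty" idea: the exact definitions of the MP and CV criteria are given in the methods and are not reproduced in this excerpt, so the code should be read as an illustration of the scan, not as the paper's criterion.

```python
import numpy as np

def max_penalty_lambda(X, y, lams):
    """Keep the ridge parameter at which lambda * ||beta_lambda||^2 is largest.

    This is an assumed, simplified reading of the 'maximum penalty' criterion,
    used only to illustrate the selection loop; the penalty vanishes both for
    lambda -> 0 and lambda -> infinity, so an interior maximum exists.
    """
    p = X.shape[1]
    best_lam, best_pen = None, -np.inf
    for lam in lams:
        beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        penalty = lam * float(beta @ beta)
        if penalty > best_pen:
            best_lam, best_pen = lam, penalty
    return best_lam
```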
in consequence , even though the cv fit appears to be somewhat less well adapted than mp to the present application , it has the advantage of being more robust with respect to the nature of the analysed data , and is thus a good candidate for application in a wide range of situations .note that the analogy between statistical mechanics and ridge regression that we explored here can easily be extended to other types of constrained minimization problems , where the weight of the constraint imposed on the parameters has to be fixed in an optimal way .in particular , in the context of statistical mechanics our analogy evidences the existence of an intrinsic temperature that maximizes the penalty term , where is the temperature and is the entropy , at which the minimization of the energy and the maximization of the entropy are well balanced . in summary ,the present results further confirm the predominance of rigid - body motions in crystallographic b - factors , and underline the importance of accounting for all degrees of freedom via a carefully regularized fit when b - factors are used to scale computational models of protein dynamics .the mp fit stands out as the optimal choice for that particular application .importantly , the potential benefits of the mp and cv criteria are not limited to the fit of b - factor data .although further studies would be necessary to assess the general applicability of these criteria to the regularization of other multi - variable fits , the strong performances displayed here , in comparison with the common gcv approach , suggest that the adoption of these new criteria could be advantageous in various applications. this may be particularly true for problems that bear similarity to the considered case , i.e. when the number of fitted parameters is small enough with respect to the number of data points but the explanatory variables are highly collinear , and/or when restrictions apply to the physically acceptable values of the parameters . on the other hand ,the rescaled variant of ridge regression that we introduced here is readily applicable to any regression problem in which the intercept of the fit has a physical meaning and must be penalised similarly to the other explanatory variables .we examined a test set of 380 non - redundant monomeric proteins whose structure has been solved by x - ray crystallography , with resolution better than 2 , extracted from the top500 dataset used to benchmark the molprobity program . from this set ,we eliminated the proteins with pdb codes 2sns , 1cne , and 2ucz , because the record of b - factors was missing in these proteins ( all recorded b - factors were equal ) , and 1rho , which was the most serious outlier for all the fits and was predicted as hexameric by the software pisa . the 183 structural ensembles in the nmr dataset were selected according to the following criteria : they consist of at least 20 models with identical number of residues ; they correspond to monomeric proteins of at least 50 residues that present at most 30% sequence identity with one another ; they are not listed under the scop classifications `` peptides '' or `` membrane and cell surface proteins '' ; they do not include ligands , dna or rna molecules , chain breaks , or non - natural amino acids ; and they do not contain highly flexible loops or c- or n - terminal tails . 
to enforce the latter criterion ,highly flexible regions were defined as stretches of at least two consecutive residues for which the mean square fluctuations of the c coordinates are larger than 2.5 times the average over all residues .such loops or tails typically correspond to disordered regions , for which the observed fluctuations within the nmr ensemble are not meaningful , and which are usually absent from structures determined by x - ray crystallography .the structures in each nmr ensemble were superposed , so as to ensure the absence of any rigid - body component to the observed displacements of the atomic coordinates .pseudo b - factors , corresponding to thermal fluctuations due solely to internal motions , were then computed for each residue : where is the number of structural models in the ensemble , is the position of the c atom of residue in model of the superposed ensemble , and the averages are taken over all models of the ensemble .on the basis of the nmr dataset , we generated sets of simulated data , by adding rigid - body contributions of controlled amplitude to the thermal fluctuations .for each set , we applied the following procedure to all superposed nmr ensembles . each structure in the ensemble was subjected to a random translation and rotation , and its coordinates were adapted in consequence : the orientations of the rotation and translation vectors and were drawn randomly from a uniform spherical distribution , and their amplitudes from a standard normal distribution .the scalar parameters and give control over the relative importance of internal , rotational , and translational motions . the resulting structural ensemble can be considered as a series of snapshots of a molecule undergoing fluctuations due to both internal and rigid - body motions . for each residue ,the mean square fluctuations were then computed from this ensemble of snapshots , ensuring that the contribution of rigid - body motions is affected by the same kind of noise as for the internal motions . where is the number of atoms .the average fractions of motion due to translational ( ) and rotational ( ) degrees of freedom are defined similarly . for each protein in each set , the parameters and were adjusted so as to reach pre - defined values of , , and . more precisely , simulated sets were build for 11 different values , with either , , or .the relative amplitude of added rigid - body motions , , was thus varied from 11% ( ) to 900% ( ) .note that the structural variability within a superposed nmr ensemble does not perfectly reflect the actual dynamical behaviour of the protein .indeed , it may also be affected by the resolution of the nmr experiment , and the way the structure - building software deals with missing or conflicting distance constraints .still , mean square fluctuations extracted from superposed nmr ensembles have been shown to correlate well with b - factors from x - ray experiments , and with nmr measurements more directly related to protein dynamics . the structural variability within nmr ensembleshas also been successfully used to investigate the behaviour of enms , or to parametrize their force field . in any case , for our purposes , it is not necessary to assume that the collection of structural models within an ensemble gives an accurate picture of the protein s dynamics .we merely consider that each ensemble provides a reasonably realistic example of possible fluctuations due to internal motions , captured with a certain level of noise . 
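The two methods steps just described, computing per-residue pseudo B-factors from a superposed ensemble and perturbing each model with a random rigid-body motion, can be sketched as follows. The 8*pi^2/3 prefactor, the rotation about the center of mass, and the way the two amplitude scalars map onto the target fractions are assumptions made for illustration; only the proportionality to the mean-square fluctuations and the distributions of directions and amplitudes are taken from the text.

```python
import numpy as np

def pseudo_bfactors(ensemble):
    """Pseudo B-factors from a superposed ensemble of shape (n_models, n_atoms, 3).

    Uses the conventional B = (8*pi^2/3) * <|r - <r>|^2>; the prefactor is an
    assumption, as the excerpt only states proportionality to the fluctuations.
    """
    mean = ensemble.mean(axis=0)
    msf = np.mean(np.sum((ensemble - mean) ** 2, axis=-1), axis=0)
    return (8.0 * np.pi ** 2 / 3.0) * msf

def add_rigid_body_motions(ensemble, a_rot, a_trans, rng):
    """Random rotation and translation applied to each model of the ensemble.

    Directions are drawn uniformly on the sphere and amplitudes from a standard
    normal, scaled by a_rot (radians) and a_trans (coordinate units); rotations
    are taken about the center of mass (an assumption). How these scalars are
    tuned to reach the paper's target fractions is not reproduced here.
    """
    out = np.empty_like(ensemble)
    for m, coords in enumerate(ensemble):
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = a_rot * rng.normal()
        center = coords.mean(axis=0)
        x = coords - center
        # Rodrigues' rotation formula about the chosen axis
        rot = (x * np.cos(angle)
               + np.cross(axis, x) * np.sin(angle)
               + np.outer(x @ axis, axis) * (1.0 - np.cos(angle)))
        t_dir = rng.normal(size=3)
        t_dir /= np.linalg.norm(t_dir)
        out[m] = rot + center + a_trans * rng.normal() * t_dir
    return out

# toy usage: a fake 20-model, 60-residue ensemble
rng = np.random.default_rng(3)
ens = rng.normal(size=(20, 60, 3))
b_internal = pseudo_bfactors(ens)
b_simulated = pseudo_bfactors(add_rigid_body_motions(ens, a_rot=0.05, a_trans=0.5, rng=rng))
```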
in this work we adopted the torsional network model ( tnm ) , an enm in torsion angle space that preserves the bond lengths and bond angles of the protein .all protein atoms were considered in the computation of the kinetic energy .the native interactions were identified with pairs of heavy atoms at distance smaller than , which was varied from to .for every pair of residues , only the pair of atoms at smallest distance were regarded as native contacts and joined with a spring with force constant , where is the equilibrium distance between the two atoms , a reference distance , is the force constant obtained from the fit of b - factors and is an exponent that we varied from 0 to 8 .finally , the force constant of torsion angles was fixed at , a value that we had previously tested as almost optimal . where is the number of atoms and the number of predictor variables ( when rigid - body motions are accounted for ) .the are the normalised and dimensionless predictor variables , the corresponding fitted parameters , and is the variance of the dependent variable . in the simulated sets ,the contributions of the internal degrees of freedom are known exactly . a straightforward way to evaluate the quality of the different fitting proceduresis thus to assess their ability to accurately extract the atomic fluctuations due to internal motions , , from datasets with varying amounts of added `` noise '' ( i.e. rigid - body fluctuations ) .the relative error on internal motions is defined as : where the parameter is obtained from the full fit of the atomic fluctuations .a related measurement is obtained by considering the force constant estimated from the nmr set without rigid - body motions , , as a proxy of the real force constant .considering that the force constant is a multiplicative factor in the enm model , the values derived from the fits on simulated sets with added rigid - body motions are compared to as follows : the error measures are computed for each protein independently , and the reported values of and are averaged over all proteins in the considered dataset . since is not defined when , the reported values are averaged over the subset of proteins for which none of the fitting procedures generates a negative value of ( i.e. between 165 ( ) and 176 ( ) proteins , out of 183 , for the different simulated sets ) . in the simulated datasets , the fractions of motion due to internal ( ) , translational ( ) , and rotational ( ) degrees of freedomwere adjusted to specific predefined values .the corresponding contributions of the three types of degrees of freedom can be estimated from the different fits .in particular , the fitted fraction of motion due to internal degrees of freedom , in protein , is : where is the number of proteins in the dataset .the corresponding measures for the translational and rotational degrees of freedom , rmse( ) and rmse( ) , respectively , are defined similarly .dos santos hg , klett j , mndez r , bastolla u. ( 2013 ) characterizing conformation changes in proteins through the torsional elastic response .biochem biophys acta 1834:836 - 46 .rueda m , chacn p , orozco m. ( 2007 ) thorough validation of protein normal mode analysis : a comparative study with essential dynamics .structure 15:565 - 75 .riccardi d , cui q , philips gn jr .( 2010 ) evaluating elastic network models of crystalline biological molecules with temperature factors , correlated motions , and diffuse x - ray scattering .biophys j. 99:2616 - 25 .krissinel e , henrick k. 
( 2007 ) inference of macromolecular assemblies from crystalline state . j mol biol . 372:774 - 797 . davis iw , leaver - fay a , chen vb , block jn , kapral gj , wang x , murray lw , arendall wb iii , snoeyink j , richardson js , richardson dc . ( 2007 ) molprobity : all - atom contacts and structure validation for proteins and nucleic acids . nucleic acids res . 35:w375 - w383 . berjanskii m , wishart ds . ( 2006 ) nmr : prediction of protein flexibility . nat protoc . 1:683 - 8 .
multivariate regression is a widespread computational technique that may give meaningless results if the explanatory variables are too numerous or highly collinear . tikhonov regularization , or ridge regression , is a popular approach to address this issue . we reveal here a formal analogy between ridge regression and statistical mechanics , where the objective function is comparable to a free energy , and the ridge parameter plays the role of temperature . this analogy suggests two new criteria to select a suitable ridge parameter : the specific - heat ( cv ) and the maximum penalty ( mp ) fits . we apply these methods to the calibration of the force constant in elastic network models ( enm ) . this key parameter , which determines the amplitude of the predicted atomic fluctuations , is commonly obtained by fitting crystallographic b - factors . however , rigid - body motions are typically partially neglected in such fits , even though their importance has been repeatedly stressed . considering the full set of rigid - body and internal degrees of freedom bears significant risks of overfitting , due to the strong correlations between explanatory variables , and requires thus careful regularization . using simulated data , we show that ridge regression with the cv or mp criterion markedly reduces the error of the estimated force constant , its across - protein variation , and the number of proteins with unphysical values of the fit parameters , in comparison with popular regularization schemes such as generalized cross - validation . when applied to protein crystals , the new methods are shown to provide a more robust calibration of enm force constants , even though our results indicate that rigid - body motions account on average for more than 80% of the amplitude of b - factors . while mp emerges as the optimal choice for fitting crystallographic b - factors , the cv fit is more robust to the nature of the data , and is thus an interesting candidate for other applications . there is growing interest in the investigation of the intrinsic dynamical properties of proteins in their native state , for these properties play a key role in ensuring proper functional activity , notably for catalysis , allosteric regulation , or molecular recognition . however , despite recent progress , the experimental study of protein dynamics remains rather challenging , and computational methods can thus often provide valuable alternatives . among those , elastic network models ( enm ) are becoming increasingly popular , since they are able to provide detailed analytic predictions of native protein dynamics at a very reasonable computational cost . the enm predictions have been shown to correlate well with experimentally observed conformational changes , and with long molecular dynamics trajectories . one of the advantages of enms is that their force field is derived from the experimentally determined structure of the protein of interest , adopting the principle of minimal frustration , and relies on a very small number of coarse - grained parameters . typically , pairs of residues separated by a spatial distance lower than a certain cut - off value are identified as relevant interactions and connected by elastic springs . the dynamical behaviour of the resulting network is determined by the stiffness of the spring assigned to each pair of residues , which is often expressed as a constant multiplied by a decaying power function of the interresidue distance . 
several previous studies have been focused on the determination of the optimal parameters of this model , i.e. the distance - dependence of the force constant , via the cut - off and the exponential factor , or the influence of the chemical nature of each amino acid type . in the present work , we address a somewhat different question , the evaluation of the overall scale of the force constants . although it does not influence the shape of the normal modes , this parameter is crucial for determining the amplitude of the predicted internal motions of the macromolecule . for that purpose , we adopt here the torsional network model ( tnm ) , an enm in torsion angle space that preserves the bond lengths and bond angles within the protein . in enm studies , it is customary to obtain the scale of the force constants by fitting the predicted thermal displacements of each atom to the experimental mean square fluctuations measured as temperature factors ( b - factors ) in x - ray crystallography . this approach is based on the implicit assumption that the atomic displacements underlying crystallographic b - factors result mainly from motions due to internal degrees of freedom . however , it has been known for a long time that the b - factors are mainly influenced by rigid - body motions taking place in the crystal . on the other hand , crystallised macromolecules experience a different environment than when isolated in solution , and the contacts established with neighbouring molecules in the crystal have also been shown to affect the normal modes of motion . several studies , in which crystal contacts were modelled explicitly , did however come to the conclusion that these contacts only weakly perturbate the internal dynamics of the protein , while the anisotropic temperature factors are dominated by rigid - body motions of the protein . soheilifard et al . , and later lezon , proposed to improve the fit of b - factors by considering the amplitudes of rigid - body motions via six additional fitting parameters . however , the proposed fits are not complete , because ten parameters are necessary for a full representation of the thermal fluctuations due to rigid - body motions . it is in principle straightforward to perform a complete fit of b - factors using 10 free parameters corresponding to rigid - body motions , plus one free parameter that rescales the internal motions predicted by the enm , which is equal to the inverse of the force constant . in general , there is however a high level of collinearity between the variables describing internal motions and rigid - body rotations . a fit of b - factors that fully accounts for rigid - body motions runs therefore a significant risk of overfitting and must be carefully regularized . ridge regression is one of the most common methods for regularizing fits with many variables . it relies heavily on the choice of an adequate value for the ridge parameter but , although several criteria have been proposed for that purpose , there is no consensus on how to systematically determine the optimal value of this parameter . in this paper , we propose two new criteria for choosing the ridge parameter , based on the analogy between ridge regression and statistical mechanics . we call maximum penalty ( mp ) or specific - heat ( cv ) fit the ridge regression performed with either choice of the ridge parameter . 
we show that the mp fit yields close to optimal results when rigid - body motions account for a fraction of the fluctuations close to that estimated for x - ray structures , while the cv fit is more robust to the amplitude of rigid - body fluctuations . in contrast , other widely used approaches , such as the generalized cross - validation ( gcv ) criterion , fail to provide a sufficient level of regularization . the programs for performing the mp and cv fits and computing the force constants within the enm are available on request .
cellular automata ( cas ) are discrete dynamical systems that have been studied theoretically for years due to their architectural simplicity and the wide spectrum of behaviors they are capable of .cas are capable of universal computation and their time evolution can be complex .but many cas show simpler dynamical behaviors such as fixed points and cyclic attractors . herewe study cas that can be said to perform a simple `` computational '' task .one such tasks is the so - called _ majority _ or _ density _task in which a two - state ca is to decide whether the initial state contains more zeros than ones or _vice versa_. in spite of the apparent simplicity of the task , it is difficult for a local system as a ca as it requires a coordination among the cells . as such , it is a perfect paradigm of the phenomenon of _ emergence _ in complex systems .that is , the task solution is an emergent global property of a system of locally interacting agents .indeed , it has been proved that no ca can perform the task perfectly i.e. , for any possible initial binary configuration of states .however , several efficient cas for the density task have been found either by hand or by using heuristic methods , especially evolutionary computation .for a recent review of the work done on the problem in the last ten years see .all previous investigations have empirically shown that finding good cas for the majority task is very hard .in other words , the space of automata that are feasible solutions to the task is a difficult one to search .however , there have been no investigations , to our knowledge , of the reasons that make this particular fitness landscape a difficult one . in this paperwe try to statistically quantify in various ways the degree of difficulty of searching the majority ca landscape .our investigation is a study of the fitness landscape as such , and thus it is ideally independent from the actual heuristics used to search the space provided that they use independent bit mutation as a search operator .however , a second goal of this study is to understand the features a good search technique for this particular problem space should possess .+ the present study follows in the line of previous work by hordijk for another interesting collective ca problem : the synchronization task . the paper proceeds as follows .the next section summarizes definitions and facts about cas and the density task , including previous results obtained in building cas for the task .a description of fitness landscapes and their statistical analysis follows .this is followed by a detailed analysis of the majority problem fitness landscape .next we identify and analyze a particular subspace of the problem search space called the olympus .finally , we present our conclusions and hints to further works and open questions .cas are dynamical systems in which space and time are discrete .a standard ca consists of an array of cells , each of which can be in one of a finite number of possible states , updated synchronously in discrete time steps , according to a local , identical transition rule . herewe will only consider boolean automata for which the cellular state .the regular cellular array ( grid ) is -dimensional , where is used in practice . 
for one - dimensional grids ,a cell is connected to local neighbors ( cells ) on either side where is referred to as the _ radius _ ( thus , each cell has neighbors , including itself ) .transition rule _ contained in each cell is specified in the form of a rule table , with an entry for every possible neighborhood configuration of states .the state of a cell at the next time step is determined by the current states of a surrounding neighborhood of cells .thus , for a linear ca of radius with , the update rule can be written as : where denotes the state of site at time , represents the local transition rule , and is the ca radius . +the term _ configuration _ refers to an assignment of ones and zeros to all the cells at a given time step .it can be described by , where is the lattice size .the cas used here are linear with periodic boundary conditions i.e. , they are topologically rings . + a global update rule can be defined which applies in parallel to all the cells : the global map thus defines the time evolution of the whole ca .+ to visualize the behavior of a ca one can use a two - dimensional space - time diagram , where the horizontal axis depicts the configuration at a certain time and the vertical axis depicts successive time steps , with time increasing down the page ( for example , see figure [ sync - ca ] ) . [ cols="^,^,^ " , ] all gas have _ on average _ better performances than the optima find by human or by genetic programming .as expected , searching in the olympus is useful to find good rules .all the gas have nearly the same average performances . however , standard deviation of olympus is four times larger than standard deviation of centroid. as it is confirmed by the mean distance between individuals , the cga quickly looses diversity ( see fig .[ fig - ga ] ) . on the other hand , neutral gakeep genetic diversity during runs .figure [ fig - threshold ] shows that for the most interesting threshold over , neutral have more runs able to overcome the threshold ( ) than olympus ( ) or centroid ( ) .even though we can not statistically compare the best performance of different gas , the best rule was found by the nga with performance of to be compared to the second best rule .these experimental results using gas confirm that it is easy to find good rules in the olympus landscape . during all the independent runs , we find a lot of different cas with performance over : for oga , for cga and for nga .a low computational effort is needed to obtain such cas .a run takes about 8 hours on pc at 2 ghz . taking the neutrality into account allows to maintain the diversity of the population and increases the chance to reach rules with high performance .cellular automata are capable of universal computation and their time evolution can be complex and unpredictable .we have studied cas that perform the computational majority task .this task is a good example of the phenomenon of emergence in complex systems is . 
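To recap the mechanics studied throughout the paper, the following sketch simulates a radius-r binary ring CA from a rule table and estimates its density-task performance on random initial configurations. The lattice size, the number of trials and the local-majority rule used at the end are illustrative choices only; any published rule (for radius 3, a 128-entry rule table) could be plugged in instead.

```python
import numpy as np

def ca_step(state, rule_table, r):
    """One synchronous update of a binary ring CA of radius r.

    rule_table has 2**(2r+1) entries, indexed by the neighborhood read as a
    binary number with the leftmost neighbor as the most significant bit.
    """
    idx = np.zeros(state.size, dtype=np.int64)
    for offset in range(-r, r + 1):
        idx = (idx << 1) | np.roll(state, -offset)
    return rule_table[idx]

def density_task_performance(rule_table, r, n_cells=149, n_trials=200, max_steps=300, seed=0):
    """Fraction of random initial configurations classified correctly.

    Correct means reaching all ones when the initial density exceeds 1/2 and
    all zeros otherwise (an odd lattice size avoids ties); the densities of
    the initial configurations are drawn uniformly in [0, 1].
    """
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_trials):
        state = (rng.random(n_cells) < rng.random()).astype(np.int64)
        target = int(2 * state.sum() > n_cells)
        for _ in range(max_steps):
            state = ca_step(state, rule_table, r)
        correct += int(np.all(state == target))
    return correct / n_trials

# illustration only: the radius-3 local-majority rule (known not to be an efficient solver)
r = 3
table = np.array([bin(i).count("1") > r for i in range(2 ** (2 * r + 1))], dtype=np.int64)
print(density_task_performance(table, r))
```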
in this paperwe have taken an interest in the reasons that make this particular fitness landscape a difficult one .the first goal was to study the landscape as such , and thus it is ideally independent from the actual heuristics used to search the space .however , a second goal was to understand the features a good search technique for this particular problem space should possess .we have statistically quantified in various ways some features of the landscape and the degree of difficulty of optimizing .the neutrality of the landscape is high , and the neutral network topology is not completely random .the main observation was that the landscape has a considerable number of points with fitness or which means that investigations based on sampling techniques on the whole landscape are unlikely to give good results .in the second part we have studied the landscape _ from the top_. although it has been proved that no ca can perform the task perfectly , six efficient cas for the majority task have been found either by hand or by using heuristic methods , especially evolutionary computation . exploiting similarities between these cas and symmetries in the landscape , we have defined the _ olympus _ landscape as a subspace of the majority problem landscape which is regarded as the `` heavenly home '' of the six _ symmetric of best local optima known _( _ blok _ ) .then , we have measured several properties of the olympus landscape and we have compare with those of the full landscape , finding that there are less solutions with fitness .fdc shows that fitness is a reliable guide to drive a searcher toward the _ blok _ and its centroid .an model has been used to describe the fitness / fitness correlation structure .the model indicates that local search heuristics are adequate for finding good rules .fitness clouds and nsc confirm that it is easy to reach solutions with fitness higher than .although it is easier to find relevant cas in this subspace than in the complete landscape , there are structural reasons that prevents a searcher from finding overfitted gas in the olympus .finally , we have studied the dynamics and performances of three genetic algorithms on the olympus in order to confirm our analysis and to find efficient cas for the majority problem with low computational effort . beyond this particular optimization problem ,the method presented in this paper could be generalized . indeed , in many optimization problems , several efficient solutions are available , and we can make good use of this set to design an `` olympus subspace '' in the hope of finding better solutions or finding good solutions more quickly .d. andre , f. h. bennett iii , j. r. koza , discovery by genetic programming of a cellular automata rule that is better than any known rule for the majority classification problem , in : j. r. koza , d. e. goldberg , d. b. fogel , r. l. riolo ( eds . ) , genetic programming 1996 : proceedings of the first annual conference , the mit press , cambridge , ma , 1996 , pp .h. juill , j. b. pollack , coevolutionary learning : a case study , in : icml 98 proceedings of the fifteenth international conference on machine learning , morgan kaufmann , san francisco , ca , 1998 , pp .251259 .r. breukelaar , t. bck , using a genetic algorithm to evolve behavior in multi dimensional cellular automata : emergence of behavior , in : h .-beyer , u .- m .oreilly ( eds . ) , genetic and evolutionary computation conference , gecco 2005 , proceedings , washington dc , usa , , acm , 2005 , pp . 107114 . 
j. p. crutchfield , m. mitchell , r. das , evolutionary design of collective computation in cellular automata , in : j. p. crutchfield , p. schuster ( eds . ) , evolutionary dynamics : exploring the interplay of selection , accident , neutrality , and function , oxford university press , oxford , uk , 2003 , pp .361411 .r. das , j. p. crutchfield , m. mitchell , j. e. hanson , evolving globally synchronized cellular automata , in : l. j. eshelman ( ed . ) , proceedings of the sixth international conference on genetic algorithms , morgan kaufmann , san francisco , ca , 1995 , pp . 336343 .w. hordijk , j. p. crutchfield , m. mitchell , mechanisms of emergent computation in cellular automata , in : a. eiben , t. bck , m. schoenauer , h .-schwefel ( eds . ) , parallel problem solving from nature- ppsn v , vol .1498 of lecture notes in computer science , springer - verlag , heidelberg , 1998 , pp . 613622 .r. quick , v. rayward - smith , g. smith , fitness distance correlation and ridge functions , in : a. e. eiben et al .( ed . ) , fifth conference on parallel problems solving from nature ( ppsn98 ) , vol .1498 of lecture notes in computer science , springer - verlag , heidelberg , 1998 , pp .m. clergue , p. collard , ga - hard functions built by combination of trap functions , in : d. b. fogel , m. a. el - sharkawi , x. yao , g. greenwood , h. iba , p. marrow , m. shackleton ( eds . ) , proceedings of the 2002 congress on evolutionary computation cec2002 , ieee press , 2002 , pp .249254 . s.verel , p. collard , m. clergue , where are bottleneck in nk fitness landscapes ? , in : r. sarker , r. reynolds , h. abbass , k. c. tan , b. mckay , d. essam , t. gedeon ( eds . ) , proceedings of the 2003 congress on evolutionary computation cec2003 , ieee press , canberra , 2003 , pp .273280 .l. vanneschi , m. clergue , p. collard , m. tomassini , s. verel , fitness louds and problem hardness in genetic programming , in : proceedings of the genetic and evolutionary computation conference , gecco04 , lncs , springer - verlag , 2004 , pp .690701 .r. das , m. mitchell , j. p. crutchfield , a genetic algorithm discovers particle - based computation in cellular automata , in : y. davidor , h .-schwefel , r. mnner ( eds . ) , parallel problem solving from nature- ppsn iii , vol .866 of lecture notes in computer science , springer - verlag , heidelberg , 1994 , pp .344353 .h. juill , j. b. pollack , coevolving the ideal trainer : application to the discovery of cellular automata rules , in : j. r. koza et al .( ed . ) , genetic programming 1998 : proceedings of the third annual conference , morgan kaufmann , university of wisconsin , madison , wisconsin , usa , 1998 , pp. 519527 .p. collard , s. verel , m. clergue , how to use the scuba diving metaphor to solve problem with neutrality ?, in : r. l. de mntaras , l. saitta ( eds . ) , proceedings of the 2004 european conference on artificial intelligence ( ecai04 ) , ios press , valence , spain , 2004 , pp . 166170 .l. barnett , netcrawling - optimal evolutionary search with neutral networks , in : proceedings of the 2001 congress on evolutionary computation cec2001 , ieee press , coex , world trade center , 159 samseong - dong , gangnam - gu , seoul , korea , 2001 , pp .
in this paper we study cellular automata ( cas ) that perform the computational majority task . this task is a good example of what the phenomenon of emergence in complex systems is . we take an interest in the reasons that make this particular fitness landscape a difficult one . the first goal is to study landscape as such , and thus it is ideally independent from the actual heuristics used to search the space . however , a second goal is to understand the features a good search technique for this particular problem space should possess . we statistically quantify in various ways the degree of difficulty of searching this landscape . due to neutrality , investigations based on sampling techniques on the whole landscape are difficult to conduct . so , we go exploring the landscape _ from the top_. although it has been proved that no ca can perform the task perfectly , several efficient cas for this task have been found . exploiting similarities between these cas and symmetries in the landscape , we define the _ olympus _ landscape which is regarded as the `` heavenly home '' of the _ best local optima known _ ( blok ) . then we measure several properties of this subspace . although it is easier to find relevant cas in this subspace than in the overall landscape , there are structural reasons that prevents a searcher from finding overfitted cas in the olympus . finally , we study dynamics and performances of genetic algorithms on the _ olympus _ in order to confirm our analysis and to find efficient cas for the majority problem with low computational cost . , , and fitness landscapes , correlation analysis , neutrality , cellular automata , ar models
the conventional design of cellular systems prescribes the separation of uplink and downlink transmission via time - division or frequency - division duplex .one of the main reasons for this choice is that operating a base station in both the uplink and the downlink at the same time causes the downlink transmitted signal to interfere with the uplink received signal .this self - interference , if not cancelled , overwhelms the uplink signal and makes the full - duplex operation of the base station impractical .recent advances in analog and digital domain self - interference cancellation challenge the need for this arrangement and open up the possibility to operate base stations , especially low - power ones , in a full - duplex mode ( see the review in ) .full - duplex base stations with effective self - interference cancellation seemingly enable the throughput of a cellular system to be doubled , since the available bandwidth can be shared by the uplink and the downlink .however , this conclusion neglects two additional sources of interference between uplink and downlink transmissions , namely : ( _ i _ ) the _ downlink - to - uplink ( d - u ) inter - cell interference _ that is caused on the uplink signals by the downlink transmissions of neighboring base stations ; and ( _ ii _ ) the _ uplink - to - downlink ( u - d ) interference _ that is caused on the downlink signals by the transmission of mobile stations ( mss ) , both within the same cell and in other cells .the impact of intra - cell u - d interference has been studied in and for a single - cell system with a single - antenna or a multi - antenna base station , respectively .multi - cell systems , in which d - u interference and also inter - cell u - d interference arise , have been studied in via system simulation and in using stochastic geometry .the prior work mentioned above focuses on single - cell processing techniques , in which baseband processing is carried out locally at the base stations .single - cell processing is inherently limited by the d - u interference . with the aim of overcoming this limitation , herewe investigate the impact of the cloud radio access network ( c - ran ) architecture on a full - duplex cellular system . in a c - ran system , the base stations operate solely as radio units ( rus ) , while the baseband processing is carried out at a central unit ( cu ) within the operator s network .this migration of baseband processing is enabled by a network of fronthaul links , such as fiber optics cables or mmwave radio links , that connect each ru to the cu .the centralization of both uplink and downlink baseband processing at the cu allows the cu to perform cancellation of the d - u interference since the downlink signal is known at the cu . 
in order to further cope also with the u - d interference, we evaluate the advantages of performing successive interference cancellation at the mss .accordingly , the strongest intra - cell uplink transmissions are decoded and cancelled before decoding the downlink signals .the analysis in this letter takes an information theoretic approach that builds on the prior work reviewed in .specifically , in order to capture the key elements of the problem at hand , with particular emphasis on the various sources of interference , we focus on a modification of the classical wyner model .the adoption of this model enables us to derive analytical expressions for the achievable rates under single - cell processing and c - ran operation assuming either half - duplex or full - duplex base stations .these analytical results provide obtain fundamental insights into the regimes in which full - duplex rus , particularly when implemented with a c - ran architecture , are expected to be advantageous .consider the extended wyner model depicted in fig . [ fig:1 ] .the model contains one ms per cell that transmits in the uplink and one that receives in the downlink . with conventional half - duplex rus ,the two mss transmit in different time - frequency resources , while , with full - duplex rus , uplink and downlink are active at the same time .we describe here the system model for the full - duplex system the modifications needed to describe the half - duplex system will be apparent . there are cells and inter - cell interference takes place only between adjacent cells as shown in fig .[ fig:1 ] . in order to avoid border effects ,as it is customary , we take to be very large . due to the limited span of the interference ,results in the regime of are known to be accurate also for small values of ( see ) . in the uplink ,the ms active in the cell transmits a signal with power \leq p_{u} ] .the baseband signal received in uplink by the ru is given as where denotes the convolution ; , where is the kronecker delta function , accounts for the direct channel , which has unit power gain , and for the inter - cell interference , which is characterized by the inter - cell interference power gain ; models the _ d - u interference _ with inter - cell power gain and self - interference power gain ; and is white gaussian noise with unit power . in the downlink , the signal received by the ms in the cell can be written as where describes the _ u - d interference _ , which has inter - cell power gain and intra - cell power gain ; and is white gaussian noise with unit power . as depicted in fig .[ fig:1 ] , the parameter accounts for the power received by the ms active in the downlink from the ms active in the uplink within the same cell .each ru is connected to the cu with a fronthaul link of capacity in the uplink and in the downlink .these capacities are measured in bits / s / hz , where the normalization is with respect to the bandwidth shared by the uplink and downlink channels . we assume full channel state information at the cu for both uplink and downlink .define as and the per - cell rates , measured in bits / s / hz , achievable in uplink and downlink , respectively , by a particular scheme .the _ equal per - cell rate _ is now defined as _ notation _ :for convenience of notation , we define the shannon capacity and the function .in this section , we review the performance in the presence of the conventional half - duplex constraint on the rus . 
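Before reviewing those half-duplex baselines, the received signals just introduced can be sketched numerically as follows. The flat (memoryless) channels, the real-valued unit-power noise and the specific gain values are simplifying assumptions made for illustration; the letter allows convolutive channels and states the gains as power gains, which is not reproduced here.

```python
import numpy as np

def received_signals(x_ul, x_dl, a, b_self, b_inter, g_intra, g_inter, rng):
    """Per-cell received samples in a circular Wyner-type model with full-duplex RUs.

    x_ul, x_dl: (n_cells,) uplink and downlink transmit samples for one symbol.
    a: amplitude gain of the adjacent-cell channels; b_self, b_inter: self- and
    inter-cell gains of the D-U interference; g_intra, g_inter: intra- and
    inter-cell gains of the U-D interference. Values and conventions are
    illustrative assumptions.
    """
    def adj(v):
        # sum over the two adjacent cells on the ring
        return np.roll(v, 1) + np.roll(v, -1)

    y_ul = (x_ul + a * adj(x_ul)                      # direct + inter-cell uplink
            + b_self * x_dl + b_inter * adj(x_dl)     # D-U interference
            + rng.normal(size=x_ul.size))             # unit-power noise
    y_dl = (x_dl + a * adj(x_dl)                      # desired + inter-cell downlink
            + g_intra * x_ul + g_inter * adj(x_ul)    # U-D interference
            + rng.normal(size=x_dl.size))
    return y_ul, y_dl

# toy usage with unit-power Gaussian symbols on a ring of 32 cells
rng = np.random.default_rng(4)
x_ul, x_dl = rng.normal(size=32), rng.normal(size=32)
y_ul, y_dl = received_signals(x_ul, x_dl, a=0.5, b_self=1.0, b_inter=0.5,
                              g_intra=0.8, g_inter=0.1, rng=rng)
```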
in this case , a fraction of the time - frequency resources is allocated to the uplink and the remaining fraction to the downlink , which yields . with c - ran operation , baseband processing is carried out at the cu , while the rus act solely as downconverters in the uplink and upconverters in the downlink . unlike the single - cell processing case , the fronthaul links here carry compressed baseband information . in the uplink , the signals received by the rus are compressed and forwarded to the cu , which then performs joint decoding . to elaborate , each ru produces the compressed version of the received signal , where is the quantization noise , which is white and independent of all other variables . using standard results in rate - distortion theory ( see , e.g. , sec . 3.6 in ) , assuming separate decompression at the cu for each fronthaul link , the quantization noise power is obtained by imposing the equality , which yields . based on the received signals ( [ eq : quantized uplink signal ] ) , the cu performs joint decoding . the corresponding achievable rate per cell can be written as , where we have defined , which leads to . we observe that , in the special case in which zero - forcing ( zf ) linear precoding is adopted , we have for and in ( [ eq : downlink hd cran ] ) ( see sec . 4.2.3 in ) . in summary , for any given precoding filter , the per - cell equal rate is equal to ( [ eq : req ] ) with in ( [ eq : uplink hd cran ] ) and in ( [ eq : downlink hd cran ] ) . in this section , we consider the performance with full - duplex ru operation . as in , we assume that the cancellation of known d - u interference signals is ideal in order to focus on the potential advantages of full - duplex . with single - cell processing , each ru is able to cancel its self - interference d - u signal . as a result , the achievable uplink per - cell rate is obtained , similar to ( [ eq : uplink hd scp ] ) , as , where the additional term in the denominator accounts for the d - u interference . note that we have allowed for a transmit power no larger than $p_{u}$ . based on the received signals ( [ eq : quantized uplink signal ] ) , the cu first cancels the d - u interference . note that this is possible since the downlink signals are known to the cu . then , the cu performs joint decoding . similar to ( [ eq : uplink hd cran ] ) , the corresponding achievable rate per cell can be written as . we adopt linear precoding as discussed in sec . [ sub : downlink hd ] . accordingly , if u - d intra - cell interference is treated as noise , the achievable per - cell rate is given as ( [ eq : downlink fd cran ] ) . if instead successive interference cancellation is performed at the mss , the rate achievable in c - rans can be written as , where , and can be calculated similarly to sec . [ sub : single - cell - processing ] from ( [ eq : downlink fd cran ] ) . for instance , equals ( [ eq : downlink fd cran ] ) but with the term removed from the denominator . in summary , for any given precoding filter , the per - cell equal rate is equal to ( [ eq : req ] ) with in ( [ eq : uplink fd cran ] ) and in ( [ eq : downlink fd cran ] ) . in this section , we provide some numerical results to give insight into the performance of the discussed approaches . in fig . [ fig:1 - 1 ] , we plot the equal per - cell rate versus the fronthaul capacities with , , , and .
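to make the role of the fronthaul constraint in the uplink c - ran scheme above concrete , the following minimal python sketch computes the quantization noise implied by a fronthaul capacity and the resulting decoding rate for a single isolated cell . it uses the standard single - link compress - and - forward relation $\log_2 ( 1 + p_{rx} / \sigma_q^2 ) = c_u$ and the usual shannon function $\log_2 ( 1 + x )$ ; it is an illustration only and deliberately omits the inter - cell and d - u interference terms of the wyner model , so it is not the letter's exact rate expression .

```python
import numpy as np

def shannon(x):
    """the usual shannon capacity function log2(1 + x) (assumed definition)."""
    return np.log2(1.0 + x)

def uplink_cran_rate_single_cell(p_u, c_u, noise_power=1.0):
    """illustrative single-cell sketch: the ru quantizes its received signal so
    that it fits a fronthaul of capacity c_u, and the cu decodes against thermal
    plus quantization noise.  all wyner-model interference terms are omitted."""
    p_rx = p_u + noise_power                   # power of the received baseband signal
    sigma_q2 = p_rx / (2.0 ** c_u - 1.0)       # quantization noise fixed by the fronthaul
    return shannon(p_u / (noise_power + sigma_q2))

# the rate saturates at shannon(p_u) as the fronthaul capacity grows
for c_u in (1.0, 2.0, 4.0, 8.0):
    print(c_u, round(uplink_cran_rate_single_cell(p_u=10.0, c_u=c_u), 3))
```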
note that the parameter does not play a role in the analysis . the inter - cell d - u interference gain is chosen to be comparable to the inter - cell gain , while the u - d intra - cell interference gain is significantly larger and the corresponding inter - cell gain is instead significantly smaller than . this setting appears to be in line with what is expected in a dense small - cell scenario in which the rus are placed in a more advantageous position than the mss . a zf precoder is assumed for the downlink , and , unless stated otherwise , successive interference cancellation ( sic ) is employed in the downlink . the figure shows that c - ran solutions have a significant advantage over the corresponding single - cell processing ( scp ) approaches for both half - duplex ( hd ) and full - duplex ( fd ) operations as long as the fronthaul capacities are large enough . note that the spectral efficiency of the fronthaul links is expected to be at least one order of magnitude larger than the downlink or uplink spectral efficiencies , which is well within the range shown in the figure . moreover , when the fronthaul capacities are sufficiently large , fd - c - ran provides a gain of around 1.7 as compared to hd - c - ran , which falls short of the maximum gain of 2 due to the interference between uplink and downlink . we finally study the impact of u - d intra - cell interference in fig . [ fig:1 - 3 ] . the parameters are the same as for the previous figure . for the full - duplex approaches , we consider the rate achievable with and without sic as a function of . it is seen that fd - c - ran is advantageous only if we have small intra - cell interference or if the mss implement sic and the gain is large enough . this suggests that , in practice , fd - c - ran should only be used in conjunction with an appropriate scheduling algorithm that ensures that one of these two conditions is satisfied . overall , the results herein confirm the significant potential advantages of the c - ran architecture in the presence of full - duplex base stations , as long as sufficient fronthaul capacity is available and appropriate ms scheduling or successive interference cancellation at the mss is implemented . s. barghi , a. khojastepour , k. sundaresan and s. rangarajan , characterizing the throughput gain of single cell mimo wireless systems with full duplex radios , in _ proc . modeling and optimization in mobile , ad hoc and wireless networks _ ( wiopt 2012 ) , pp . 68 - 74 , may 2012 . o. simeone , n. levy , a. sanderovich , o. somekh , b. m. zaidel , h. v. poor and s. shamai ( shitz ) , cooperative wireless cellular systems : an information - theoretic view , _ foundations and trends in communications and information theory _ , vol . 8 , nos . 1 - 2 , pp . 1 - 177 , 2012 .
the conventional design of cellular systems prescribes the separation of uplink and downlink transmissions via time - division or frequency - division duplex . recent advances in analog and digital domain self - interference cancellation challenge the need for this arrangement and open up the possibility to operate base stations , especially low - power ones , in a full - duplex mode . as a means to cope with the resulting _ downlink - to - uplink interference _ among base stations , this letter investigates the impact of the cloud radio access network ( c - ran ) architecture . the analysis follows an information theoretic approach based on the classical wyner model . the analytical results herein confirm the significant potential advantages of the c - ran architecture in the presence of full - duplex base stations , as long as sufficient fronthaul capacity is available and appropriate mobile station scheduling , or successive interference cancellation at the mobile stations , is implemented . _ index terms _ : full duplex , cellular wireless systems , wyner model , cloud radio access networks ( c - ran ) , successive interference cancellation .
in this work we consider the time - independent schrödinger equation to calculate vibrational spectra of molecules . the goal is to find the smallest eigenvalues and corresponding eigenfunctions of the hamiltonian operator . the key assumption that we use is that the potential energy surface ( pes ) can be approximated by a small number of sum - of - product functions . this holds , e.g. , if the pes is a polynomial . we discretize the problem using the discrete variable representation ( dvr ) scheme . the problem is that the storage required for each eigenvector grows exponentially with dimension as , where is the number of grid points in each dimension . even for the dvr scheme , where grid points in each dimension are often sufficient to provide very accurate results , we would get pb of storage for a 12 - dimensional problem . this issue is often referred to as the _ curse of dimensionality _ . to avoid exponential growth of storage we use the _ tensor train _ ( tt ) decomposition to approximate the operator and the eigenfunctions in the dvr representation , which is known to have an exponential convergence rate . it is important to note that the tt - format is algebraically equivalent to the matrix product state format ( mps ) , which has been used for a long time in quantum information theory and solid state physics to approximate certain wavefunctions ( dmrg method ) , see the review for more details . prior research has shown that the eigenfunctions often can be well approximated in the tt - format , i.e. they lie on a certain low - dimensional non - linear manifold . the key question is how to utilize this a priori knowledge in computations . we propose to use well - established iterative methods that utilize matrix inversion and solve the corresponding linear systems inexactly along the manifold to accelerate the convergence . our main contributions are : * we propose the concept of a _ manifold preconditioner _ that explicitly utilizes information about the size of the tt - representation . we use the manifold preconditioner for a tensor version of the _ locally optimal block preconditioned conjugate gradient _ method ( lobpcg ) . we will refer to this approach as _ manifold - preconditioned lobpcg _ ( mp lobpcg ) . the approach is first illustrated on the computation of a single eigenvector ( section [ sec : mp1 ] ) and then extended to the block case ( section [ sec : mpb ] ) . * we propose a tensor version of the simultaneous inverse iteration ( also known as the block inverse power method ) , which significantly improves the accuracy of the proposed mp lobpcg . similarly to the manifold preconditioner , the inversion is done using the a priori information that the solution belongs to a certain low - parametric manifold . we will refer to this method as _ manifold - projected simultaneous inverse iteration _ ( mp sii ) . the approach is first illustrated on the computation of a single eigenvector ( section [ sec : iii1 ] ) and then extended to the block case ( section [ sec : iib ] ) . * we calculate vibrational spectra of the acetonitrile molecule ch using the proposed approach ( section [ sec:12 ] ) . the results are more accurate than those of the smolyak grid approach but with much lower storage requirements , and more accurate than the recent memory - efficient h - rrbpm method , which is also based on tensor decompositions . we note that the smolyak grid approach does not require the pes to be approximated by a small number of sum - of - product functions . we follow and consider the schrödinger equation with omitted cross terms and the potential - like term in the normal
coordinate kinetic energy operator . the hamiltonian in this case looks as follows , where denotes the potential energy surface ( pes ) . we discretize the problem using the discrete variable representation ( dvr ) scheme on the tensor product of hermite meshes , such that each unknown eigenfunction is represented as , where denotes a one - dimensional dvr basis function . we call the arising multidimensional arrays _ tensors _ . the hamiltonian consists of two terms : the laplacian - like part and the pes . it is well - known that the laplacian - like term can be written in the kronecker product form , where is the one - dimensional discretization of the -th mode . the dvr discretization of the pes is represented as a tensor . the operator corresponding to the multiplication by is diagonal . finally the hamiltonian is written as . for our purposes it is convenient to treat not as a 2d matrix , but as a multidimensional operator . in this case the discretized schrödinger equation has the form . hereinafter we use notation implying matrix - by - vector product from . using this notation can be equivalently written as . in this section we discuss how to solve the schrödinger equation numerically and present our approach for finding a single eigenvector . the case of multiple eigenvalues is discussed in section [ sec : block ] . the standard way to find the required number of smallest eigenvalues is to use iterative methods . the simplest iterative method for finding the smallest eigenvalue is the shifted power iteration , where the shift is an approximation to the largest eigenvalue of . the matrix - by - vector product is the bottleneck operation in this procedure . this method was successfully applied to the calculation of vibrational spectra in . the eigenvectors in this work are represented as sum - of - products , which allows for fast matrix - by - vector multiplications . despite the ease of implementation and efficiency of each iteration , the convergence of this method requires thousands of iterations . instead of the power iteration we use the inverse iteration , which is known to have much faster convergence if a good approximation to the required eigenvalue is known . questions related to the solution of linear systems in the tt - format will be discussed in section [ sec : iii1 ] . convergence of the inverse iteration is determined by the ratio , where is the eigenvalue next closest to . thus , has to be closer to than to for the method to converge . however , the closer to is , the more difficult it is to solve the linear system with the matrix . therefore , typically this system is solved inexactly . the parameter can also depend on the iteration number ( rayleigh quotient iteration ) , however in our experiments ( section [ sec : num ] ) a constant choice of yields convergence in 5 iterations . as follows from , for the inverse iteration to converge fast a good initial approximation has to be found . to get an initial approximation we propose to use the locally optimal block preconditioned conjugate gradient ( lobpcg ) method , as it is relatively easy to implement in tensor formats , and a preconditioner can be explicitly utilized . the problem with straightforward usage of the iterative processes above is that we need to store an eigenvector . the storage of this vector is , which is prohibitive even for and . therefore we need a compact representation of an eigenfunction which allows us to do inversion and basic linear algebra operations in a fast way .
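as a dense point of reference for the inverse iteration just described , the sketch below implements the shifted inverse iteration with an explicit linear solve ; in the proposed method this solve is never performed densely but is replaced by a few inexact als sweeps in the tt - format . the function name is illustrative and does not come from the ttpy package .

```python
import numpy as np

def shifted_inverse_iteration(h, sigma, x0, n_iter=5):
    """dense illustration of the shifted inverse iteration: repeatedly solve
    (h - sigma * i) x_new = x and normalize; converges to the eigenvector whose
    eigenvalue is closest to the shift sigma."""
    x = x0 / np.linalg.norm(x0)
    shifted = h - sigma * np.eye(h.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(shifted, x)
        x = x / np.linalg.norm(x)
    return x @ h @ x, x            # rayleigh quotient and eigenvector estimate

# small symmetric test: with sigma just below the spectrum, the lowest state is found
rng = np.random.default_rng(0)
a = rng.standard_normal((50, 50))
h = (a + a.T) / 2
exact = np.linalg.eigvalsh(h)[0]
approx, _ = shifted_inverse_iteration(h, sigma=exact - 0.1, x0=rng.standard_normal(50))
print(approx, exact)
```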
for this purpose we use the tensor train ( tt , mps ) format . a tensor is said to be in the tt - format if it is represented as , where , , . the matrices are called tt - cores and are called tt - ranks . for simplicity we assume that , and call the tt - rank . in numerical experiments we use different mode sizes , , but for simplicity we will use the notation in complexity estimates . compared to the parameters of the whole tensor , the tt decomposition contains parameters , as each , has size . the definition of the tt - format of an operator is similar to the tt representation of tensors , where the tt - cores are , . if , then this representation contains degrees of freedom . the tt - format can be considered as a multidimensional generalization of the svd . other alternatives to generalize the svd to many dimensions are the canonical , tucker and hierarchical tucker formats . we refer the reader to for a detailed survey . the important point why we use the tt decomposition is that it can be computed in a stable way and it does not suffer from the `` curse of dimensionality '' . moreover , there exist well - developed software packages to work with the tt - decomposition . for the inverse iteration we need to find a tt - representation of by approximately solving a linear system . assume that both the exact eigenvector and the current approximation belong to the manifold of tensors with tt - rank . the solution of may have ranks larger than and therefore be out of . in the present work we suggest exploiting the information that belongs to and retract the solution of back to the manifold . we refer to this concept as a _ manifold - projected inverse iteration _ ( mp ii ) . in this work we pose the following optimization problem with a rank constraint . problem is hard to solve as the operator is close to singular . similarly to the inexact inverse iteration framework , we are not searching for a solution that finds the global minimum of , but utilize several sweeps of the alternating least squares ( als ) method with the initial guess . the als procedure alternately fixes all but one tt - core and solves the minimization problem with respect to this tt - core . for instance , an update of a core when all other cores are fixed is found from . the minimization over a single core ( see appendix [ sec : als ] ) is a standard linear least squares problem with an unknown vector that has the size of the corresponding tt - core and is very cheap . moreover , these systems can also be solved iteratively . the described minimization over all cores in the tt - representation is referred to as one sweep of the als . the total computational cost of one sweep is then , where is the maximum tt - rank of the operator , see appendix [ sec : als ] . _ according to the proposed concept we start from and use only a few sweeps ( one or two ) of the als method rather than running the method till convergence . moreover , we found that one can solve local linear systems inexactly , either with a fixed number of iterations or a fixed low accuracy . _ such low requirements on the solution of local linear systems and the number of als sweeps result in a very efficient method . however , for this method to converge , a good initial approximation to both the eigenvector and the eigenvalue has to be found . to get an initial approximation we use the lobpcg method .
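to make the tt - representation above concrete , here is a compact numpy sketch of the tt - svd construction via sequential svds of unfoldings , together with a small check on a tensor that is a sum of two rank - 1 terms . it is an illustration only ; the actual computations in this work use the ttpy routines , not this code .

```python
import numpy as np

def tt_svd(tensor, rank):
    """decompose a d-dimensional numpy array into tt-cores of shape
    (r_prev, n_k, r_next), capping every tt-rank at `rank`."""
    dims = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        unfolding = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

# a sum of two rank-1 terms has all tt-ranks equal to 2 and is recovered exactly
n = 8
x, y = np.random.rand(n), np.random.rand(n)
full = np.einsum('i,j,k,l->ijkl', x, x, x, x) + np.einsum('i,j,k,l->ijkl', y, y, y, y)
cores = tt_svd(full, rank=2)

approx = cores[0]
for g in cores[1:]:
    approx = np.tensordot(approx, g, axes=1)   # contract neighbouring ranks
print(np.linalg.norm(full - approx.reshape(full.shape)) / np.linalg.norm(full))
```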
the lobpcg algorithm for one eigenvalue looks as follows , where denotes the preconditioner and a vector of coefficients chosen from the minimization of the rayleigh quotient . finding is equivalent to solving the following eigenvalue problem $$\begin{bmatrix} {\mathcal{x}}_{k} \\ {\mathcal{r}}_k \\ {\mathcal{p}}_k \end{bmatrix} \mathcal{h} \, [ {\mathcal{x}}_{k} , {\mathcal{r}}_k , {\mathcal{p}}_k ] \, \alpha = \lambda \begin{bmatrix} {\mathcal{x}}_{k} \\ {\mathcal{r}}_k \\ {\mathcal{p}}_k \end{bmatrix} [ {\mathcal{x}}_{k} , {\mathcal{r}}_k , {\mathcal{p}}_k ] \, \alpha . $$ let us discuss the tt version of the lobpcg . the operations required to implement the lobpcg are presented below : _ preconditioning . _ the key part of the lobpcg iteration is multiplication by a preconditioner . in this work we use as a preconditioner . this preconditioner works well if the density of states is low , see . to make the preconditioner more efficient one can project it onto the orthogonal complement of the current approximation of the solution , see the jacobi - davidson method . instead of forming we calculate the matrix - by - vector multiplication . hence , similarly to the inverse iteration ( section [ sec : iii1 ] ) we propose solving a minimization problem . we also use only several sweeps of als for this problem . we refer to this construction of the preconditioner as a _ manifold preconditioner _ , as it retracts the residual onto a manifold of tensors with fixed rank . note that if is known to be positive definite , then minimization of the energy functional can be used instead of minimization of the residual . _ summation of two tensors . _ given two tensors and with ranks in the tt - format , the cores of the sum are defined as and . thus , the tensor is explicitly represented with ranks . _ inner product and norm . _ to find the inner product of two tensors and in the tt - format we first need to calculate the hadamard product , which can be calculated as . therefore , where . using the special structure of the matrices , the inner product can be calculated in complexity . the norm can be computed using the inner product as . _ reducing rank ( rounding ) . _ as we have seen , after summation the ranks grow . to avoid rank growth there exists a special _ rounding _ operation . it suboptimally reduces the rank of a tensor with a given accuracy . in 2d the rounding procedure of looks as follows . first we calculate qr decompositions of the matrices and : hence . finally , to reduce the rank we calculate the svd decomposition of and truncate the singular values up to the required accuracy . in , this idea is generalized to the tt case . the complexity is . * : calculates via the cross approximation algorithm ( section [ sec : iib ] ) . * : block multiplication of a vector of tt tensors by a real - valued matrix using the function , see . * : orthogonalizes tt tensors : via cholesky ( section [ sec : iib ] ) or the modified gram - schmidt procedure . matrix - by - vector multiplications are done using . * : solves , ` length ` using sweeps of the als method to minimize with a rank constraint . * : orthogonalizes tt tensors : with respect to : . to avoid rank growth we use rounding if ` length ` is small and via ` multifuncrs ` if ` length ` is large . * : truncates each tensor : with rank using the rounding procedure .
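the 2d rounding step described above takes only a few lines ; the sketch below re - compresses a factorization $t = a b^{t}$ with two qr factorizations and one small svd . the function name and the truncation rule are illustrative and are not the ttpy rounding routine .

```python
import numpy as np

def round_2d(a, b, eps=1e-10):
    """re-compress the low-rank factorization t = a @ b.T: qr-factorize both
    factors, compute the svd of the small core ra @ rb.T, and truncate the
    singular values up to the requested relative accuracy eps."""
    qa, ra = np.linalg.qr(a)
    qb, rb = np.linalg.qr(b)
    u, s, vt = np.linalg.svd(ra @ rb.T)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]      # tail[i] = norm of s[i:]
    keep = max(1, int(np.sum(tail > eps * np.linalg.norm(s))))
    a_new = (qa @ u[:, :keep]) * s[:keep]
    b_new = qb @ vt[:keep].T
    return a_new, b_new

# summing two identical rank-2 terms gives 4 columns but an underlying rank of 2;
# rounding removes the redundancy while keeping the product unchanged
m, n = 30, 40
u0, v0 = np.random.rand(m, 2), np.random.rand(n, 2)
a, b = np.hstack([u0, u0]), np.hstack([v0, v0])
a_r, b_r = round_2d(a, b)
print(a_r.shape[1], np.linalg.norm(a @ b.T - a_r @ b_r.T))
```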
_ matrix - by - vector multiplication . _ to calculate the matrix - by - vector operation in , it is convenient to have the hamiltonian represented in the tt - format . in this case there exists an explicit representation of the matrix - by - vector multiplication when both and are represented in the tt - format , which gives a representation with tt - rank . to reduce the rank one can either use the rounding procedure or use als minimization of the following optimization problem , which is faster than rounding for large ranks . in the previous section we discussed the algorithm for a single eigenvector . in this section we extend the algorithm to the block case . the difference is that we need to make additional block operations such as orthogonalization and block multiplication . in this section we present a manifold - projected version of the simultaneous inverse iteration , which is also known as the block inverse power method . assume we are given a good initial approximation to eigenvalues and eigenvectors of a linear operator given by its tt - representation . in this case the inverse iteration yields a fast convergence rate with low memory requirements . [ alg : lobpcg ] group close eigenvalues in clusters , compute the shift as the average eigenvalue in the cluster , and compute the corresponding subvector of . given an initial approximation we first split the eigenvalues into clusters of eigenvalues . proximity of eigenvalues is defined using a threshold parameter . if a cluster consists of one eigenvalue , then we run the version described in section [ sec : iii1 ] . otherwise we need additional orthogonalization on each iteration step . the orthogonalization is done using the qr decomposition via cholesky factorization . let us consider the orthogonalization procedure for a vector of tt tensors . if is small , say , then the summation can be done using summation and rounding . the typical value of in numerical experiments is or , so to reduce the constant in complexity we use the so - called _ cross approximation method _ , which is able to calculate the tt - decomposition of a tensor using only a few of its elements . namely , the cross approximation is able to build the tt - representation of a tensor of exact rank using only elements ( the number of parameters in the tt - format ) via an interpolation formula with complexity . if a tensor is approximately of low rank , then it can be approximated with a given accuracy using the same approach , and quasioptimality estimates exist . to find the tt - representation of we calculate elements of the tensor by explicitly calculating elements in and summing them up with coefficients . this approach allows us to calculate general functions of a block vector and is referred to as ` multifuncrs ` . it is typically used when either the number of input tensors is large or some nonlinear function of the input tensors must be computed . a similar approach was used in for a fast computation of convolution . the result of solving the block system using the optimization procedure is denoted as , where . the overall algorithm is summarized in algorithm [ alg : inverse ] . all discussed auxiliary functions are presented in algorithm [ alg : functions ] .
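the qr - via - cholesky orthogonalization used in the block iteration only needs pairwise inner products , which is exactly what is cheap in the tt - format ; below is a plain numpy sketch in which dense columns stand in for tt tensors , with an assumed helper name .

```python
import numpy as np

def cholesky_orthogonalize(vectors):
    """orthogonalize a block of vectors: assemble the gram matrix from pairwise
    inner products (cheap tt inner products in the tensor setting), factorize
    g = l l^t, and multiply the block by inv(l)^t."""
    v = np.column_stack(vectors)
    gram = v.T @ v
    l = np.linalg.cholesky(gram)
    q = np.linalg.solve(l, v.T).T          # q = v @ inv(l)^t, so q^t q = identity
    return [q[:, j] for j in range(q.shape[1])]

vecs = [np.random.rand(100) for _ in range(5)]
q = np.column_stack(cholesky_orthogonalize(vecs))
print(np.allclose(q.T @ q, np.eye(5)))
```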
if the cluster size is much smaller than the number of required eigenvalues , the complexity of finding each cluster is fully defined by the inversion operation , which costs . thus , the overall complexity of the inverse iteration scales linearly with : . we note that each cluster can be processed independently in parallel . we use the lobpcg method to get an initial approximation for the manifold - preconditioned simultaneous inverse iteration . the problem is that each iteration of the lobpcg is much slower compared to the inverse iteration for a large number of eigenvalues . we hence only run the lobpcg with small ranks and then correct its solution with larger ranks using the inverse iteration . [ alg : lobpcg ] augment with the corresponding tensors from , restart the algorithm with the new , and ( optionally ) increase the truncation rank . the lobpcg algorithm is summarized in algorithm [ alg : lobpcg ] . auxiliary functions used in the algorithm are presented in algorithm [ alg : functions ] . we also use matlab - like notation for submatrices , e.g. is the corresponding submatrix in the matrix . block matrix - by - vector multiplication arises when multiplying by a matrix , where is the number of eigenvalues to be found . when is a large number , we use ` multifuncrs ` for the block matrix - by - vector product instead of straightforward truncation . this is the most time - consuming step in the algorithm and it costs . another time - consuming part is the matrix - by - vector multiplication , which costs . thus , the overall complexity of each iteration is . in order to accelerate the method , we consider a version of the lobpcg with deflation . the deflation technique consists in no longer iterating converged eigenvectors . in this case the residual must be orthogonalized with respect to the converged eigenvectors . this procedure is denoted as ` ortho ` and is described in more detail in algorithm [ alg : functions ] . it might also be useful to increase the rank of vectors that did not converge after the previous deflation step . the prototype is implemented in python using the ` ttpy ` library https://github.com/oseledets/ttpy . the code of the proposed algorithm can be found at https://bitbucket.org/rakhuba/ttvibr . for the basic linear algebra tasks the mkl library is used . python and mkl are from the anaconda python distribution https://www.continuum.io . the python version is 2.7.11 . the mkl version is 11.1 - 1 . tests were performed on a single intel core i7 2.6 ghz processor with 8 gb of ram . however , only 1 thread was used . first of all , we test our approach on a model hamiltonian for which the solution is known analytically . following , we choose a bilinearly coupled 64 - dimensional hamiltonian with and . the tt - rank of this hamiltonian is for all inner modes , independently of or . to solve this problem we first use the manifold - preconditioned lobpcg method ( section [ sec : mpb ] ) with rank and then correct it with the mp inverse iteration . the thresholding parameter for separation of energy levels into clusters in the inverse iteration is ( and are in the same cluster if ) . the mode size is constant for each mode .
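the laplacian - like part of such model hamiltonians is a kronecker sum of one - dimensional operators ; for very small dimensions this structure can be assembled densely , as in the sketch below . this is for illustration only , since the actual computations never form these matrices explicitly and the coupling terms of the bilinear model are not reproduced here .

```python
import numpy as np
from functools import reduce

def kronecker_sum(one_dim_ops):
    """assemble sum_k  i x ... x h_k x ... x i  densely from one-dimensional
    operators h_k (feasible only for a handful of dimensions)."""
    sizes = [h.shape[0] for h in one_dim_ops]
    total = np.zeros((int(np.prod(sizes)), int(np.prod(sizes))))
    for k, h_k in enumerate(one_dim_ops):
        factors = [np.eye(m) for m in sizes]
        factors[k] = h_k
        total += reduce(np.kron, factors)
    return total

# three identical 1d model operators (here simply diagonal) combined into a 3d operator;
# the lowest eigenvalue is the sum of the lowest 1d eigenvalues
h1 = np.diag(np.arange(1.0, 6.0))
h_total = kronecker_sum([h1, h1, h1])
print(h_total.shape, np.linalg.eigvalsh(h_total)[0])   # (125, 125) 3.0
```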
as follows from figure [ fig : berror ] , the mp inverse iteration significantly improves the accuracy of the solution . we used 10 iterations of the lobpcg . the mp lobpcg computations took approximately 3 hours of cpu time and the mp inverse iteration took an additional 30 minutes . we also tested a tensor version of the preconditioned inverse iteration ( pinvit ) , which in the case of a single eigenvector looks as , where is selected to minimize the rayleigh quotient . figure [ fig : conv_sd ] illustrates the convergence behavior of the last 10 eigenvalues for the different methods . the pinvit , which also allows for an explicit preconditioner , converged to wrong eigenvalues . the lobpcg method without a preconditioner is unstable due to rank thresholding . we note that all these iterations converged to the correct eigenvalues if the number of eigenvalues to be found was less than . in this part we present calculations of vibrational spectra of the acetonitrile ( ch ) molecule . the hamiltonian used is described in and looks as follows . it contains terms : kinetic energy terms and quadratic , cubic , and quartic potential terms . we chose the same basis size that was used in , namely the mode sizes were , corresponding to the order described in that work . we found that the ranks of the hamiltonian for this particular molecule do not strongly depend on the permutation of indices , namely the largest rank we observed among random permutations was , while the maximum rank of the best permutation was . in computations we permuted the indices such that the array of is sorted in decaying order . table [ tab : hamranks ] contains the ranks of the hamiltonian in the tt - representation . we note that the total ranks are the ranks of the sum of potentials after rounding , and hence they are not equal to the sum of the ranks of the potentials in table [ tab : hamranks ] . to assemble the potential one needs to add rank-1 terms and rank-1 terms . each rank-1 term can be expressed analytically in the tt - format . as was mentioned in section [ sec : mp1 ] , after each summation the rank grows , so the rounding procedure is used . recall that the rounding procedure requires operations . thus , the complexity of assembling the hamiltonian is , where ` nnz ` stands for the number of nonzeros , is the maximum mode size and is the maximum rank of . the total time of assembling the hamiltonian was less than a second . ( table [ tab : hamranks ] : tt - ranks of the parts of the hamiltonian with threshold . ) we ran the lobpcg method with tt - rank equal to 12 and used the manifold preconditioner . the initial guess is chosen from the solution of the harmonic part of the hamiltonian . eigenvectors of a multidimensional quantum harmonic oscillator are tensor products of 1d oscillator eigenvectors and therefore can be represented analytically in the tt - format with rank . the shift for the preconditioner is set to be the lowest energy of the harmonic part . the convergence of each eigenvalue is presented in figure [ fig : conv_ch3cn ] . the obtained eigenfunctions were used as an initial approximation to the inverse iterations with ranks equal to . the shifts were chosen to be the lobpcg energies . the thresholding parameter for separation of energy levels into clusters in the inverse iteration is . these results were further corrected with the inverse iteration with rank .
as follows from table [ tab : energies ] and figure [ fig : ch3cn_error ] , both corrections are more accurate than the h - rrbpm method . the latter correction with rank yields energy levels lower than those of the smolyak quadrature method , which means that our energy levels are more accurate . we note that the storage of a solution with is less than the storage of the smolyak method ( 180 mb compared with 1.5 gb ) . timings and storage of the h - rrbpm method were taken from . on this example the state - of - the - art ` eigb ` method converges within several days . the problem is that all basis functions are considered in one basis , which leads to large ranks . nevertheless , the ` eigb ` becomes very efficient and accurate when a small number of eigenpairs is required . ( figure [ fig : ch3cn_error ] : errors of the eigenvalues of the acetonitrile molecule with respect to the eigenvalue number ; the reference energies are obtained by smolyak quadratures ; negative values of the error stand for energies more accurate than the smolyak quadrature energies ; the black line denotes zero error ; mp sii stands for manifold - projected simultaneous inverse iteration . ) the simplest basis set for representing unknown eigenfunctions is the direct product ( dp ) of one - dimensional basis functions . if a fast matrix - by - vector operation is given , then krylov iterative methods are available and the only problem is the exponential storage requirements . alternatively , one can prune a direct product basis or use a basis that is a product of functions with more than one variable . in this work , we focus on the dp basis and further reduce the storage by approximating unknown eigenvectors in the tt - format . we refer the reader to for detailed surveys on tensor decompositions . the canonical tensor decomposition ( also called the cp decomposition or the candecomp / parafac model ) of the eigenvectors of vibrational problems was considered in the work by leclerc and carrington . the authors used the rank - reduced block power method ( rrbpm ) . each iteration of this method involves a matrix - by - vector product , which can be efficiently implemented in tensor formats . the problem is that this method has poor convergence . moreover , the canonical decomposition is known to have stability issues . the hierarchical rrbpm ( h - rrbpm ) proposed in by thomas and carrington is a significant improvement of the rrbpm . this method also utilizes a sum - of - products representation , but treats strongly coupled coordinates together . coupled coordinates are further decomposed hierarchically . the multi configuration time dependent hartree ( mctdh ) approach also uses a tensor format , namely the tucker decomposition . this approach reduces complexity , but suffers from the curse of dimensionality . this problem was solved in the multilayer mctdh , which is similar to the hierarchical tucker representation . we would also like to discuss tensor algorithms for solving eigenvalue problems developed in the mathematical community . there are two natural directions for solving eigenvalue problems in tensor formats . one direction is a straightforward generalization of iterative methods to tensor arithmetic with rank reduction at each iteration .
for the canonical decomposition the power method with shifts was generalized in and used in the rrbpm method . the preconditioned inverse iteration ( pinvit ) for tensor formats was considered in . the inverse iteration used in this paper differs from the pinvit , which is basically preconditioned steepest descent . a tensor version of the inverse iteration based on the iterative solution of the arising linear systems was considered in . the pinvit iteration can explicitly utilize a preconditioner . the construction of preconditioners in tensor formats for eigenvalue problems was considered in . the approach for a general operator uses the newton - schulz iteration in order to find a good preconditioner . however , due to the large number of matrix multiplications , this approach becomes time - consuming . in order to construct a preconditioner one can use an approximate inverse matrix or an approximate solution of linear systems . see for solvers in tensor formats . the more advanced lobpcg method was first generalized in . we utilize this method and construct a preconditioner based on an optimization procedure . the pinvit method with the proposed preconditioner and the lobpcg with and without preconditioning were tested in section [ sec : num ] . although the rank - truncated pinvit method is fast and is able to accurately calculate a small number of eigenvalues , it fails to converge when many eigenvalues are required . as an alternative to iterative methods , one can pose an optimization problem : minimization of the rayleigh quotient with a constraint on the rank . this approach was recently considered in the matrix product state ( mps ) community and independently in . the only disadvantage is that all eigenfunctions are treated in one basis , which leads to large ranks , and the method becomes slow ( the calculation for acetonitrile took several days ) . nevertheless , this approach becomes very efficient and accurate when a small number of eigenpairs is required . one of the most interesting missing bits is the theory of the proposed approach . first , why the eigenfunctions can be well approximated in the tt - format and what the restrictions on the pes are . second , the convergence properties of the manifold preconditioner have to be established . these questions will be addressed in future works . from a practical point of view , the applicability of the proposed approach to general molecules has to be studied . currently , it requires the explicit knowledge of a sum - of - product representation . obtaining such a representation is a separate issue , which can be done by using existing methods : potfit or the more general tt - cross approximation approach . if the pes has large ranks , a coordinate transformation can be helpful . we would like to thank prof . tucker carrington jr . and his research group for providing data for the numerical experiment section . the authors also would like to thank alexander novikov for his help in improving the manuscript . we also thank the anonymous referees for their comments and constructive suggestions . this work has been supported by russian science foundation grant 14 - 11 - 00659 . let us discuss the technical details of solving the problem . to illustrate the idea let us start from the skeleton decomposition in 2d . in this case , where . this representation is equivalent to the tt - format in 2d .
indeed , by defining cores and we get . the als procedure starts with fixing one core . let us fix with orthonormal columns ( this can always be done by a qr decomposition ) and find the updated from the minimization problem . thanks to a well - known property of kronecker products , the problem can be equivalently represented as a minimization problem on the unknown vector , where denotes the vectorization of by reshaping it into a column vector . finally we get a small linear system , where the matrix is of size , while the initial matrix is of size . to avoid a squared condition number one can formally use the projectional approach , which corresponds to the zero gradient of the energy functional . the problem is that is not positive definite unless the smallest eigenvalue is required . nevertheless , we found that this approach also works if is close enough to the eigenvector corresponding to the eigenvalue closest to . this assumption holds , as before running the mp ii we find a good initial guess for both and the eigenvector , see section [ sec : mp1 ] . note that the operator is also given in the low - rank matrix format ( which also corresponds to the tt - matrix format in 2d ) : , where . hence , the matrix of the local system can be represented as , and one can do matrix - vector products with complexity . indeed , the matrices are of size and are . hence multiplication of by a vector requires operations . similarly to the 2d case , in arbitrary dimension we get the following linear system on the vectorized -th core , where the matrix is a multiplication of the first cores reshaped into a matrix : . the matrix is defined by analogy . typically an additional orthogonalization of and is done . since the operator is given by its tt - representation , the matrix - vector multiplication requires operations , where is the tt - rank of the tensor and is the maximum rank of the operator . compared with the 2d case , the tt - cores between the first and last dimensions are of size for a tt - tensor and for a tt - matrix , so squared ranks appear . we note that if the dimension and/or mode size are large , , then the total complexity is . ( table [ tab : energies ] : vibrational energy levels of the acetonitrile molecule ch , the zero - point energy and excited levels labelled by symmetry , computed with the proposed approach at several tt - ranks , together with the corresponding errors , the h - rrbpm results , and the smolyak quadrature reference energies . ) bill poirier and tucker carrington jr . , accelerating the calculation of energy levels and wave functions using an efficient preconditioner with the inexact spectral transform method , 114(21):9254 - 9264 , 2001 . gustavo avila and tucker carrington jr . , using a pruned basis , a non - product quadrature grid , and the exact watson normal - coordinate kinetic energy operator to solve the vibrational schrödinger equation for c2h4 , 135(6):064101 , 2011 . richard dawes and tucker carrington jr . , how to choose one - dimensional basis functions so that a very efficient multidimensional basis may be extracted from a direct product of the one - dimensional functions : energy levels of coupled systems with as many as 16 coordinates , 122(13):134101 , 2005 .
we propose a new algorithm for the calculation of vibrational spectra of molecules using the tensor train decomposition . under the assumption that the eigenfunctions lie on a low - parametric manifold of low - rank tensors , we suggest using well - known iterative methods that utilize matrix inversion ( lobpcg , inverse iteration ) and solve the corresponding linear systems inexactly along this manifold . as an application , we accurately compute vibrational spectra ( 84 states ) of the acetonitrile molecule ch on a laptop in one hour using only mb of memory to represent all computed eigenfunctions .
a recent breakthrough in quantum computing has been the realization that quantum computation can proceed solely through single - qubit measurements on an appropriate quantum state .the canonical example of such a resource state is the cluster state , which is a universal resource for mbqc on suitable lattices or graphs .a handful of other universal resources for mbqc have recently been identified , but it is still not known what properties of a quantum many - body system allow for mbqc to proceed . in recent work , we have proposed that the ability to perform mbqc on a quantum many - body system may be identified using appropriate correlation functions as order parameters .this claim stems from the observation that mbqc is a means of preparing resource states for gate teleportation . with a cluster state, it is possible by local measurements and feedforward alone to prepare such resource states allowing gate teleportation for a universal set of gates between essentially any set of qubits in the lattice .the performance of mbqc can be determined by calculating the fidelity between the resource state that is actually prepared and the ideal resource state . here, we demonstrate how to express the resource state _after _ the measurements in terms of correlation functions of the original state _ prior _ to the measurements .these results provide an alternate perspective to theorem 1 in , which shows that the gates in the cluster state mbqc scheme function because of certain correlations in the original cluster state .in particular , our results apply to characterize gate performance in quantum states that are _ not _ the cluster state ; with such correlation functions , it is possible to directly quantify the suitability of a given quantum many - body state for performing such mbqc gates .we note that our work has a close relation to the concept of localizable entanglement . for the state of a quantum many - body system, the localizable entanglement between two arbitrary qubits is defined as the maximum amount of entanglement that can be created between these two qubits by performing local measurements on the remaining qubits .if this entangled state is viewed as a resource for quantum teleportation , then the localizable entanglement serves to quantify the ability to perform the trivial or ` identity ' gate ( i.e. , teleportation ) using local measurements .we note that the localizable entanglement in some systems can be quantified by correlation functions , using similar techniques as described here .our work generalizes these results by considering non - trivial quantum gates , which include multi - qubit quantum gates .the paper is structured as follows . 
in sec . [ sec : background ] , we review some of the essential terminology and mathematical structure of cluster - state quantum computing . sec . [ sec : general ] presents the key general results of the paper , relating mbqc quantum gates to correlation functions . the correlation functions for the identity gate are calculated explicitly in sec . [ sec : identity ] , and non - trivial gates including the -gate , hadamard , and -rotation gates are presented in sec . [ sec : nontrivial ] . two - dimensional gates are addressed in sec . [ sec:2d ] , and a general method for concatenating gates in sec . [ sec : concatenate ] . we finish with some brief conclusions in sec . [ sec : conclusions ] . the pauli matrices are labeled , , and are defined as . the group generated by the pauli matrices under matrix multiplication is known as the pauli group . the pauli group on -qubits is defined as . the clifford group on qubits is defined to be the group of unitary operators that map the pauli group onto itself ; i.e. , is in the clifford group if for all . the stabilizer formalism is a method of describing a state of a quantum system by specifying a set of eigenvalue relations instead of its components in some basis . the eigenvalues of a complete set of commuting observables completely specify a state . we define the _ stabilizers _ of a quantum state to be the set of operators for which the state is an eigenstate . clearly , any two stabilizers must commute , and thus the set of stabilizers forms an abelian group . this group can be specified by its generators , and homomorphisms on the group can be specified completely by their effect on the generators . as a result it is sufficient to study the generators , a subset of the group , instead of the whole group . for example , the state can be described as the state which is stabilized by the operators and ; the stabilizer group of this state is generated by these two stabilizer operators . the stabilizer formalism was first used to describe quantum error correction codes , but it is widely applicable to a variety of other situations . the standard stabilizer formalism is defined to only allow elements of the pauli group to be used as stabilizers . a set of pauli stabilizers satisfies some key properties : * elements of either commute or anti - commute . * if and anticommutes with both and , then commutes with . the power of the stabilizer formalism lies in its ability to compactly describe certain quantum states , as well as their evolution under clifford group operations and pauli measurements . first , an arbitrary -qubit state requires complex numbers to describe completely , by specifying the contribution from each of the basis vectors of the -qubit system . for systems which are stabilized by the pauli group , a set of stabilizers is sufficient to describe an -qubit stabilizer state , i.e. , stabilizers can form a complete set of commuting observables . hence , if applicable , the stabilizer formalism offers a compact way of denoting states of a quantum system . second , the stabilizer formalism is very efficient in describing a state under the evolution of unitary operators belonging to the clifford group , the group of operators which map the pauli group back to itself under conjugation . it is also efficient for describing the evolution of a state of a multi - qubit system under a projective measurement in the , , or basis .
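as a small numerical illustration of the stabilizer relations above , consider the two - qubit bell state , which is stabilized by the commuting operators xx and zz :

```python
import numpy as np

x = np.array([[0, 1], [1, 0]])
z = np.array([[1, 0], [0, -1]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
xx, zz = np.kron(x, x), np.kron(z, z)

# the state is a +1 eigenstate of both stabilizer generators, and the generators commute
print(np.allclose(xx @ bell, bell), np.allclose(zz @ bell, bell))
print(np.allclose(xx @ zz, zz @ xx))
```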
a simple prescription exists which tells us how to obtain the stabilizers of the post - evolution / measurement state from the pre - evolution / measurement state .the cluster state is a many - qubit entangled state .consider an arbitrary graph with a qubit at each vertex ; the cluster state on this graph is characterized by the stabilizers for each qubit of the state , where is the set of qubits adjacent to ( via the graph structure ) .in other words , the cluster state satisfies for all .two examples of cluster states on graphs will be considered here : a one - dimensional line , for which the stabilizers take the form , and a two - dimensional square lattice , for which the stabilizers take the form .consider a one - dimensional lattice of qubits prepared in the state . singling out two qubits , and , we wish to consider a measurement sequence on the remaining qubits in the lattice that yields a two - qubit resource state on qubits and for gate teleportation .let label the measurement outcomes , and be the corresponding projector .following the measurements , a unitary conditional on is applied to qubit .averaged over all possible measurement outcomes , the resulting resource state is equivalently , we can characterize this resource state using expectation values of bipartite pauli operators on qubits and , as = { \textstyle \sum_j } { \rm tr } [ ( ab_j)p_j \rho_0 p_j ] \,,\ ] ] where . the set of such correlation functions , for spanning the set of pauli operators , will completely specify the bi - partite resource state . for each of the mbqc gates given in , we can make use of a remarkable relation : there exists a string of operators acting on some set of the measured qubits which is _ independent _ of the measurement outcomes , and an operator on that is also independent of , such that now , using the projector properties and gives \,.\ ] ] thus we can relate the resource state prepared _after _ the sequence of measurements to a correlation function of the original state _ prior _ to measurements .that is , the correlation functions characterize the _ _ post-__measurement resource state using expectation values of strings of operators on the _ _ pre-__measurement state .this argument is trivially extendible to multi - qubit gates .it is critical to this development that one can identify such a string of operators . in the examplespresented , the correction operator is a pauli operator , and essentially it is the simple algebraic properties of the pauli operators which are responsible for the existence of .that the corrections are always pauli operators makes the analysis of the clifford gates especially simple , and we find that for such operators takes the form of a product of pauli operators . even for our non - clifford gate , we can still identify an appropriate operator ( this time , a sum of product of pauli operators ) .we will consider performing an identity gate between two qubits in a line with an odd number of qubits between , qubit and qubit . to perform this gate one measures on each of the qubits between these two , and measures on qubit and the qubit .we label measurement outcomes by the eigenvalues of the measured operators , which for pauli operators are either or . specifically , we label the measurement outcome for the measurement on the qubit by , and the outcome for the measurement on the qubit ( ) by ( ). 
we can then define two parities /2 \ , , \qquad p_x = \bigl[1- \bigl(\prod_{j=1}^{j = l } m_{2j-1 } \bigr)\bigr]/2 \,,\ ] ] and write the correction unitary we now seek to identify the commutation identity of eq . .the unitaries only act on the space ( qubit ) so operators that act only on the space are unchanged on commuting through the projectors and do not depend on the measurement outcome . for the operators and , we need to absorb the factors of that arise from the correction unitary by adding factors of the measured observables . if we perform an measurement for example then , where is the projector on the eigenstate .we have and therefore we have the relations at this stage , we note that the correlation functions we would expect for an ideal identity gate are . using eq . , we find = { \rm tr } \bigl[\bigl(\prod_{j=0}^{j= l } k_{k+2j } \bigr ) \rho_0 \bigr]\ , , \\\label{eq : zz } \langle zz\rangle & = { \rm tr } \bigl [ z_{k } \bigl(\prod_{j=1}^{j = l } x_{k+2j-1 } \bigr ) z_{k+2l+1 } \rho_0 \bigr ] = { \rm tr } \bigl[\bigl(\prod_{j=1}^{j = l } k_{k+2j-1 } \bigr ) \rho_0 \bigr]\,.\end{aligned}\ ] ] the correlation function follows similarly .as these correlations are equivalent to expectation values of cluster stabilizers , if is the cluster state these expectation values will both be unity .other expectation values , for example local expectation values or , or those involving any other pauli operator combination , can also be explicitly determined .the set of all such correlation functions will completely characterize the resource state .we now turn our attention to other single - qubit gates . rather than directly consider correlation functions over arbitrary lengths , we restrict our attention to a fixed length ( specifically , 3 intermediate qubits ) between and . in section [ sec : concatenate ] , we will show how to concatenate such fixed - length gates together ( possibly with the identity gate ) to form gates of arbitrary lengths .specifically , we consider creating bipartite resource states between qubits and by measuring qubits 2 , 3 , and 4 along with measurements on qubits 0 ( to the left ) and 6 ( to the right ) .see fig .[ fig:1dgates ] . [ cols="<,^,^,^,^,^,^,^",options="header " , ] here we construct the correlation functions for the hadamard gate .we perform the hadamard gate , by measuring on qubits 0 and 6 and measuring on qubits 2 , 3 and 4 .this will give us correlation functions between qubits 1 and 5 .the ideal resource state for hadamard gate teleportation satisfies . defining /2 \, , \qquad p_x^h = \bigl[1 - m_0 m_3 m_4 \bigr]/2 \,,\ ] ] the correction unitary for the hadamard gate is then we then have and therefore the correlation functions are given by = { \rm tr}\bigl[k_1 k_3 k_4 \rho_0 \bigr ] \ , , \nonumber \\ \langle zx\rangle & = { \rm tr}\bigl[z_1 y_2 y_3 x_5 z_6 \rho_0 \bigr ] = { \rm tr}\bigl[k_2 k_3 k_5 \rho_0 \bigr ] \,.\end{aligned}\ ] ] we note that these correlation functions can be expressed as expectation values of products of cluster stabilizers ; they will yield a value of one if is the cluster state .the gate is performed on a 1-d cluster state with qubit 1 as input and qubit 5 as output as in fig .[ fig:1dgates ] . the gate is implemented by measuring on qubits 0 and 6 , on qubit 3 and on qubits 2 and 4 .the ideal resource state for gate teleportation satisfies . 
defining /2 \ , , \qquad p_z^{\pi/2 } = \bigl[1 - m_0m_2m_3m_6\bigr]/2 \,,\ ] ]the correction unitary for the gate is then the relevant correlation functions are then = { \rm tr}\bigl[k_2 k_4 \rho_0 \bigr]\ , , \nonumber \\\langle x(-y)\rangle & = -{\rm tr}\bigl[z_0 x_1 y_3 x_4 y_5 z_6 \rho_0 \bigr ] = { \rmtr}\bigl[k_1 k_3 k_4 k_5 \rho_0\bigr]\ , . \label{eq : pi2correlation}\end{aligned}\ ] ] again , we note that these correlation functions can be expressed as expectation values of products of cluster stabilizers ; they will yield a value of one if is the cluster state .we now consider a non - clifford gate a rotation by angle about the axis .again , we consider three intermediate qubits , as in fig .[ fig:1dgates ] .the ideal resource state on and for gate teleportation of satisfies , where . in mbqc, such a resource state is prepared on qubits 1 and 5 by measuring on qubits 0 and 6 , measuring on qubits 2 and 4 , and measuring on qubit 3 , where .we note that the measurement basis on qubit 3 depends explicitly on the outcome of the measurement on qubit 2 .the correction unitary on qubit 3 is with /2 \ , , \qquad p_z^{\theta } = \bigl[1 - m_0m_3m_6\bigr]/2 \,.\ ] ] for the measurements on qubits 2 and 4 , we can use and , and for qubits 0 and 6 we can use and . for the measurement yielding result ,the situation is slightly more complicated , because .however , it is straightforward to show that thus , we have note that the right hand side of the equations is of the desired form : independent of the measurement results .these results allow us to express the two - qubit expectation values for the post - measurement resource state in terms of correlation functions on the pre - measurement state .we have = { \rm tr}\bigl[k_2 k_4 \rho_0 \bigr ] \\ \langle x x_{-\theta}\rangle & = { \rm tr}[(z_0x_1x_3x_5z_6(\cos^2\theta + \sin^2\theta z_3x_4z_5 ) \nonumber \\ & \qquad\qquad + \cos\theta\sin\theta z_0 x_1 x_2y_3 x_5 z_6(1 -z_3x_4z_5))\rho_0 ] \nonumber \\ & = { \rm tr}[(k_1k_3k_5(\cos^2\theta + \sin^2\theta k_4 ) + \cos\theta\sin\theta ( z_0y_1z_2 ) k_2 k_3 ( 1-k_4)k_5)\rho_0 ] \,.\end{aligned}\ ] ] the term is not a cluster state stabilizer , and has an expectation value of 0 for the cluster state .all other terms are stabilizers of the cluster state , and both correlation functions can be seen to have an expectation value 1 on the perfect cluster state .in addition , this result agrees with eq .( [ eq : pi2correlation ] ) for .the single - qubit gate sequences , and their corresponding correlation functions , can be straightforwardly generalized to the cluster state on a two - dimensional square lattice .one - dimensional ` strips ' can be created in the square lattice by performing measurements ( and their corresponding pauli corrections ) to remove qubits from either side of the strip .however , for the purposes of defining simple correlation functions , it is easier to define single - qubit gate sequences along diagonal lines , as in fig .[ fig:2dgates](a ) .such diagonals eliminate the need for measurements along the sides of the strip , and they are only required at the ends .consider an example where we label qubits such that is at coordinate and is at coordinate .for an ideal cluster state , the products of stabilizers and along and parallel to this diagonal are themselves stabilizers . 
by measuring qubits for and qubits for ,we use the standard rules for updating stabilizers to give {1,1}x_{n , n}z_{0,1}z_{1,0}z_{n , n+1}z_{n+1,n } \,,\\ \prod_{i=1}^{i = n-1 } k_{i+1,i-1 } & \rightarrow \bigl[1-\textstyle{\prod_{i=1}^{i = n-1}}m_{i+1,i-1}\bigr ] z_{1,1}z_{n , n } z_{2,0 } z_{n+1,n-1 } \,.\end{aligned}\ ] ] from this expression , we see that it is only necessary to measure qubits the six ` end ' qubits in the -basis in order to obtain the two - qubit resource state stabilized by and ; it is _ not _ necessary to measure the qubits on either side of the diagonal strip . with such diagonal strips , the correlation functions for the identity gate take the form \ , , \qquad \langle zz \rangle = { \rm tr}\bigl [ \bigl ( \prod_{i=1}^{j = n-1 } k_{i+1,i-1 } \bigr ) \rho_0 \bigr ] \,.\ ] ] we consider the simplest version of a two - qubit gate : the csign gate ( a clifford gate ) that also implements a cross - over of control and target qubits .this gate is defined in terms of its action on pauli operators as the measurement sequence is illustrated in fig .[ fig:2dgates ] .the relevant correlation functions from the above ` input - output ' relations that will characterize the csign gate are the expectations of four products of stabilizers : \ , , \\\langle z_{a_{\rm in } } x_{b_{\rm in } } x_{b_{\rm out } } \rangle & = { \rm tr}\bigl [ k_{b_{\rm in}}k_4 k_{b_{\rm out } } \rho_0\bigr]\ , , \\\langle z_{a_{\rm in } } z_{a_{\rm out } } \rangle & = { \rm tr}\bigl [ k_1 k_4 \rho_0\bigr]\ , , \\\langle z_{b_{\rm in } } z_{b_{\rm out } } \rangle & = { \rm tr}\bigl [ k_2 k_3 \rho_0\bigr]\,,\end{aligned}\ ] ] can be appended with diagonal strings of stabilizers in the direction of the arrows ( and terminated with measurements as in fig .[ fig:2dgates](a ) ) to reach distant qubits . with measurements on qubits 1 - 4, the resulting state provides the csign transformation . and , where ( ) denotes a measurement in the -basis ( -basis ) .the two strings of stabilizer products , centred on sites connected by the parallel diagonal lines , directly quantify the fidelities of single - qubit gates between and in mbqc .( b ) the measurement sequence corresponding to the csign gate between and ., width=384 ]it is straightforward to concatenate many clifford gates together into a single gate , and to calculate the resulting correlation functions .the essential idea is to equate the ` output ' qubit of the first gate with the ` input ' qubit of the second . for the gates sequences defined here , recall that measurements are performed at one qubit beyond each end of the gates . in this case where we combine two gates , the measurement prior to the ` input ' end of the second gate and the ` output ' qubit of the first gate are not performed , and their corrections are to be left out . finally , an measurement is performed on this joining qubit . as we have expressed the correlation functions of our clifford gates in terms of cluster stabilizers , it is straightforward to determine the correlation functions describing a combined gate : one simply takes the product of the corresponding stabilizer operators .we note that it is still possible , though less straightforward , to concatenate non - clifford gates .the difficulty with non - clifford gates is that they involve adaptive measurements , wherein the measurement basis can depend on the entire ` past history ' of the computation . 
as a result ,it is typically impossible to write the general form of the correlation functions for non - clifford gates in a way that is independent of the specific choice of prior gates . however , in any given situation , such correlation functions can be defined using the methods presented here .we have presented a general method for expressing the performance of quantum gates in the cluster - state model of mbqc as correlation functions on the pre - measurement resource state . with such correlation functions , viewed as order parameters, one can investigate the existence of a robust ordered phase in various models of quantum many - body systems that will allow for mbqc to occur .one such model is considered in .this work is supported by the australian research council .
in measurement - based quantum computation ( mbqc ) , local adaptive measurements are performed on the quantum state of a lattice of qubits . quantum gates are associated with a particular measurement sequence , and one way of viewing mbqc is that such a measurement sequence prepares a resource state suitable for ` gate teleportation ' . we demonstrate how to quantify the performance of quantum gates in mbqc by using correlation functions on the pre - measurement resource state .
in the rapidly growing literature on the modeling of complex networks one of the most important classes of network models is the random graph .one well - studied such model is the model consisting of the ensemble of all graphs that have a given degree sequence , and this model has proved useful in understanding a variety of network properties .realistic applications often require that we restrict ourselves to graphs with no multiple edges between any vertex pair and no self - edges . unfortunately, both the analytic and numerical study of such networks is known to present challenges . in this short paperwe consider computer algorithms for generating graphs uniformly from this ensemble .we are concerned primarily with directed graphs , since the examples we will consider are directed , but the concepts discussed generalize in a straightforward fashion to the undirected case also .there are two algorithms in common use for the generation of random graphs with single edges. we will refer to them as the _ switching algorithm _ and the _ matching algorithm _ .we argue that , under certain circumstances , both of these algorithms can generate a nonuniform sample of possible graphs .we then present a new algorithm based on the monte carlo procedure known as _ go with the winners _ , which generates uniformly sampled graphs .we compare the three methods in the context of a particular network problem estimation of the density of commonly occurring subgraphs or _motifs_and show that , in this context , the difference between them is small .this result is of some practical importance , since the `` go with the winners '' algorithm , although statistically correct , is slow , while the other two algorithms are substantially faster .[ cols="<,^,^,^,^,^,^,^,^,^,^,^,^ " , ]in this section we describe the three algorithms under consideration .first , we describe the switching algorithm , which uses a markov chain to generate a random graph with a given degree sequence . for simplicity ,we discuss directed networks with no mutual edges ( vertex pairs with edges running in both directions between them ) .the case with mutual edges is a simple generalization .the method starts from a given network and involves carrying out a series of monte carlo switching steps whereby a pair of edges is selected at random and the ends are exchanged to give .however , the exchange is only performed if it generates no multiple edges or self - edges ; otherwise it is not performed .the entire process is repeated some number times , where is the number of edges in the graph and is chosen large enough that the markov chain shows good mixing .( exchanges that are not performed because they would generate multiple or self - edges are still counted to insure detailed balance . )this algorithm works well but , as with many markov chain methods , suffers because in general we have no measure of how long we need to wait for it to mix properly .theoretical bounds on the mixing time exist only for specific near - regular degree sequences .we empirically find , however , that for many networks , values of around appear to be more than adequate ( see fig .[ fig2 ] ) .an alternative approach is the matching algorithm , in which each vertex is assigned a set of `` stubs '' or `` spokes''the sawn - off ends of incoming and outgoing edges according to the desired degree sequence .( one can also assign mutual - edge stubs for networks that include such edges . 
)then in - stubs and out - stubs are picked randomly in pairs and joined up to create the network edges .if a multiple or self - edge is created , the entire network is discarded and the process starts over from scratch .this process will correctly generate random directed graphs with the desired properties .unfortunately , however , many real - world networks have a heavy - tailed degree distribution that includes a small minority of vertices with high degree .all other things being equal , the expected number of edges between two such vertices will often exceed one , making it unlikely that the procedure above will run to completion , except in the rarest of cases . to obviate this problem a modification of the method can be used in which , following selection of a stub pair that creates a multiple edge , the network is not discarded , and an alternative stub pair is selected at random . in general this method generates a biased sample of possible networks but , as we will show , not significantly so for our purposes ( see table [ table1 ] ) .the `` go with the winners '' algorithm is a non - markov - chain monte carlo method for sampling uniformly from a given distribution . when applied to the problem of graph generation , the method is as follows .we consider a colony of graphs . as with the matching algorithm ,we start with the appropriate number of in - stubs and out - stubs for each vertex and repeatedly choose at random one in - stub and one out - stub from the graph and link them together to create an edge .if a multiple edge or self - edge is generated , the network containing it is removed from the colony and discarded . to compensate for the resulting slow decline in the size of the colony , its sizeis periodically doubled by cloning each of the surviving graphs ; this cloning step is carried out at a predetermined rate chosen to keep the size of the colony roughly constant on average .the process is repeated until all stubs have been linked , then one network is chosen at random from the colony and assigned a weight : where is the number of cloning steps made and is the number of surviving networks .the mean of any quantity ( for example , the number of occurrences of a given subgraph ) over a set of such networks is then given by where is the value of in network .in fig . [ hub1 ] we show a comparison of the performance of our three algorithms when applied to a simple toy network .the network consists of an out - hub with ten outgoing edges , an in - hub with ten incoming edges , and ten nodes with one incoming edge and one outgoing edge each . given this degree sequence, there are just two distinct network topologies with no multiple edges , as shown in fig .[ hub1]a and [ hub1]b .there is only a single way to form the network in [ hub1]a , but there are 90 different ways to form [ hub1]b .we generated random networks using each of the 3 methods described here and the results are summarized in fig . 
[ hub1]c .as the figure shows , the matching algorithm introduces a bias , undersampling the configuration of fig .this is a result of the dynamics of the algorithm , which favors the creation of edges between hubs .the switching and `` go with the winners '' algorithms on the other hand sample the configurations uniformly , generating each graph an equal number of times within the measurement error on our calculations .the `` go with the winners '' algorithm truly samples the ensemble uniformly but is far less efficient than the two other methods .the results given here indicate that the switching algorithm produces essentially identical results while being a good deal faster .the matching algorithm is faster still but samples in a measurably biased way .now consider the study of network motifs .we are interested in knowing when particular subgraphs or motifs appear significantly more or less often in a real - world network than would be expected on the basis of chance , and we can answer this question by comparing motif counts to random graphs .some results for the case of the `` feed - forward loop '' motif are given in table [ table1 ] .in this case the densities of motifs in the real - world networks are many standard deviations away from random , which suggests that any of the present algorithms is adequate for generating suitable random graphs to act as a null model , although the `` go with the winners '' and switching algorithms , while slower , are clearly more satisfactory theoretically .the matching algorithm was measurably nonuniform for our toy example above , but seems to give better results on the real - world problem .overall , our results appear to argue in favor of using the switching method , with the `` go with the winners '' method finding limited use as a check on the accuracy of sampling .accuracy checks are also supplied by analytical estimates for subgraph numbers .numerical results in were done using the switching algorithm .in this paper we have compared three algorithms for generating random graphs with prescribed degree sequences and no multiple edges or self - edges .two of the three have been used previously , but suffer from nonuniformity in their sampling properties , while the third , a method based on the `` go with the winners ''monte carlo procedure , is new and samples uniformly but is quite slow .of the two older algorithms , we show that one , which we call the `` matching '' algorithm , has measurable deviations from uniformity when compared to the `` go with the winners '' method , although for graphs typical of practical studies these deviations are small enough to make no significant difference to most previously published results .the other older algorithm , which we call the `` switching '' algorithm and which is based on a markov chain monte carlo method , samples correctly in the limit of long times and in practice is found to give good results when compared with the `` go with the winners '' method .overall , therefore , we conclude that the switching algorithm is probably the algorithm of choice , with the `` go with the winners '' algorithm finding a supporting role as a check on uniformity , although its slowness makes it impractical for large - scale use .+ we thank oliver d. king for discussions and for pointing out and demonstrating that the matching algorithm of the supplementary online material of does not uniformly generate simple graphs .99 b. bollobas , _ random graphs _ , 2nd edition , academic press , new york ( 2001 ) .e. 
bender and e. canfield , the asymptotic number of labelled graphs with given degree sequences , _ j. combin. theory ser .a _ * 24 * , 296307 ( 1978 ) .m. molloy and b. reed , the size of the giant component of a random graph with a given degree sequence , _ combinatorics , probability and computing _ * 7 * , 295305 ( 1998 ) .m. molloy and b. reed , a critical point for random graphs with a given degree sequence , _ random structures and algorithms _ * 6 * , 161179 ( 1995 ) .m. e. j. newman , s. h. strogatz , and d. j. watts , random graphs with arbitrary degree distribution and their applications , _ phys .e _ * 64 * , 026118 ( 2001 ) .f. chung and l. lu , the average distances in random graphs with given expected degrees , _ proc ._ * 99 * , 1587915882 ( 2002 ) .t. a. b. snijders , enumeration and simulation methods for 01 matrices with given marginals , _ psychometrika _ * 56 * , 397417 ( 1991 ) .a. r. rao , r. jana , and s. bandyopadhya , a markov chain monte carlo method for generating random -matrices with given marginals , _indian j. of statistics _ * 58 * , 225242 ( 1996 ) . j. m. roberts , jr . , simple methods for simulating sociomatrices with given marginal totals , _ social networks _ * 22 * , 273283 ( 2000 ) .r. kannan , p. tetali , and s. vempala , simple markov - chain algorithms for generating bipartite graphs and tournaments , _ proceedings of the acm symposium on discrete algorithms _ ( 1997 ) .y. chen , p. diaconis , s. p. holmes , and j. s. liu , sequential monte carlo methods for statistical analysis of tables , _ discussion paper 03 - 22 _ , institute of statistics and decision sciences , duke university ( 2003 ) .s. itzkovitz , r. milo , n. kashtan , g. ziv , and u. alon , subgraphs in random networks , _ phys .e _ * 68 * , 026127 ( 2003 ) .s. maslov , k. sneppen , and a. zaliznyak , pattern detection in complex networks : correlation profile of the internet , preprint cond - mat/0205379 ( 2002 ) .j. park and m.e.j .newman , the origin of degree correlations in the internet and other networks , _ phys .e _ * 68 * , 026112 ( 2003 ) .m. e. j. newman , assortative mixing in networks , _ phys .* 89 * , 208701 ( 2002 ) .s. maslov and k. sneppen , specificity and stability in topology of protein networks , _ science _ * 296 * , 910913 ( 2002 ) .a. roberts , l. stone , island - sharing by archipelago species ._ oecologia _ * 83 * , 560 - 567 ( 1990 ) .s. shen - orr , r. milo , s. mangan , and u. alon , network motifs in the transcriptional regulation network of escherichia coli , _ nature genetics _ * 31 * , 6468 ( 2002 ) .r. milo , s. shen - orr , s. itzkovitz , n. kashtan ., d. chklovskii , and u. alon , network motifs : simple building blocks of complex networks , _ science _ * 298 * , 824827 ( 2002 ) .r. milo , s. itzkovitz , n. kashtan , r. levitt , s. shen - orr , i. ayzenshtat , m. sheffer and u. alon , superfamilies of designed and evolved networks , _ science _ * 303 * , 153842 ( 2004 ) . o. d. king , private communication .d. aldous and u.v .vazirani , `` go with the winners '' algorithms , _ proceedings of the ieee symposium on foundations of computer science _ , pp .492501 ( 1994 ) .p. grassberger and w. nadler , `` go - with - the - winners '' simulations , preprint cond - mat/0010265 ( 2000 ) .f. brglez , d. bryan , and k. 
kozminski , combinatorial profiles of sequential benchmark circuits , _ proceedings of ieee international symposium on circuits and systems _ , 1929 ( 1989 ) .there are singular networks where the markov process is not ergodic .this lack of ergodicity can be removed by making small modifications as in , and by choosing the number of switching steps per edge to be randomly distributed around q.
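as a concrete illustration of the switching procedure discussed above, here is a minimal python sketch (ours, not the code used for the results in this paper) of the degree-preserving edge switch for a simple directed graph held as an edge list; rejected switches still consume an attempt, as required for detailed balance, and the default number of attempts per edge is an arbitrary placeholder rather than a value taken from the text.

```python
import random

def switching_randomize(edges, attempts_per_edge=100, rng=random):
    """
    Degree-preserving randomization of a simple directed graph by edge switching.

    `edges` is an iterable of (source, target) pairs with no self-edges or
    multiple edges.  Roughly attempts_per_edge * len(edges) switch attempts are
    made; attempts that would create a multiple edge or self-edge are counted
    (they consume an attempt) but not performed.
    """
    edge_list = list(edges)
    edge_set = set(edge_list)
    m = len(edge_list)
    for _ in range(attempts_per_edge * m):
        i, j = rng.randrange(m), rng.randrange(m)
        (a, b), (c, d) = edge_list[i], edge_list[j]
        # proposed switch: (a, b), (c, d)  ->  (a, d), (c, b)
        if a == d or c == b:                              # would create a self-edge
            continue
        if (a, d) in edge_set or (c, b) in edge_set:      # would create a multiple edge
            continue
        edge_set.difference_update([(a, b), (c, d)])
        edge_set.update([(a, d), (c, b)])
        edge_list[i], edge_list[j] = (a, d), (c, b)
    return edge_list
```

the matching and `` go with the winners '' procedures could be sketched in the same style; the switching step is shown here because it is the method recommended in the conclusions.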
random graphs with prescribed degree sequences have been widely used as a model of complex networks . comparing an observed network to an ensemble of such graphs allows one to detect deviations from randomness in network properties . here we briefly review two existing methods for the generation of random graphs with arbitrary degree sequences , which we call the `` switching '' and `` matching '' methods , and present a new method based on the `` go with the winners '' monte carlo method . the matching method may suffer from nonuniform sampling , while the switching method has no general theoretical bound on its mixing time . the `` go with the winners '' method has neither of these drawbacks , but is slow . it can however be used to evaluate the reliability of the other two methods and , by doing this , we demonstrate that the deviations of the switching and matching algorithms under realistic conditions are small compared to the `` go with the winners '' algorithm . because of its combination of speed and accuracy we recommend the use of the switching method for most calculations .
extinction is becoming a greater and greater issue all over the world and is a cause of extreme concern .it has been estimated that anthropogenic extinctions are resulting in the loss of a few percent of the current world s biosphere , which is of magnitude 3 to 4 times the natural background rate .the world conservation union ( iucn ) , through its species survival commission ( scc ) develops criteria to assess the extinction rate for plants and animals all over the world which enables them to keep a so - called _ red list _( www.redlist.org ) of species which are threatened with extinction in order to promote their conservation .the list currently shows over 16,000 threatened species around the world - a 45% increase on the figure from the year 2000 .it has been shown by examining both discrete and continuous populations that , at least analytically speaking , there exists a critical habitat size above which survival of a population is assured . herewe examine what role the population density plays since , intuitively , one would expect that even for , a sufficiently large population would be needed for growth . lattice based models are widely used in ecology ( see for example * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) and so we introduce such a model that incorporates birth , death and diffusion . unlike other similar models such as the _ contact process _ ( e.g. * ? ? ?* ; * ? ? ?* ) , here two individuals must meet in order to reproduce whereas one individual can die by itself .this results in negative growth rates for populations that fall below a critical density due to reproduction opportunities becoming rare . in real populations ,the positive correlation between size and per capita growth rate of a population is known as the allee effect , which has recently received much interest ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?if the allee effect is strong enough , the population size may even decrease for small population sizes as in our model . due to this behaviour, the effect has been examined with respect to extinction ( see for example * ? ? ?* ; * ? ? ?* ; * ? ? ?* and references therein ) but _ primarily _ deterministically and so without fluctuations in the population density . since fluctuations are likely to be highly significant for small populations , we include the effects of these by examining monte carlo ( mc ) simulations in the hope to gain a more realistic picture of the importance of the population density on the chances of survival .further to the stochastic methods used to study allee effects , such as stochastic differential equations ( e.g. * ? ? ?* ) , discrete - time markov - chains ( e.g. * ? ? ?* ) or diffusion processes ( e.g. * ? ? 
?* ) , lattice based models have space , as well as time , as a variable and take into account individuals , rather than just the macroscopic view of the population .+ + after introducing the model in the next section , we examine the allee effects present in our model in section [ section : allee effects ] , particularly with respect to a sudden decrease in population in section [ section : decrease in popdens ] .the effects of the fluctuations are examined in section [ section : fluctuations ] .we have a -dimensional square lattice of linear length where each site is either occupied by a single particle ( 1 ) or is empty ( 0 ) .a site is chosen at random .if the site is occupied , the particle is removed with probability , leaving the site empty .if the particle does not die , a nearest neighbour site is randomly chosen .if the neighbouring site is empty , the particle moves to that site .if however the neighbouring site is occupied , with probability and not only . ], the particle reproduces , producing a new particle on another randomly selected neighbouring site , conditional on that chosen square being empty .we therefore have the following reactions for a particle : where represents an empty site .a time step is defined as the number of lattice sites and so is equal to approximately one update per site .we use nearest neighbours and , throughout most of the paper , periodic boundary conditions which , although more unrealistic than , say , reflective boundary conditions , allow for better comparison with analytical results , since periodic systems remain homogenous .we later , however , examine some results with reflective boundary conditions . due to the conflict between the growth and decay processes in the model , we expect that with certain values of and , extinction of the population would occur .indeed , many models displaying such a conflict ( see for example * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) show a critical parameter value separating an _ active _ state and an _ inactive _ or _ absorbing _ state that , once reached , the system can not leave . as the rate of decay increases , the so - called _ order parameter _( often the density of active sites ) decreases , becoming zero at a critical point , marking a change in phase or _phase transition_. in our case , the absorbing state would represent an empty lattice and so extinction of the population . to show that this is indeed the case for our model, we derive a so - called mean field equation ( e.g. * ? ? ?* ) for the density of occupied sites .assuming the particles are spaced homogeneously in an infinite system we have the first term is the proliferation term and so is proportional to , the probability that the particle does not die , the probability that the next randomly chosen site to give birth on is empty and finally the probability that it gives birth if this is the case , .the second term represents particle annihilation and so is proportional to both and , the probability that the chosen particle dies .( [ mean field ] ) has three steady states , for , are imaginary , resulting in being the only real stationary state and so , here , extinction occurs in all circumstances . keeping a constant from now on, we then have that our critical death rate is given by which separates the active phase representing survival and the absorbing state of extinction . 
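to make the update rules and the mean-field picture concrete, the following python sketch (our illustration; the parameter values are placeholders rather than those used in the paper, and p_birth and p_death simply stand for the birth and death probabilities defined above) implements the birth-death-diffusion process on a two-dimensional periodic lattice and records the population density after each time step.

```python
import numpy as np

def run_lattice(L=64, p_birth=0.5, p_death=0.05, steps=1000, rho0=1.0, rng=None):
    """
    Birth-death-diffusion process on an L x L lattice with periodic boundaries.
    Each site holds 0 or 1 particle; one time step is L*L random single-site
    updates (roughly one update per site).  Returns the density after each step.
    """
    rng = rng or np.random.default_rng()
    occ = (rng.random((L, L)) < rho0).astype(np.int8)   # initial density rho0
    neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    densities = []
    for _ in range(steps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            if not occ[i, j]:
                continue
            if rng.random() < p_death:                  # death: the site empties
                occ[i, j] = 0
                continue
            di, dj = neigh[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L
            if not occ[ni, nj]:                         # empty neighbour: hop there
                occ[ni, nj], occ[i, j] = 1, 0
            elif rng.random() < p_birth:                # occupied neighbour: reproduce
                bi, bj = neigh[rng.integers(4)]
                ti, tj = (i + bi) % L, (j + bj) % L
                if not occ[ti, tj]:                     # offspring needs an empty site
                    occ[ti, tj] = 1
        densities.append(occ.mean())
    return np.array(densities)
```

scanning p_death at fixed p_birth and checking whether the density reaches zero within the run gives the kind of iterative estimate of the critical death rate described below.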
clearly eq .( [ mean field ] ) is limited by the exclusion of diffusion and noise as well as the false assumption of a homogenous population density .we do however find at least good qualitative support for our mean field analysis through numerical simulations .[ critical parameters ] a ) shows the critical values of and separating the regions with one and three stationary states according to both the mean field equation and numerical simulations for , and dimensions .c + for the mean field ( line ) , 1 ( ) , 2 ( ) and 3 ( ) dimensional simulations.,title="fig:",width=302 ] + + for the mean field ( line ) , 1 ( ) , 2 ( ) and 3 ( ) dimensional simulations.,title="fig:",width=302 ] we see convincing agreement between our analytical and numerical results , particularly for higher dimensions . the mc simulations were carried out on an initially fully occupied lattice with linear sizes , and for each dimension respectively and we observed whether extinction occurred during time steps .for each birth rate , the simulation was repeated 500 times .if a single run survived , was increased , whereas if extinction occurred in all runs , was reduced .using the same initial seed for the random number generator , an iterative procedure produced a critical value with accuracy .this iterative procedure was then repeated 5 times with different seeds and the average taken .only a small number of repeats were needed since the largest variance of the values obtained was of the order of . from the figure we find that to 3 d.p . for , , 0.098 and 0.105 in 1,2 and 3 dimensions respectively .due to the finite size of the lattices and the finite time used for the above simulations , the actual critical death rates are likely to differ slightly from those given and more accurate techniques would have to be used to obtain them ( see * ? ? ?* for examples of such techniques ) . with , as increased , the steady - state population density decreases , becoming zero at as shown in fig .[ critical parameters ] b ) , marking the phase transition .we see that the steady state population density _ appears _ to change continuously in 1 dimension , whilst discontinuously in 2 and 3 dimensions in agreement with the mean field results . if indeed this is the case , we call such phase transitions _ continuous _ and _ first - order _ respectively . in both cases ,the phase transition is marked by a very rapid decrease in population density .+ + briefly relating our model to biology , we note that the death rate for a given species may fluctuate for any number of reasons but it must certainly be true that at least the average value of must be less than for the species to have ever been in existence . however , as we have shown , even a temporary increase in above , due to deforestation or disease for example , will cause a very rapid and perhaps unrecoverable decrease in population .extinctions however may also occur for reasons other than having a super - critical death rate .we investigate the roles of allee effects and that of fluctuations in the next three sections where we examine simulations in the sub - critical or active phase and use the constant value .one reason we observe a decline in population growth at low densities is due to individuals finding it harder to find a mate .this is empirically known to occur in both plant ( e.g. * ? ? ?* ) and animal ( e.g. * ? ? ?* ) populations . 
in our model, this aspect is incorporated by the fact that two individuals are required for reproduction whereas an individual can die by itself .as density decreases , each individual therefore finds it increasingly difficult to find another for reproduction before they die . to examine this, we return to our mean field equation ( [ mean field ] ) .it is easy to show that whereas and are stable stationary points of eq .( [ mean field ] ) , is unstable . since in the active phase , , any populationwhose density will be driven to extinction by the dynamics of the system .in fact we find that for , we test this numerically in 1 , 2 and 3 spatial dimensions by finding the value of that separates the active and absorbing states for different initial conditions .the mc simulations were carried out and the critical death rate found iteratively in the same fashion as in section [ section : the model ] .the results are shown in fig .[ phase diagram ] separating the 2 long - term outcomes of the system for different initial population density according to the mean field ( line ) and the 1 ( + ) , 2 ( ) and 3 ( ) dimensional mc simulations.,width=302 ] and clearly show the importance of the initial population density for survival .the density dependence appears to increase with dimensionality , which we expect , since two individuals meeting becomes progressively harder as the dimensionality of the system increases .the existence of this critical population density is highly significant to the conservation of species .it is clear that a sufficiently small population will not grow , regardless of how much space and resources are available .it also has repercussions if a population density were to suddenly decrease due to disease or particularly harsh meteorological conditions , for example .we examine this further in section [ section : decrease in popdens ] after examining the role of fluctuations .we expect extinction due to fluctuations in the population density to occur when the order of the fluctuations approaches the mean population density .empirically , demographic stochasticity ( that is , chance events of mortality and reproduction ) is known to be greater in smaller populations than in larger ones .population and habitat size are , on average , positively correlated and so , particularly with the existence of the critical population density , we expect extinction due to fluctuations to occur for smaller lattice sizes as has been suggested by others ( e.g. * ? ? ? * ; * ? ? ?we see in fig .[ fluctuations ] ) and 3 ( ) dimensional systems .the hashed line has gradient -0.5 for the eye and indicates the power law behaviour .insert : the fluctuations v.s . dimensional case with the same symbol notation.,width=302 ] that , numerically , the fluctuations in the population density decrease with the number of lattice sites through a power law with exponent -0.50 in all dimensions , which is what we would expect from the _ central limit theorem_. simulations were carried out for fixed and and the standard deviation obtained from surviving runs for each lattice size . 
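the scaling of these fluctuations with system size can be extracted with a short helper; the sketch below (ours) takes quasi-steady-state density series, for example produced by the lattice sketch given earlier with the transient discarded, and fits the standard deviation of the density against the number of lattice sites on a log-log scale, where the central limit theorem suggests an exponent close to -0.5.

```python
import numpy as np

def fluctuation_exponent(density_series_by_size):
    """
    `density_series_by_size` maps the number of lattice sites N to a 1-d array
    of quasi-steady-state densities (transient removed).  Returns the fitted
    exponent alpha and the standard deviations, assuming sigma ~ N**alpha.
    """
    sizes = sorted(density_series_by_size)
    sigmas = [np.std(density_series_by_size[n]) for n in sizes]
    alpha, _ = np.polyfit(np.log(sizes), np.log(sigmas), 1)
    return alpha, np.array(sigmas)
```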
the insert in fig .[ fluctuations ] shows how the size of the fluctuations also increase as the critical point is approached .these larger fluctuations will also increase the probability of extinction as indicated in fig .[ increasingl ] varies with for the 1 dimensional model with ( from left to right ) , = 0.04 , 0.05 and 0.06 .similar results are seen in 2 and 3 dimensions.,width=302 ] where we examine the probability of survival , that is , the probability that extinction has not occurred up to some time .we examine the 1 dimensional case only using three different values of with and repeat the simulation times for each lattice size .the figures clearly show how the probability of survival increases with , yet decreases as increases .indeed , as is approached , population density decreases and fluctuation size increases resulting in species with higher death rates being more susceptible to extinction .this is indeed observed in nature where long - lived species are known , in general , to have a higher chance of survival than short - lived ones .apart from the initial conditions , it is certainly conceivable that the population density could fall below the critical value due to a reduction in population size . from eq .( [ long term behaviour ] ) we expect that the population will survive only as long as .we simulate this by increasing to 1 at some and then returning to what it was before , once a density has been reached .we examine this here in 2 dimensions with now reflective rather than the previously used periodic boundary conditions .qualitatively , all previous results have been very similar when using reflective boundary conditions but here we want to increase this degree of realism in our model . for 2 dimensional simulations with an initial population density , the critical death rate is 0.093 ( 3 d.p . ) as shown in fig .[ phase diagram ] .so , for , we would expect that if , the population will survive , with the population density returning to what it was before , whereas for , extinction will occur . due to the fluctuations in the population that occur in the simulations , we expect more of an increase in the likelihood of extinction as rather than the definite survival / extinction result that the mean field predicts .c + ( hashed line ) , i.e. the probability that extinction has not occurred up to time .,title="fig:",width=302 ] + + ( hashed line ) , i.e. the probability that extinction has not occurred up to time .,title="fig:",width=302 ] fig .[ disease ] shows the results for , where we see that for those runs that _ did _ survive , the population density does indeed return to what it once was .we also see , as expected , that most of the runs did result in extinction .in fact the survival rate was 0.004 . from fig .[ disease ] b ) we observe that there is a time delay of approximately 40 time steps between the sudden decrease in population and when the survival probability begins to fall . assuming a particle that dies the time it is picked , survives time steps , it is easy to show that the expected lifetime ( in time steps ) of an individual is given by .we therefore have a time delay of approximately four lifetimes ( recall ) which , for a lot of species , is ample time to act . 
in order to prevent extinction in such a case, the population density must be increased beyond .this has important ecological implications since it shows that the probability of extinction can be decreased , not only by increasing the population ( which is of course not always possible ) , but also by a _decrease _ in habitat size . to see whether this hypothesis holds, we simulate this again using but this time so that the chance of survival is negligible .this time however , once the population density has been reduced , the area covered by the lattice is reduced by half .the organisms in the half that remains are left where they are , whereas those in the half that is removed are randomly placed in the remaining half .this then doubles the population density , bringing the population out of the sub - critical population density .once the population has recovered and stabilised , the lattice size is returned to how it once was .the results are shown in fig .[ disease recovery ] for the surviving runs only after a disease breakout at due to the re - sizing of the lattice .the lattice is returned to how it was originally at and the population recovers its original value.,width=302 ] and clearly show the recovery of the population once the lattice size has been reduced .in fact , out of 1000 runs , the probability of survival rose from 0.003 to 0.281 .we expect there to be an optimal habitat reduction size - too large a reduction and the population will be in danger from large fluctuations associated with smaller habitat sizes whereas too small a reduction and the density will not be increased sufficiently .we therefore plot in fig .[ habitatreduction ] , starting from .,width=302 ] the probability that the system does not go extinct up to some , , against the reduction in , .we again use and . for small , little due to the density not being reduced enough , yet for larger , the larger fluctuations resulting from the smaller value of also cause to be small . whilst reflective boundary conditions were used here, very similar results were obtained using periodic boundary conditions .in fact , with periodic boundary conditions , the probability of survival increased more significantly by the decrease in due to the population being able to grow in two directions rather than in just one after the habitat size has been returned to what it once was .this of course could be achieved in reality by reducing the habitat from more than one direction .+ + this model was proposed to represent how the area in which a population is found could be reduced in real - life . the species could be driven towards one end of the habitat with a boundary placed to prevent them leaving the desired area .this boundary could then be removed once the population has recovered .clearly this is easier for larger , land - based animals but in principle , at least , could be achieved for all species .allee effects are certainly observed in nature ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* et al ) and have been studied with respect to extinction . using a lattice model ,we have observed allee effects together with the role of fluctuations , with the advantage of being able to examine the effects of habitat size .being able to model the population as a group of _ individuals _ which move , breed and die , rather than as a variable in an equation , has enabled us to gain a more realistic insight into how real populations behave . 
rather than the clear - cut conclusions that deterministic models produce ,conservationists often examine the _ probability _ that a population will maintain itself without significant demographic or genetic manipulation for the foreseeable ecological future . in this spirit , for a sufficiently large population density we have shown that the probability of survival does increase with habitat size due to the smaller fluctuations. however , far more important are the death rate and population density since if these fall on the wrong side of their critical values , extinction is almost a certainty .our findings are certainly significant for the design of habitats .the notion of a critical habitat size , mentioned in the introduction , is misleading , since , it is certainly not true that for a fixed population size , the larger the habitat size the better .regardless of the amount of space and resources available , a population will only grow if the density is above its critical value .we also proposed , in the last section , a method for greatly reducing the probability of extinction by reducing the habitat size once a species has become rare .our notion of density has been that of the number of individuals per unit area . while we assumed this to be constant in space when deriving our mean field equation ( [ mean field ] ) , clearly this will vary amongst real populations .in fact , for populations that are found in patches , the value of the density will depend very much on the scales used .the same is true of the mc results as shown in figure [ clusteringpicture ] , .a value of was used and the picture was taken at when ,width=302 ] where we see clear examples of clustering . in nature , species will cluster to varying degrees and hence the value of the critical population density will also vary and would need to be estimated in each case .+ + compared to other stochastic models , we claim the use of lattice models gives a more realistic insight into the way in which real populations behave .we do still however , recognise the inaccuracies in our model and the difficulties in implementing the observations .we believe the model to be valid , to a greater or lesser degree to all species which rely on others for growth , perhaps particularly those who live alone yet sexually reproduce .in fact , due to the great variety of species , we have presented the above as ideas which may be of qualitative , rather than quantitative , relevance to conservation management .we would like to thank beta oborny for very helpful discussions and references .alastair windus would also like to thank the engineering and physical sciences research council ( epsrc ) for his ph.d .studentship .gyllenberg , m. , hanski , i. , hastings , a. , 1997 . structured metapopulation models . in : hanski , i. , gilpin , m. ( eds . ) ,metapopulation biology : ecology , genetics and evolution . academic press , london ,
in the interest of conservation , the importance of having a large habitat available for a species is widely known . here , we introduce a lattice - based model for a population and look at the importance of fluctuations as well as that of the population density , particularly with respect to allee effects . we examine the model analytically and by monte carlo simulations and find that , while the size of the habitat is important , there exists a critical population density below which extinction is assured . this has large consequences with respect to conservation , especially in the design of habitats and for populations whose density has become small . in particular , we find that the probability of survival for small populations can be increased by a reduction in the size of the habitat and show that there exists an optimal size reduction .
keywords : extinction ; allee effects ; critical population density ; habitat size ; fluctuations ; mean field ; monte carlo simulations
respiratory mucus is found in the conducting airways covering the ciliated epithelium .the mucus is typically split into two layers , the periciliary layer between the cilia and the top layer forming a viscoelastic gel .the mucus layer protects the epithelium from inhaled particles and foreign materials due to its sticky nature .accumulation of these materials is avoided as a result of the coordinated beating of the cilia the so - called mucociliary clearance .the mucus together with the mucociliary escalator of the conducting airways is a very efficient clearance mechanism also preventing efficient drug delivery across this barrier . +this respiratory mucus , composed from mucin macromolecules , carbohydrates , proteins , and sulphate bound to oligosaccharide side chains forms a biological gel with unique properties .the interaction of all kind of inhaled drugs and drug carriers with this layer and the penetration potential in and through the mucus is of outmost importance for possible therapeutic approaches .+ clearly , for drug delivery purposes the biochemistry of penetrating objects plays an important role but also the rheological behavior of the mucus layer .the rheological properties of mucus have been already investigated in many studies , most of them focusing on human tracheal mucus but they also include the examination of cystic fibrosis sputum , cervicovaginal mucus , gastropod pedal mucus , as well as pig intestinal mucus .an excellent overview on the rheological studies is given by lai et al. .since typically only small amounts of mucus are available for experiments , microscopic methods like magnetic microrheometry with test beads of the size of to were already applied in the 1970 s .multiple particle tracking ( mpt ) has evolved to one of the most favored methods in context with the microrheological characterization of biological fluids in general and of mucus in particular .still , the number of microrheological studies where the viscoelastic moduli are determined from the brownian fluctuations spectrum of colloidal probes remain limited . one important observation in this particular study of lai et al .was that the viscosity observed using a sized colloidal probe is much smaller than the results obtained on the macroscale .the results were interpreted with a model that assumes that the colloidal probe used can diffuse almost freely through the polymeric mucin network . in consequence , the influence of a variety of particle coatings has been examined extensively during the past decade with the goal to optimize particle transport through this natural barrier . 
only recently ,it was shown by use of active microrheology and cryogenic - scanning - electron - microscopy ( csem ) that mucus should have a porous structure on the micron scale .the active manipulation of immersed particles offers a deeper insight into the material properties of mucus , especially into the strength of its scaffold .a further step was to demonstrate , that passive immersed particles show a very heterogenous diffusion behavior , ranging from particles firmly sticking to the supposed scaffold and particles moving almost freely in an viscous environment .however , so far , studies utilizing optically trapped microparticles have been scarce although they are able to greatly enhance our understanding of material properties .they enable the mapping of pore sizes and , by taking the local mobility of particles into account , allow to distinguish in an unambiguous way between a weak and a strong confinement . by utilizing strong optical traps ,the rigidity of the mucus mesh can be probed in order to determine which forces the material is able to resist to . in this study , we will first use a sophisticated linear response theory based on the kramers - kronig relation in order to obtain the microscopic complex loss and storage modulus .due to the heterogeneity of the mucus , these values show a significant scattering , especially if compared to our model gel , a hydroxyethylcellulose gel ( hec ) . while the mucins in the mucus form the gel network by non - covalent interchain interactions , the hec is a classical hydrogel without any covalent interchain interactions. therefore one might expect certain differences , but an explanation for the cause of the large heterogeneity of the mucus is still missing .additionally we compare our microscopic data to results obtained by macroscopic oscillatory shear rheometry .the results from the microscopic and macroscopic measurements are in perfect agreement for the hec gel , while there is a huge difference for the mucus that seems to be much stiffer on the macroscopic scale .the csem images allow to hypothesize a foam like structure for the mucus with a comparable rigid scaffold and pores with `` walls '' that are filled with a solution of low viscosity and elasticity , compared to the mesh like structure of hec . by evaluating the volume percentage of the pores compared to the scaffold we can estimate its elastic module by use of a foam model .clearly , the biochemistry of penetrating objects plays an important role in the diffusional properties of the mucus but we will show that it has also unique viscoelastic properties that differ strongly from synthetic gels . we postulate that both aspects need to be considered for drug delivery to the airways using particulate carriers .all our experiments on mucus were performed with native respiratory horse mucus .it was obtained during bronchoscopy from the distal region of four healthy horses and stored at until use . according to earlier studies ,such storage conditions are not known to influence the material properties . as a synthetic model gel for comparison , a ( w / w ) hydroxyethylcellulose gel ( hec ; natrosol 250 hhx pharm , ashland aqualon functional ingredients )was chosen because it had similar viscoelastic moduli on the microscale .for the microrheology two kinds of particles were used , polymethylmethacrylate ( pmma ) beads with a size of and melamin resin beads with a size of ( sigma - aldrich ) . 
a gene frame ( art .- no .ab-0576 , abgene , epsom , united kingdom ) was used in microrheology as a sample cell to handle the low sample volume of .+ in preparation of the experiments , hec was dissolved in water and shaken gently for 24 hours . for the microrheology ,approximately of each particle suspension ( solid content : ) were mixed with of sample resulting in particle concentrations of less than .thus , hydrodynamic interactions between multiple particles are negligible .these samples were vortexed for about 5 minutes before use to make sure that the beads were distributed homogeneously . afterwards , a gene frame was filled with the respective amount of sample and sealed airtight using a coverslip .no additional preparation of the samples was necessary for experiments in the cone and plate rheometer .all experiments in both setups were performed at room temperature .a rotational mars ii ( thermo scientific gmbh , karlsruhe , germany ) was used to perform the small and large amplitude oscillatory shear ( saos and laos ) experiments . with saos experiments the linear response of the materialis tested , whilst laos experiments are used to characterize the nonlinear properties .first strain amplitude sweeps were performed in order to determine the region of linear response and the nonlinear properties of both materials and then a frequency sweep in the linear range was performed .the rheometer was equipped with a cone and plate geometry with a cone angle of for the measurements on mucus and a second geometry with an angle of in case of the hec gel . in case of mucus, this enabled us to perform measurements on volumes as small as with an acceptable signal - to - noise ratio . in case of hec ,bigger sample volumes were available so using the more sensitive geometry was a feasible option .the optical tweezers setup described in ref . was used to perform passive microrheology .particle positions in the focus of the laser beam were recorded with a high speed camera ( hispec 2 g ; fastec imaging ) at a frame rate of .the recorded picture series were analyzed using a particle tracking algorithm based on the cross - correlation of successive images .the complex shear modulus was then determined by applying a method proposed by schnurr . 
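The cross-correlation tracking step mentioned above can be sketched as follows. This is a generic FFT-based implementation with a parabolic sub-pixel refinement of the correlation peak, written by us for illustration; it is not the authors' actual tracking code, and the sign convention of the returned shift should be checked against the camera geometry. The modulus extraction described next takes such position trajectories as input.

```python
import numpy as np

def displacement(img_a, img_b):
    """Estimate the shift between two frames from the peak of their
    cross-correlation, computed via FFT (periodic boundaries assumed)."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    shift = []
    for axis, p in enumerate(peak):
        # parabolic sub-pixel fit around the peak (a common, generic choice)
        step = np.eye(2, dtype=int)[axis]
        cm = corr[tuple(np.subtract(peak, step))]
        c0 = corr[peak]
        cp = corr[tuple(np.add(peak, step))]
        denom = cm - 2.0 * c0 + cp
        delta = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
        shift.append(p + delta - center[axis])
    return np.array(shift)           # frame-to-frame shift in pixels

def track(frames):
    """Cumulative trajectory from successive frame-to-frame displacements."""
    pos = [np.zeros(2)]
    for f0, f1 in zip(frames[:-1], frames[1:]):
        pos.append(pos[-1] + displacement(f0, f1))
    return np.array(pos)
```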
for this purpose , the langevin equation describing the interaction of the confined bead with its surroundingsis recast in frequency - space in such a way that particle displacements and the brownian random force are linked by the susceptibility or compliance where the susceptibility is a function of the trap stiffness and the frequency - dependent friction coefficient .it is a complex quantity whose imaginary part is related to the power spectral density of particle displacements by the fluctuation - dissipation - theorem with boltzmann s constant and the temperature .the kramers - kronig - relations allow the determination of the real part of the compliance by computing the principal value integral the function contained within the integral encompasses two poles at which are excluded from integration by the means of the principal value integral indicated by the letter `` p '' in the integration symbol .finally , the relation of the compliance and the complex shear modulus is given by where is the particle radius .the dependence of the complex shear modulus on the particle size given in this equation is the general one which arises due to the increasing drag force when choosing larger spheres .however , it does not include additional influences like for example caging effects of the spheres in pockets of a porous material like mucus .such size dependencies which are caused by inhomogeneous structures within a fluid can be explicitly studied by varying the particle size ( see for example ) .this was not conducted in our study , though .+ just as in case of the macrorheologic shear modulus , the microrheologic shear modulus as well is composed of the elastic contribution and the viscous contribution , where .however , due to the presence of the optical trap , there is an additional elastic contribution which has to be subtracted from the measured in order to gain the actual sample properties . while it is possible to perform an online calibration of the trap stiffness in newtonian fluids this is not possible in complex fluids like mucus .thus , separate measurements with colloids in water were performed beforehand in a separate sample cell for this purpose using both the equipartition and the drag force method . typically , the stiffness ranged between and .due to experimental restrictions in terms of the duration of a measurement as well as the influence of a translational drift a frequency of was chosen as the lower frequency cutoff .hence , the microrheologic shear modulus is only given starting from a frequency of .there is an upper frequency cutoff as well which is defined by the nyquist sampling theorem as half of the recording frequency , i.e. in our case . 
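The numerical chain described in this section, from the position power spectral density to the imaginary compliance (fluctuation-dissipation theorem), then to the real compliance (Kramers-Kronig principal-value integral), and finally to the complex shear modulus with the trap stiffness subtracted, can be sketched as below. Prefactor conventions depend on whether a one- or two-sided spectrum is used; the code assumes a one-sided PSD and evaluates the principal-value integral on a discrete grid simply by dropping the singular bin, so it is a sketch of the procedure rather than the authors' implementation.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def shear_modulus_from_psd(freqs, psd, radius, T, k_trap):
    """freqs [Hz], one-sided PSD of bead positions [m^2/Hz] -> (G', G'') in Pa."""
    w = 2.0 * np.pi * freqs
    # fluctuation-dissipation theorem (one-sided convention assumed)
    alpha_im = w * psd / (4.0 * kB * T)
    # Kramers-Kronig: real part of the compliance from a discrete P-integral
    alpha_re = np.empty_like(alpha_im)
    dw = np.gradient(w)
    for i, wi in enumerate(w):
        integrand = w * alpha_im / (w**2 - wi**2)
        integrand[i] = 0.0                    # omit the singular bin ("P" integral)
        alpha_re[i] = (2.0 / np.pi) * np.sum(integrand * dw)
    alpha = alpha_re + 1j * alpha_im
    # generalized Stokes relation, then remove the purely elastic trap contribution
    G = 1.0 / (6.0 * np.pi * radius * alpha)
    G_storage = G.real - k_trap / (6.0 * np.pi * radius)
    G_loss = G.imag
    return G_storage, G_loss
```

In practice the result is only trusted between the lower frequency cutoff set by drift and measurement duration and the upper cutoff discussed next.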
in order to minimize aliasing errors , which may be caused due to the fourier - transforms , we chose a value of well below the nyquist frequency as the upper cutoff , instead .cryo - sem images were taken as described in ref .sample gels were filled in a thin dialysis capillary and immediately frozen in liquid propane to only allow formation of amorphous water and circumvent formation of crystalline water .capillaries were cut to expose the brim to sublimation of the amorphous water inside the gels .finally the surface of the dry polymer scaffold was sputter - coated with platinum and samples were transferred into the sem ( dsm 982 gemini ; zeiss ) and imaged at ( , working distance ) .additional csem measurements were performed with a jsm-7500f sem ( jeol , tokyo , japan ) equipped with an alto 2500 cryo transfer system ( gatan , abingdon , uk ) .respiratory horse mucus was placed between two metal freezing tubes ( gatan , abingdon , uk ) and the samples were frozen by plunging into liquid nitrogen . inside the cryo transfer system the upper tube was knocked off to create a fracture surface and sublimation was performed for 15 min at 178 .samples were sputter - coated with platinum at 133 , transferred to the sem cryo - stage and imaged at 133 and 5 kv acceleration voltage ( working distance 8.0 mm ) .csem images were analyzed by imagej 1.48v software ( national institutes of health , usa ) to determine the fraction of pore volume in the mucus .the relation of pore area to measured surface area at the brim was assumed to correspond to the relation of pore volume to mucus volume .image contrast and brightness was adjusted appropriately and a threshold was set to distinguish the inside of the pores from the pore walls ( fig.[mucuscsem2 ] b ) .pore areas were determined by the program using the _ analyze particles _ function ( fig.[mucuscsem2 ] c ) ) .the sum of the pore areas was related to the total image area .6 images with an overall area of 1458 were analyzed .the shear modules from the microrheological measurements are shown in fig .[ hecmucusot ] .data sets were recorded by confining particles in the focus of the optical tweezers at different locations within the bulk of the sample .the average values of more than 10 measurements are depicted by symbols while the regions in which all values are distributed are drawn as shaded areas .both the elastic and the viscous shear modulus of mucus and the hec gel are in the range from to . in case of hec ( fig .[ hecmucusot](a ) and ( b ) ) , the shear modulus shows a limited variance when switching locations within the sample , but for the case of mucus ( fig .[ hecmucusot](c ) and ( d ) ) , this variability is significantly enhanced , especially in the intermediate frequency range . for mucus , both viscous and elastic shear moduli increase monotonically and reach a plateau eventually .these results agree with earlier observations .the hec data sets can not be compared directly to that former study since in the present study a higher concentration of was chosen to give a better representation of the microrheologic properties of mucus .nonetheless , besides the larger scatter for the mucus , the results for both the passive microrheology of mucus and of the hec gel in our actual study are quite comparable , i.e. the absolute values are very similar , they lay in the same order of magnitude and even their functional behavior in our accessible frequency domain is almost indistinguishable . 
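A minimal version of the pore-fraction analysis described above can be written with standard image-processing tools: threshold the image to separate pore lumina from pore walls and relate pore area to total area. The threshold is the critical, user-dependent step; the Otsu criterion used here is only one possible automatic choice and not necessarily the one applied in ImageJ by the authors.

```python
import numpy as np
from skimage import io, filters, measure

def pore_fraction(path):
    img = io.imread(path, as_gray=True)
    thr = filters.threshold_otsu(img)   # one possible automatic threshold choice
    pores = img < thr                   # assumes pore interiors image darker than walls
    labels = measure.label(pores)
    areas = [r.area for r in measure.regionprops(labels)]
    return pores.mean(), areas          # area fraction and individual pore areas

# averaging the fraction over several images mirrors the procedure in the text:
# fractions = [pore_fraction(p)[0] for p in image_paths]; c = float(np.mean(fractions))
```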
a completely different result is found in the macrorheology .results from large amplitude oscillatory shear ( laos ) experiments are shown as shear stress versus shear strain plots , i.e. lissajous plots , together with the respective shear modulus versus strain amplitude ( fig .[ hecmucusampsweep ] ) . while the lissajous plots for hec gels are always elliptic within the examined strain amplitude range ( fig .[ hecmucusampsweep](a ) ) , this is not the case for mucus ( fig . [ hecmucusampsweep](c ) ) . instead of ellipses ,the curves deform into parallelograms when exceeding a strain amplitude of . while the response of a linear viscoelastic material typically has the shape of an ellipse in a lissajous plot , deviations indicate a non - linear response which is the case for mucus .this is also confirmed by the shear modulus versus strain amplitude plots ( fig .[ hecmucusampsweep](b ) and ( d ) ) . whilein case of hec both the elastic and the viscous modulus only show weak changes up to strain amplitudes of , in case of mucus a significant decrease becomes apparent for both .the onset of this decrease in can already be observed at .when exceeding a value of , it additionally becomes apparent in .this nonlinear behavior is an indication of the particular behavior of mucus .however , in order to avoid higher harmonics in the small amplitude oscillatory ( saos ) linear response measurements , the shear strain has to be kept below this onset of nonlinearity . for the hec model gel ,the critical shear amplitude is and for the mucus .thus , for hec a constant strain amplitude of and for the mucus a much lower value of for the frequency sweep was used .after completion of the amplitude sweep , a series of frequency sweeps was performed with the same sample . using the strain amplitudes determined during the amplitude sweep , frequencies between and were applied stepwise with five repetitions each to reduce the influence of noise while keeping the total duration of the experiment as short as possible .a short measurement duration was important to avoid evaporation of the samples .for both the hec gel and mucus , the average of three of these sweeps is shown in fig .[ hecmucusfreqsweep ] . in the measured frequency range from to find a monotonous increase in the moduli for the hec gel but for the mucus already a roughly constant plateau is observed .furthermore , the hec gel shows a viscous behaviour at low frequencies while the mucus has a higher elastic modulus for all frequencies .this is most likely a consequence of the strong non - covalent interchain interactions of the mucins . in the same graph, we plot the averaged data from the microrheology ( fig .[ hecmucusot ] ) . here, the most striking differences between mucus and the hec gel becomes apparent .for the hec , we observe a continuous transition from the macro- to the microrheologic data .it is even possible to fit the combined saos and microrheology data approximately with the two - component maxwell fluid model that consists of a viscoelastic contribution for the polymeric part and a newtonian contribution for the solvent .deviations from the model occur for at frequencies below . 
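The two-component Maxwell fluid mentioned here, a single Maxwell element in parallel with a Newtonian solvent, can be fitted jointly to the storage and loss moduli as sketched below. The parametrization (plateau modulus G_M, relaxation time tau, solvent viscosity eta_s) is the standard textbook form and is our assumption; it is not necessarily the exact expression used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def maxwell_storage(w, G_M, tau):
    return G_M * (w * tau) ** 2 / (1.0 + (w * tau) ** 2)

def maxwell_loss(w, G_M, tau, eta_s):
    return G_M * (w * tau) / (1.0 + (w * tau) ** 2) + eta_s * w

def joint_model(w_twice, G_M, tau, eta_s):
    # frequency vector stacked twice: first half fits G', second half fits G''
    w = np.split(w_twice, 2)[0]
    return np.concatenate([maxwell_storage(w, G_M, tau),
                           maxwell_loss(w, G_M, tau, eta_s)])

def fit_maxwell(w, G_p, G_pp, p0=(1.0, 1.0, 1e-3)):
    popt, _ = curve_fit(joint_model, np.concatenate([w, w]),
                        np.concatenate([G_p, G_pp]), p0=p0, maxfev=20000)
    return dict(zip(("G_M", "tau", "eta_s"), popt))
```

Fitting both moduli simultaneously keeps the single relaxation time consistent between the elastic and viscous parts, which is what allows the combined SAOS and microrheology data to be described with one parameter set.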
in principleone could improve the agreement between the fit and the data by incorporating more relaxation times but the additional physical insight will be limited .one crossover frequency between elastic and viscous part is visible at and a second crossover might be present above , however , it can not be verified in the scope of our experiments since the relevant frequencies lie outside of the accessible spectrum .thus , the hec gel behaves mostly as a viscoelastic fluid below and as a viscoelastic solid above this value .+ in case of mucus in fig .[ hecmucusfreqsweep]b , no such smooth transition from the macro- to the microrheologic data set is observed .a significant gap between the results gained by both experiments is present which encompasses three to four orders of magnitude .the saos data sets indicate that and are only weakly dependent on the frequency within the probed frequency range .a slightly more pronounced frequency dependence is observed for the microrheology data .however , all values of the viscous and elastic modulus remain between and for over more than three orders of magnitude in frequency .this clearly shows that there is a remarkable difference between the viscoelastic properties on the micro- and the macroscale .of course , it is known that the microrheolical properties of mucus depend on the particle size even well below 1 , but our optical detection method did not allow to explore this regime . in any case , as one expects to find an even lower viscosity for smaller particles , the difference in fig [ hecmucusfreqsweep]b will be even more pronounced .[ mucuscsem ] the csem images of a hec gel and a mucus sample are shown for two different spatial resolutions . the polymeric network of the hec shows a typical homogeneous mesh for a gel . the mucus shows a more heterogeneous distribution of polymeric material and especially in the large magnification a heterogeneous porous structure is visible .this scaffold of pore walls is made out of much thicker polymeric material than the polymeric network of the hec gel .when comparing the microrheologic shear modulus of hec and mucus ( fig . [ hecmucusot ] )we find similar viscoelastic properties .both the elastic as well as the viscous modulus show a comparable response spectrum .it should be noted , though , that the local properties in mucus vary more significantly which is due to the heterogeneity of the material that could be observed in csem images . at frequencies above , roughly stays constant at , a value that is significantly below the value of that is found in the macrorheology at a frequency of .the laos measurements also revealed significant differences between the hec gel and the mucus .the latter showed a nonlinear response behavior already at strain amplitudes of .a similar behavior was found by ewoldt et al. in laos experiments with gastropod pedal mucus . from the csem images we know that the mucus has a porous structure with a thick scaffold that builds the pore walls .while the rheometer probes the whole bulk of the fluid , the microrheology accesses mostly the contents of the pores which is formed by an aqueous solution of dissolved biopolymers .this structure is very similar to that of a foam .foams in general consist of a porous material which is filled with another material of much lower stiffness .this foam like structure can be modeled only if we assume significant simplifications .a suitable approach is the mori - tanaka model which considers a foam - like material with elastic walls . 
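To make concrete how such a foam model converts the pore volume fraction into an estimate of the scaffold stiffness, the sketch below inverts a Mori-Tanaka-type two-phase relation of the generic form G_macro = G_wall (1 - c)/(1 + c B), with c the pore volume fraction and B a dimensionless coefficient depending on Poisson's ratio of the wall material, as described in the next paragraph. Both the explicit form of B and the numerical values used here are placeholders for illustration; they are not the values determined in this study.

```python
def scaffold_modulus(G_macro, c, B):
    """Invert an assumed Mori-Tanaka-type relation
        G_macro = G_wall * (1 - c) / (1 + c * B)
    for the wall (scaffold) modulus G_wall.  The functional form and the
    coefficient B are placeholders; consult the Mori-Tanaka literature for
    the exact expression used in a given study."""
    return G_macro * (1.0 + c * B) / (1.0 - c)

# illustration with hypothetical numbers (not the values of this study):
# a pore volume fraction of 0.5 and B = 1.5 would place the scaffold modulus
# a factor (1 + 0.75) / 0.5 = 3.5 above the macroscopically measured value.
ratio = scaffold_modulus(1.0, 0.5, 1.5)
```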
in this case ,the material is composed of two phases , one of which is the wall material and the other one of which is the material filling the pores . due to the very large difference in elastic propertieswe will fully neglect the contribution of the aqueous solution in the pores and then the total macroscopic shear modulus of mucus is linked to the shear modulus of the material of the pore walls by where is the volume fraction of the filling material and is a dimensionless number . under the assumption that the wall material is isotropic and homogeneous , it is given by with poisson s ratio . under the assumption of a volume fraction of the pores of , which we determined from the csem images , while assuming incompressibility of the pore walls ( ) the actual shear modulus of the wall or scaffold material lies above the values measured by the rheometer by a factor of .this means that the gap between macro- and microrheology increases even further when taking material porosity into account .given that the liquid inside the pores is rheologically comparable to an aqueous solution , the diffusion in mucus can be as fast as in water for small particles .for larger particles , size exclusion effects occur .particles above a certain cut - off size , which is determined by the pore size , can be trapped inside the mucus .however , also smaller particle can be retained in the mucus due to interactions with mucus components .our optical tweezers measurements showed the comparable microrheology of hec gel and respiratory horse mucus .thus hec gel might be an appropriate model to study if diffusion of particles through mucus is impeded by size exclusion effects , given the mesh sizes are similar to mucus pore sizes .however , it needs to be considered that retention of particles due to interaction with mucus components can not be evaluated by using hec gel .rheological characteristics on the micro- and on the macroscale of native equine respiratory mucus were compared to a synthetic hydroxyethylcellulose ( hec ) hydrogel for reference .our measurements revealed that mucus has peculiar rheological properties that may be best explained by its foam - like microstrucure .this foam like structure is an unique property of the mucus and has to be considered if transport properties of drugs have to be optimized .as the physiologial function of mucus is different at various organs ( e.g. respiratory , digestive , or reproductive tract ) , it appears intriguing to investigate whether such differences are also reflected in different structures and rheological properties across various organs and also species .+ obviously , the entrapment and clearance by mucus as well as the penetration of micro- and nanoparticles by and through mucus , respectively , will strongly depend on the interaction with mucus and the particular path taken by such objects .besides the chemistry of the interacting object the mucus behavior due to its structure is essential .knowledge of the basic structure and the understanding of the impact of those structural and functional features of mucus will have important bearings for the design of pulmonary drug delivery systems .of course , in any realistic situation of physiological relevance , the local ion strength , ph , temperature and local mechanical ( shear ) stresses will affect the mechanical properties of the mucus .these parameters might not only induce quantitative changes , but future studies have also to reveal if , e.g. 
, under certain circumstances a collapse of the scaffold structure might occur .michael hellwig and andreas schaper ( philipps - university marburg ) are acknowledged for assistance in csem measurements .we thank the german research association ( dfg - le 1053/16 - 1 and grk 1276 ) for financial support .we thank julian kirch for the part of the recording of the cryogenic scanning - electron - micrographs .afre torge thanks the fidel - project ( `` cystic fibrosis delivery '' , grant n 13n12530 ) for financial support by the german federal ministry of education and research ( bmbf ) .+ + b. button , l .- h .cai , c. ehre , c. kesimer , d. b. hill , j. k. sheehan , r. c. boucher , m. rubinstein , a periciliary brush promotes the lung health by separating the mucus layer from airway epithelia , science 337 ( 2012 ) 937 .http://dx.doi.org/10.1126/science.1223012 [ ] .a. henning , m. schneider , m. bur , f. blank , p. gehr , c .- m .lehr , embryonic chicken trachea as a new in vitro model for the investigation of mucociliary particle clearance in the airways , aaps pharmscitech 9 ( 2008 ) 521 .k. forier , a .- s .messiaen , k. raemdonck , h. deschout , j. rejman , f. de baets , h. nelis , s. c. de smedt , j. demeester , t. coenye , k. braeckmans , transport of nanoparticles in cystic fibrosis sputum and bacterial biofilms by single - particle tracking microscopy , nanomedicine 8 ( 2013 ) 935 .k. forier , k. raemdonck , s. c. de smedt , j. demeester , t. coenye , k. braeckmans , lipid and polymer nanoparticles for drug delivery to bacterial biofilms , journal of controlled release in press ( 2014 ) . r. h. ewoldt , c. clasen , a. e. hosoi , g. h. mckinley , rheological fingerprinting of gastropod pedal mucus and synthetic complex fluids for biomimicking adhesive locomotion , soft matter 3 ( 2007 ) 634 . http://dx.doi.org/10.1039/b615546d [ ] . a. macierzanka , n. m. rigby , a. p. corfield , n. wellner , f. bttger , e. n. c. mills , a. r. mackie , adsorption of bile salts to particles allows penetration of intestinal mucus , soft matter 7 ( 2011 ) 8077 .http://dx.doi.org/10.1039/c1sm05888f [ ] .m. yang , s. k. lai , y .- y .wang , w. zhong , c. happe , m. zhang , j. fu , j. hanes , biodegradable nanoparticles composed entirely of safe materials that rapidly penetrate human mucus , angewandte chemie , international edition 50 ( 2011 ) 2597 .j. kirch , a. schneider , b. abou , a. hopf , u. f. schfer , m. schneider , c. schall , c. wagner , c .- m .lehr , optical tweezers reveal relationship between microstructure and nanoparticle penetration of pulmonary mucus , proc .natl . acad .u. s. a. 109 ( 45 ) ( 2012 ) 18355 . http://dx.doi.org/10.1073/pnas.1214066109 [ ] .x. murgia , p. pawelzyk , u. f. schaefer , c. wagner , n. willenbacher , c. lehr , size - limited penetration of nanoparticles into porcine respiratory mucus after aerosol deposition , biomacromolecules 17 ( 2016 ) 1536 .a. ziehl , j. bammert , l. holzer , c. wagner , w. zimmermann , direct measurement of shear - induced cross - correlation of brownian motion , phys .103 ( 2009 ) 230602 . http://dx.doi.org/10.1103/physrevlett.103.230602 [ ] .m. capitanio , g. romano , r. ballerini , m. giuntini , f. s. pavone , d. dunlap , l. finzi , calibration of optical tweezers with differential interference contrast signals , rev .73 ( 4 ) ( 2002 ) 1687 .
native horse mucus is characterized by micro- and macrorheology and compared to a hydroxyethylcellulose ( hec ) gel as a model system . both systems show comparable viscoelastic properties on the microscale , and for the hec gel the macrorheology is in good agreement with the microrheology . for the mucus , by contrast , the viscoelastic moduli on the macroscale are several orders of magnitude larger than on the microscale . large amplitude oscillatory shear experiments show that the mucus responds nonlinearly at much smaller deformations than the hec gel . these observations support the assumption that the mucus has a foam - like structure on the microscale , in contrast to the typical mesh - like structure of the hec , a picture that is confirmed by cryogenic - scanning - electron - microscopy ( csem ) images . these images also allow us to determine the relative volume occupied by the pores and by the scaffold , from which we estimate the elastic modulus of the scaffold . we conclude that this particular foam - like microstructure should be considered a key factor for the transport of particulate matter , which plays a central role in mucus function with respect to particle penetration . the very different components that make up this mesh govern both the macroscopic and the microscopic behavior and thus contribute to the fate of particles after deposition . mucus , respiratory mucus , horse , microrheology , saos , laos
coevolution of the dynamics and topology of networks is widely observed in diverse systems from cellular biology to social networks . in the brain ,the spiking dynamics of neurons depends on how they are connected . on the other hand ,the connectivity can be modified by the spiking activity .the connections ( synapses ) between neurons in many brain areas are modified according to the spike - timing - dependent plasticity ( stdp ) rule .a most common stdp rule for excitatory neurons is as follows : the connection strength ( synaptic weight ) from neurons a to b is strengthened if a spikes before b ( long - term potentiation , or ltp ) , and weakened if a spikes after b ( long - term depression , or ltd ) .the amount of modification decreases with the time difference between the spikes of a and b. the connection between two neurons is lost if the synaptic weight is reduced below a threshold ; conversely , it can be established through consistent parings of the spikes of the neurons .studies that use stdp in spiking neural networks have shown a number of emergent properties . in this paper , we show that synfire chain connectivity , in which subsequent groups of neurons are connected into a feedforward network that supports sequential spiking of the neurons , emerges through coevolution of the spiking activity and the connectivity across many presentations of a training stimulus to a subset of neurons ( training neurons ) .sequential spiking of neurons is observed in a number of brain areas .some of the strongest experimental evidence for synfire chains producing spike sequences is from zebra finch premotor nucleus hvc ( proper name ) .projection neurons in hvc spike sequentially at precise times relative to the learned song .consistent with the synfire chain dynamics , cooling hvc uniformly slows down the song , and the sub - threshold membrane potentials of neurons rapidly depolarize 5 - 10 ms before they spike .it is well - established that synfire chains robustly produce spike sequences .however , how neurons are wired into synfire chains is not well - understood .an intriguing possibility is that synfire chains self - organize through activity - driven synaptic plasticity .earlier studies using stdp or similar hebbian rules resulted in short chains with a few groups .the most likely reason is that these rules are prone to producing unstable growth of connections .two recent studies introduced additional homeostatic synaptic rules to limit such instability , and showed that long synfire chains can form .the key idea behind both studies is to restrict the connectivity of the network after a certain amount of growth has occurred .fiete et al achieved this by limiting the total synaptic weight in and out of every neuron .however , a study using large scale simulations and mean - field analysis suggested that the regulation of the total synaptic weights does not prevent unstable network growth . 
in ( jun - jin model ) , we took a different approach , and imposed an axon remodeling rule that limits the number of strong connections , defined as those with synaptic weights above a threshold , that one neuron can maintain .reaching the limit leads to pruning of all weak connections from the neuron .there are two additional features of the jun - jin model .first , the model includes an gradual , activity - independent decay of synaptic weights , which we call potentiation decay .second , an activation threshold switches synapses on or off depending on the magnitude of the synaptic weights .this rule allows the active connections between neurons to form or disappear as the synaptic weights are modified .simulations of 1000 leaky integrate - and - fire neurons showed that synfire chains emerge from initially random active connections when 6 to 40 training neurons are intermittently activated by external inputs for many trials .the number of neurons in each group roughly equals to ( set to 10 in the simulations ) , and is not affected by the number of training neurons except for the first 2 - 3 groups . in this paper, we perform an in - depth analysis of the jun - jin model .we address unresolved fundamental questions such as what determines the lengths of the emergent chains and how the length distributions are influenced by the total number of the neurons ( network size ) .we establish that , when the network is randomly active , the synaptic plasticity rules in this model allow the network connectivity to fluctuate , but the synaptic weights remain in a statistically stationary distribution .this ensures that the chain formation does not depend on specific initial network connectivity .we demonstrate that in between training trials , sequential spikes can spontaneously emerge in the forming chain .this noise - induced re - activation of the chain creates connections from neurons outside of the chain to those in the chain , and plays a critical role in determining the length distributions of the final chains .most notably , there is an upper limit for the mean chain length as the network size becomes large . 
we show that slow potentiation decay leads to short chains with narrow length distributions , while fast potentiation decay leads to long chains with a wide length distributions .we compare the results of network simulations to a lottery - type stochastic process in which neurons are selected iteratively to enter a chain , and the chain stops growing when a loop is formed , either by selecting neurons already in the chain or by selecting neurons connected to the chain .the distribution of chain lengths from the network simulations fits well with distributions generated by the lottery process .the analysis of this simple growth model shows that the rate of potentiation decay influences the chain length distributions by controlling the emergence of the connections to the growing chain .these connections also lead to a finite limit for the mean chain length as the network size increases .we simulate synfire chain formation in recurrent networks of excitatory neurons .we model each neuron using a leaky integrate - and - fire ( lif ) model .the neurons interact via pulse conductances , and they receive dominant feedback inhibition .they are also driven by upstream regions that we do not simulate , but instead model as independent , fluctuating external inputs .synaptic weights between neurons are modified according to an stdp rule .the details of modeling can be found in the appendix .our model choices were dictated by necessity to simulate the network dynamics quickly .a large number of training trials ( to ) are required for synfire chain formation in our model , and many training sessions are needed to construct the chain length distributions for a range of model parameters .therefore it was necessary that our simulation algorithm be efficient .we modified a fast , event - driven algorithm that had been developed to generate activity of pulse - coupled neurons that are targeted by a fluctuating external input .when the external input is modeled by gaussian white noise ( gwn ) , one can numerically solve the fokker - planck equation , store particular solutions in `` lookup tables '' and sample them during the network simulation to generate spike times .the steps of the algorithm are detailed in the appendix .the computational advantage of using pulse - coupled neurons is that the response of the membrane potential is instantaneous and can be calculated exactly .the time - evolution of the membrane potential between spikes is calculated from the lookup tables . by our measurements ,this algorithm is up to 150 times faster than simulating with 4th - order runge - kutta method . 
instead of scaling with number of timesteps ,the simulation time scales with number of spikes , which results in increased simulation speed .two differences distinguish our simulation algorithm from that reported in .first , we simulate conductance - based neurons instead of current - based neurons .second , instead of a spike latency , we impose a time resolution on the arrival times of spikes , as suggested in .algorithmically , imposing a time resolution on the spike arrival times means that , instead of the neuron with the single earliest predicted time emitting a spike , any neurons that spike within an interval of the earliest predicted spike time effectively spike together .the arrival time of the spikes at synapses is picked to be at the end of the resolution interval .this method has the same effect as a random latency , allowing neurons to cooperate to excite common targets , but it requires no additional queuing of events , which can be computationally intensive and slows simulation considerably .the population of neurons we simulate make excitatory connections to each other .however , we assume a population of interneurons targets the excitatory population , and these neurons reliably spike immediately when the excitatory neurons spike .all of the neurons in the excitatory population are inhibited at the end of the resolution interval .near - global inhibition is observed in neocortical circuits and in the songbird premotor area hvc .we do not simulate interneuron activity in order to conserve computational resources .the inhibition is stronger when more neurons spike within the spike resolution window , but we put a constraint on it based on the assumption that there is a finite size to the interneuron population targeting the simulated neurons .details are left to the appendix .the synaptic weight between each pair of simulated neurons is modified based on the stdp rule ( details are in the appendix ) .three additional synaptic plasticity rules are implemented to deter unchecked synaptic strengthening that stdp alone can lead to , and to ensure stable synaptic weight distribution when the network is in the state of spontaneous activity with no training stimulations ._ activation threshold : _ silent synapses are those with no post synaptic ampa receptors . at physiological conditions, these synapses do not produce responses in the postsynaptic neuron , hence ltp can activate silent synapses to become functional synapses ; conversely , ltd can silence active synapses .abundant especially during the development , silent synapses allow the possibility of sculpting wide variety of neural circuits through neural activity . we model silencing and activation of synapses by thresholding the synaptic weights .if synaptic weight from neuron onto neuron , , grows larger than , then it is active and evokes a response from its target ; otherwise it is silent and behaves as if it has zero weight . for our simulations , we pick . regardless of whether a synapse is silent or active , it obeys the stdp rule .our results do not depend on synaptic depression acting on silent synapses. active synaptic connectivity _ in vivo _ is sparse with a single neuron connecting to less than 10% of its neighbors .the activation threshold directly avoids densely connected network states by deactivating all sufficiently weak connections .a synapse between any pair of neurons can be activated , so activity can drive the development of any possible synaptic connectivity . 
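A sketch of how the activation threshold can be applied when delivering synaptic events is given below: every weight evolves under STDP, but only weights above the threshold contribute to the postsynaptic conductance. The variable names, the threshold value, and the convention that row i of the weight matrix holds the outgoing weights of neuron i are our choices for illustration, not values from the paper's appendix.

```python
import numpy as np

W_ACT = 0.02          # activation threshold (placeholder value)

def effective_weights(W):
    """Weights seen by the network dynamics: silent synapses act as zero."""
    return np.where(W >= W_ACT, W, 0.0)

def deliver_spike(g_exc, W, pre):
    """Add the conductance kick of presynaptic neuron `pre` to all targets,
    using only active (supra-threshold) synapses."""
    g_exc += effective_weights(W)[pre, :]
    return g_exc
```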
in other words , there is no _ a priori _ restriction on how neurons can be connected after training .parameters are selected such that the connectivity remains within the sparseness bounds observed experimentally .a common modeling approach for avoiding dense connectivity is to specify a sparse connectivity between neurons and allow only synaptic weights of these connections to change . in this strategyno new connections can form , and the effects of training on the connectivity is much more restricted than in our model ._ potentiation decay : _ in addition to the activation threshold , a potentiation decay is applied to the weights of all synapses , amounting to a slow memory leak within the system .the decay is activity - independent and is implemented as a rescaling of all synaptic weights , where , as in previous phenomenological synaptic growth models .long - term potentiation of synapses usually decays to baseline within three hours , which is called the early ltp ( e - ltp ) .we assume that the reduction of the synaptic weight during the trial time is insignificant given the typical three - hour time scale of potentiation decay ; therefore , weight rescaling is applied between consecutive two - second training trials , but not during the trial interval .implementing the rescaling during trials is computationally intensive and produces no observable differences . in this simplified model of potentiation decay, all synaptic weights are subjected to the weight rescaling between each trial , including weights of silent synapses .the decay of the silent synapses is important for our model .consider what happens to synaptic weights that become deactivated due to either potentiation decay or synaptic depression . if synaptic depression were the only mechanism that modifies the weights of silent synapses , then a deactivated weight may remain close to the activation threshold .this would lead to an accumulation of weights near the threshold that require a small increase in order to become active .on the other hand , if deactivated synapses have their weights immediately set to zero after deactivation , then a synapse that is consistently active , but does not evoke a spike from its target because of noisy fluctuations over a few consecutive training trials , is immediately destroyed . choosing to implement a decay of the silent synaptic weightsis proposed as a balanced solution to these two scenarios .the decay of silent synapses can be related to gradual elimination of spines observed on dendrites .its biological mechanism is most likely different from the decay of e - ltp .we apply the same decay rule for both silent and active synapses for the sake of simplicity .the details of how silent synapses decay do not matter .the functional role of the potentiation decay in our growth simulations is to regulate runaway synaptic growth .we will demonstrate that , in combination , the activation threshold and the potentiation decay have a stabilizing effect on the growth of the network ._ axon remodeling : _ synaptic weights are clipped if they are strengthened above a threshold ( see the appendix ) . however , this does not limit the number of strong synapses that approach the strongest allowed weight .another mechanism , axon remodeling , regulates the number of strong synapses a neuron can maintain with limited resources available . 
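A minimal implementation of the between-trial rescaling discussed above is shown below: every weight, silent or active, is multiplied by a decay factor slightly below one, and, anticipating the remodeling rule described next, supersynapses of saturated neurons decay more slowly. The decay constants and thresholds are illustrative placeholders.

```python
import numpy as np

BETA = 0.995          # per-trial decay factor for ordinary synapses (placeholder)
BETA_SUPER = 0.999    # slower decay for supersynapses of saturated neurons (placeholder)
W_SUPER = 0.2         # supersynapse threshold (placeholder)

def apply_potentiation_decay(W, saturated):
    """Rescale all weights between trials.  `W` is the weight matrix (row i:
    outgoing weights of neuron i); `saturated` is a boolean array marking
    presynaptic neurons that have reached their supersynapse limit."""
    slow = saturated[:, None] & (W >= W_SUPER)
    W[slow] *= BETA_SUPER
    W[~slow] *= BETA
    return W
```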
limiting the number of strong synapses stabilizes an emerging synaptic topology of strong synapses .if axon remodeling were not imposed on each neuron s axonal tree , a well - connected neuron would continue to accrue targets .neurons _ in vivo _ are observed sending out many axons during development , then retracting most and maintaining only the strongest .a small number of strong synapses in a network have also been measured in experiments .a slower potentiation decay is also applied to the strongest synapses , resulting in further stabilization .axon remodeling is implemented with the following rules , which are nearly identical to those in . 1 .a second threshold , , in addition to the active threshold , is introduced within the range of allowed synaptic strengths .weights greater than this value characterize a strong active synapse , which we deem a _supersynapse_. supersynapses elicit spikes reliably from a target despite the noisy fluctuations of the membrane potential .the supersynapse threshold is greater than the active threshold : .a limit , , is imposed on the number of neurons that a presynaptic neuron contacts along supersynapses .this is the maximum number of axons a neuron can maintain with its limited resources . when this number of supersynapses is attained ,the neuron is said to be `` saturated . ''once a neuron is saturated , the stdp rule is only applied to its supersynapses . after saturationall synaptic weights continue to decay .the potentiation decay reduces the weights of non - supersynapses , and as a result they will eventually approach zero with no opportunity to be potentiated unless the neuron de - saturates .supersynapses are reinforced by repeated ltp ; without regular reinforcement , potentiation decay can cause de - saturation and all connections will undergo stdp again . if de - saturation occurs frequently , no stable synaptic structure emerges .high membrane potential variability reduces the frequency of ltp at a supersynapse because higher noise reduces reliability of a supersynapse to produce a spike from its target . in order to ensure ltpoccurs frequently enough to overcome the potentiation decay , in all simulations , we apply a slower potentiation decay to supersynapses of a saturated neuron .this corresponds to the slower decay of the late phase ltp ( l - ltp ) compared to e - ltp .axon remodeling and the synaptic cap are non - essential to stability of the weight distribution of a network before training ; they are only necessary when the network is presented with a stimulus . 
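The remodeling rules listed above can be condensed into a small bookkeeping step run after each weight update, as sketched here. The thresholds, the weight cap, and the supersynapse limit are placeholder values chosen for illustration.

```python
import numpy as np

W_SUPER, W_MAX, M_SUPER = 0.2, 0.3, 10     # placeholder parameters

def remodel(W, saturated):
    """Clip weights, detect saturated neurons, and prune their weak synapses."""
    np.clip(W, 0.0, W_MAX, out=W)
    n_super = (W >= W_SUPER).sum(axis=1)    # supersynapses per presynaptic neuron
    saturated[:] = n_super >= M_SUPER
    # a saturated neuron keeps only its supersynapses; all weaker ones are pruned
    prune = saturated[:, None] & (W < W_SUPER)
    W[prune] = 0.0
    return W, saturated

# subsequent STDP updates would be applied only to the supersynapses of saturated
# rows; de-saturation occurs if the slow decay pushes n_super below M_SUPER again.
```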
in the next section, the roles of of axon remodeling will be further articulated where the training regimen is described .network training is broken into a series of identical trials .supersynapses can emerge within a network as it is presented repeatedly with a training stimulus .we model this stimulus with a short , strong excitatory current onto a small subset of training neurons .the training excitation originates in an upstream brain area , possibly one processing sensory stimuli .training continues until the number of supersynapses contained in the network stabilizes .a trial commences with the presentation of the training signal to the training neurons .the signal is modeled by a strong external drive biasing the training neurons to spike within several milliseconds of the beginning of the trial .after 8 milliseconds the driving current onto the training neurons returns to its baseline value .the spontaneous activity and synaptic weight dynamics are simulated for one second after the training signal is withdrawn .the trial ends after this specified trial time and an inter - trial interval commences , which we do not simulate . during this period , which is assumed much longer than one second, synaptic weights are reduced by the potentiation decay factor and the membrane potentials are randomized .training is repeated until the number of supersynapses reaches a stable value for 2,500 trials ; this may take as few as 5,000 trials up to 100,000 trials depending on the size of the network and learning scale factor ( see appendix ) .we will show in the next section that the training neurons form the seed for development of a synaptic chain of neurons connected by supersynapses .synfire chain growth in response to training is governed by stochastic selection of post synaptic targets until a loop forms and the growth stops .repeated stimulations of the training neurons change the synaptic weight distribution and produce strong , stable synaptic chain connectivity .chain growth emanates from the training neurons .neurons that spontaneously spike shortly after the training neurons may be targeted by the training neurons due to the stdp rule .since the training neurons spike synchronously , they make convergent connections to the same set of neurons .subsequent training strengthens these connections until the the synapses become supersynapses .once supersynapses develop , reliable spikes can be evoked in these targets on nearly every trial .when this is the case , we say that the targets have been _ recruited_. the cooperation via the convergent synapses is important for the targets to overcome membrane noise .axon remodeling restricts the number of supersynapses that one neuron can maintain .consequently , the number of recruited neurons is close to regardless of the number of the training neurons , although some fluctuations exist due to the noise in the recruiting process .the recruitment process continues as the second group accrues their own targets via the same cooperative process .new groups are recruited until previously - recruited neurons are recruited again , forming a closed loop .this stochastic , iterative process yields stable synfire topologies that produce long , stereotypical sequences of spikes .the chains consist of an introductory sequence that begins with the training neurons , which feeds a loop of strong synaptic connectivity , examples of which are displayed in fig . [ chain_sam ] . 
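The training protocol just described can be summarized by the following loop. The methods it calls (simulate_trial, stdp_update, remodel, and so on) are stand-ins for the event-driven simulation and plasticity rules of the appendix and are not reproduced here; only the trial structure, durations, and stopping criterion follow the text.

```python
def train(network, n_max_trials=100_000, stable_window=2_500):
    history = []
    for trial in range(n_max_trials):
        network.randomize_membrane_potentials()
        network.stimulate_training_neurons(duration_ms=8.0)   # strong external drive
        spikes = network.simulate_trial(duration_ms=1_000.0)  # spontaneous activity
        network.stdp_update(spikes)
        network.remodel()                                     # axon remodeling / pruning
        network.apply_potentiation_decay()                    # between-trial rescaling
        history.append(network.count_supersynapses())
        # stop once the supersynapse count has been stable for `stable_window` trials
        if len(history) > stable_window and len(set(history[-stable_window:])) == 1:
            break
    return history
```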
the network structure is clearly reflected in raster plots of the activity of the population during a typical trial after the chain is fully formed , as shown in fig .[ raster ] .the length of the chain formed by this process varies from trial to trial , and depends on the values of synaptic plasticity parameters and the size of the network ( fig .[ fig_synfire_sims ] ) .we find that the potentiation decay is the crucial model parameter that predicts the mean and variance of the distribution .we present a simple , analytically solvable , `` lottery '' chain - growth model to explain how the potentiation decay controls the characteristics of the length distributions .our lottery model predicts that the mean chain length approaches a finite value as network size is increased .this simple model reproduces what is observed in the full , simulated model .the mean chain length in the lottery model is controlled by a small parameter quantifying the likelihood that a neuron that is recruited by the chain already targets the chain .we observe these preferential connections from unrecruited neurons onto recruited neurons in the full simulations , and we describe why these connections appear .what these results show is that training alters not only the network topology among neurons recruited to the chain , but also the connections from all other neurons to the neurons in a chain .this `` global '' response of the connectivity to an excitation targeting only a small subset of the population is indicative of a synergistic relationship between the spike activity on the network and the underlying topology .chain growth is initiated by stimulating the training neurons . before training , the initial values of synaptic weightsare drawn from a particular distribution , which we refer to as a _dynamic ground state_. around 2 - 10% of the synapses are active .this connection probability of the active synapses is a generally accepted range for cortical networks .spontaneous activity occurs in the network due to the noise and the active connections .the rules that govern synaptic dynamics ( stdp , active threshold , potentiation decay , etc . )yield a distribution of synaptic weights that is statistically stationary as the population is spontaneously spiking .the dynamic ground state is stationary due to the homeostatic effect of the potentiation decay and the activation threshold . if not for the interplay between these two plasticity rules , supersynapses would spontaneously emerge due to positive feedback across the strongest synapses .instead , a unimodal distribution of synaptic weights emerges . in fig .[ necessary_decay ] we compare a synaptic weight distribution when potentiation decay is acting on the synaptic weights ( fig . [ necessary_decay]a ) to when it is not ( fig .[ necessary_decay]b ) .a particular synaptic weight in a dynamic ground state takes a random walk with steps generated by the stdp rules and the potentiation decay .stability relies on potentiation decay that prevent synaptic weights from diffusing to large values .synapses driven below the activation threshold by the potentiation decay and ltd can be reactivated by ltp . 
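The stabilizing interplay of bounded STDP-like weight steps and the multiplicative potentiation decay can be illustrated with a toy random walk of an ensemble of independent weights, which settles into a stationary, unimodal distribution rather than running away. The step sizes, probabilities, and decay rate below are arbitrary and only meant to mimic the qualitative mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
W_MAX, DECAY = 1.0, 0.99
LTP, LTD, P_LTP = 0.05, 0.04, 0.5      # arbitrary potentiation/depression steps

w = np.full(10_000, 0.1)               # ensemble of independent synaptic weights
for trial in range(5_000):
    kick = np.where(rng.random(w.size) < P_LTP, LTP, -LTD)
    w = np.clip(w + kick, 0.0, W_MAX)  # STDP-like step, bounded weights
    w *= DECAY                         # potentiation decay between trials

# the histogram of `w` is unimodal and stationary: running the loop longer
# does not shift it further, mimicking the dynamic ground state
hist, edges = np.histogram(w, bins=50, range=(0.0, W_MAX))
```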
in a dynamic ground state, any neuron is connected to of the other neurons via active synapses at any given moment .the distribution of synaptic weights in the dynamic ground state is obtained by letting the weights evolve while simulating spontaneous activity without a training signal over a sufficent number of trials .these `` initialization '' trials are identical to the training trials except that the training neurons are not subjected to the focused strong excitation .neurons are driven with noisy excitation over two - second trials resulting in spontaneous activity while the synaptic weights evolve according to the plasticity rules .after several hundred initialization trials , the stationary weight distribution emerges .we identify this network state when the number of active synapses in the network reaches a stable value .a dynamic ground state does not emerge for all sets of the synaptic plasticity parameters ( details in the appendix ) .for example , if the maximum possible potentiation of the synaptic weight is small compared to the activation threshold and the potentiation decay is fast , the stationary state may contain only a few , short - lived active synapses because newly activated synapses are driven below the threshold before they can be further strengthened .the opposite situation is also possible when the potentiation decay is too slow to admit a stable weight distribution in which 2 - 10% of the synapses are activated .finding the full parameter space for a stable dynamic ground state requires a parameter search .however , a working combination can be found by setting the maximum potentiation slightly larger than the activation threshold , and the potentiation decay rate fast enough to deactivate a newly activated synapse within 10 s of trials .these parameters produce a stationary distribution in which the number of active synapses is likely smaller than 2% .the number of the active synapses can be increased by decreasing the maximum potentiation strength and the potentiation decay rate from this point .when a training stimulus is presented repeatedly to a network in the dynamic ground state , the stationary distribution of synaptic strengths is disturbed .this response of the network to the training stimulus drives emergence of the synfire chain within the initially disordered ground state network . during the training ,the neurons recruited into the emerging chain have different synaptic strength distributions compared to those unrecruited ( or `` pool '' ) neurons . to illustrate why this is so , consider specifically the training neurons as they contact neurons in the pool .because the network is initialized in the dynamic ground state before training , all neurons , including the training neurons , have the same initial distribution of synaptic strengths onto their targets. however , when training begins , the training neurons spike at the beginning of each trial with high probability , and the synaptic weights from the training neurons onto the pool neurons are more likely to increase because the training neurons spike reliably every trial .the weights approach a new equilibrium that has higher average weight than the dynamic ground state .this is shown in fig .[ seq_shift ] . 
the positive shift of the average strength of a synapse targeting the pool is a result of spiking with near - certainty every trial .as the distribution of weights of synapses from training neurons onto the pool shifts positive , the potentiation decay is not sufficient to deter rapid growth from positive feedback .consequently supersynapses emerge from the training neurons onto pool neurons .these synapses tend to be convergent since the convergence allows the training neurons to evoke reliably a spike from a shared target .furthermore , the training neurons that do not share the target are likely to develop connections onto a shared target since the shared target spikes frequently after the training neurons , which spike synchronously at the start of each trial . as the training progress , the training neurons accrue supersynapses onto shared targets , with their strengths capped at .when the number of supersynases from each training neuron hits the limit imposed by the axon remodeling rule , all weak synapses are pruned and decay away due to the potentiation decay .training neurons maintain only supersynapses .consequently , no more targets are recruited , and the second group is formed . the number of recruited neurons is close to because of the convergence . because of the strong , convergent connections from the training neurons , the second group spikes reliably in each trial .they accrue their own targets in the pool following the same process as the training neurons .the result is a positive shift of the distribution of synaptic weights away from the stationary distribution of the dynamic ground state . like the training neurons , the second group of neurons can eventually saturate by accruing shared targets within the pool until axon remodeling prevents further growth .the targets of the second group form a third group whose distribution of synaptic weights responds similarly .iterations of this recruitment process result in emergence of a synfire chain within the network .as the chain network develops , spikes propagate along the chain when it is initiated by the training signal , and the ordering and timing of the spikes is almost the same across trials .a sequence may also be ignited by spontaneous activity , which we call _ re - ignition_. this can be observed directly in raster plots of spontaneous activity in networks with developing chains .an example is shown in fig .[ reignition_prob]a .spontaneous activity can initiate spike propagation from a random point in the chain . to quantify this observation , we simulated spontaneous activity of a network in which a subset of neurons are wired into a synfire chain and all other connections are randomly set ( synaptic plasticity was suppressed ) .we measured the spontaneous firing rates of all neurons .as shown in fig .[ reignition_prob]b , the downstream neurons in the chain have higher firing rates than the upstream neurons .this is because spikes reliably propagate down the chain wherever the re - ignition starts .the linear increase of the firing rates down the chain suggests that the probability of starting re - ignition is uniform across the chain .re - ignition has direct impact on the distribution of synaptic weights of the network .after multiple re - ignitions , the number of neurons targeting the chain increases .this is shown in fig .[ reignition_prob]c .pool neurons that are spontaneously active immediately before chain re - ignition have increased likelihood of targeting the chain . 
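The linear firing-rate profile along the chain follows directly from the assumption that re-ignition starts at a uniformly chosen group and then propagates reliably to the end: group i spikes for every ignition that starts at or upstream of i. A short check of this expectation, with arbitrary rate values:

```python
import numpy as np

L = 30              # number of groups in the partial chain (arbitrary)
r_ignite = 0.2      # re-ignition events per unit time (arbitrary)
r_spont = 0.5       # background spontaneous rate (arbitrary)

# group i participates in ignitions starting at any of the i groups up to and
# including itself, so its rate grows linearly with its position in the chain
rate = r_spont + r_ignite * np.arange(1, L + 1) / L
```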
once these synapses from pool to chainare activated there is a decreased likelihood of ltd events on these synapses , since the strong connectivity within the chain makes it more likely for activity to remain on the chain after chain neurons are spontaneously active .hence , pool neurons tend to connect to a developing synfire chain .this positive shift of the weights from pool neurons onto neurons in the chain plays a role in the closure of the chain .once these preferential connections from the pool to the chain become numerous , it becomes likely that the pool neurons newly recruited into the chain are already connected to the chain , forming a loop that stops the chain growth .in fig.[reignition_prob]c it is clear that for faster potentiation decay , the total synaptic strength targeting the chain is smaller , implying that the stronger decay is more effective at reducing the strengths of pool neurons targeting the chain .chains recruit more groups and produce longer sequences if there are fewer pool neurons preferentially targeting the chain , which can be facilitated by strong decay .the length distributions reflect this association ( fig .[ fig_synfire_sims]a ) : for slower potentiation decay , chains tend to be shorter with a smaller variance , compared to chains subject to stronger decay .to test this association , chain length distributions are generated by a simple lottery growth model .we model chain growth as a random process : neurons in the chain are drawn sequentially from a lottery of all neurons with equal probability . for simplicity , we assume that there is one neuron in each group in the chain , which is equivalent to setting and using one neuron in the training set .the number of training neuron is also 1 . at the iterationthere are neurons in the chain out of the total network size .this simple model allows us to derive the chain length distribution analytically .we first consider the case that chain closes when a previously drawn neuron is re - drawn the second time , forming a loop in the chain and stopping its growth .the probability that the neuron is drawn from the pool neurons and the chain does not close at length is the probability that it is re - drawn from the neurons in the chain and the chain closes at this iteration is . using these conditional probabilities ,the probability of a mature chain with length is given by \prod_{i=1}^{a-1 } p(i+1|i),\ ] ] which , plugging in eq .( [ cond_p_rr ] ) , becomes after applying stirling s approximation . to calculate the mean chain length as a function of network size , we expand to lowest order in and approximate the sum as an integral to find as , the mean chain length is on the order of and is unbounded .this is because the chance of re - drawing neurons in the chain is zero when .we now consider the case that chain also closes when a pool neuron preferentially connected to the chain is drawn , in addition to re - drawing a neuron in the chain .as we have shown in the previous section , slower potentiation decay enhances the probability of preferential targeting of the chain and reduces mean chain length ( fig . [fig_synfire_sims]a ) . to modelthis effect , we introduce a parameter , which is the probability that a pool neuron is preferentially targeting one neuron in the chain . 
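Before adding the preferential-targeting parameter just introduced, the pure re-drawing version of the lottery can be checked with a few lines of Monte Carlo: the chain length is the number of distinct neurons drawn before any neuron is drawn a second time, and its mean grows like the square root of the network size (up to order-one corrections), i.e. without bound.

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_length_redraw(N):
    """Draw neurons uniformly until one is drawn a second time; the chain
    length is the number of distinct neurons drawn up to that point."""
    seen = set()
    while True:
        k = int(rng.integers(N))
        if k in seen:
            return len(seen)
        seen.add(k)

N = 1000
lengths = [chain_length_redraw(N) for _ in range(20_000)]
print(np.mean(lengths), np.sqrt(np.pi * N / 2))   # agree to within a few percent
```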
the probability that the neuron is drawn from the pool anddoes not close the chain is there are two scenarios in which the chain ends with neurons .one , when the chain has length , it can recruit a neuron from the pool of neurons that has at least one connection onto a chain neuron .two , when the chain has length , it can recruit one of the neurons above it in the chain .therefore , the probability of chain closes at length has two terms : q(a-1|a-2) ... q(2|1 ) + \frac{a-1}{n-1 } q(a|a-1) ... q(2|1).\ ] ] in the first term , the quantity in the brackets is the probability of selecting a pool neuron that has at least one connection onto a chain neuron .equation ( [ eqn - q - a-2 ] ) can be re - written into the form of eq .( [ gen_prob_a ] ) , with the conditional probability of chain not closing at length modified to given the above conditional probability , the probability distribution of chain lengths is then : \\ & \times p_a^{(rr ) } \frac{(1-p_0)^{\frac{1}{2 } a ( a-1)}}{a-1 } \end{split}\ ] ] where is eq .( [ rand_recruit_stir ] ) , the probability of chain length assuming no preferential targeting . equation ( [ gen_prob_a2 ] ) can be simplified in the large limit and moments of this distributions can be computed . however , the expressions are too onerous to print here .a notable feature is that the mean of this distribution approaches a finite limit as . in this limit , andthe probability of chain closing at length becomes according to eq .( [ gen_prob_a ] ) .the mean chain length is ^{(k+1)k } = 1 + \frac{\vartheta_2(z=0,(1-p_0)^{1/2})}{2 ( 1-p_0)^{1/8}},\ ] ] where is the jacobi theta function .this is a finite number .since every neuron in the pool has a non - zero probability of preferential targeting the chain , the mean chain length does not diverge even for . in fig .[ fig_models]a we display several chain length distributions for different . as is increased , the distribution shifts toward shorter chains and becomes sharper , indicating that the chains close at an increasingly predictable length .this trend corresponds to the sharpening of the length distribution of synfire chains as the potentiation decay is slowed , shown in fig.[fig_synfire_sims]a . to confirm the model predictionthat the mean chain length approaches an asymptotic value even as the network size grows very large , we performed a set of simulations with different network sizes .we set and the number of training neurons to 1 to make the simulations directly comparable to the model . 
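the recruitment lottery described above is simple enough to simulate directly. the sketch below draws chain lengths under one explicit reading of the model's conventions (the text is ambiguous about exactly how the loop-closing recruit is counted, so the simulated mean may differ from the quoted formulas by roughly one group), and compares the simulated mean with the large-network series 1 + sum_k (1-p0)^(k(k+1)/2), which is equivalent to the jacobi-theta expression quoted above. the network size, the p0 values and the sample counts are arbitrary illustration choices, and the function names are ours, not the paper's.

```python
import numpy as np

def sample_chain_length(n, p0, rng):
    """One realization of the lottery growth model, under one reading of its
    conventions: when the chain has length L, the next draw (uniform over the
    other n-1 neurons) closes the chain if it hits one of the L-1 upstream
    chain neurons, or if the drawn pool neuron already targets the chain
    (probability 1 - (1-p0)**L); otherwise the chain grows by one neuron."""
    length = 1                      # a single training neuron seeds the chain
    while True:
        if rng.random() < (length - 1) / (n - 1):       # re-drew a chain neuron
            return length
        if rng.random() < 1.0 - (1.0 - p0) ** length:   # preferential pool neuron
            return length
        length += 1

def mean_length_large_n(p0, kmax=500):
    """Large-network mean, 1 + sum_k (1-p0)^(k(k+1)/2); this series is
    equivalent to the Jacobi-theta expression quoted in the text."""
    k = np.arange(1, kmax + 1)
    return 1.0 + np.sum((1.0 - p0) ** (k * (k + 1) / 2.0))

rng = np.random.default_rng(0)
n = 2500
# classic first-repeat ("birthday") scale, the reference for the p0 = 0 case
print("sqrt(pi*n/2) =", round(np.sqrt(np.pi * n / 2), 1))
for p0 in (0.0, 0.001, 0.005):
    lengths = [sample_chain_length(n, p0, rng) for _ in range(20000)]
    line = f"p0 = {p0}: simulated mean = {np.mean(lengths):6.1f}, std = {np.std(lengths):5.1f}"
    if p0 > 0:
        line += f", large-n series mean = {mean_length_large_n(p0):6.1f}"
    print(line)
```

for p0 = 0 the simulated mean grows with the network size on the first-repeat scale, while for any p0 > 0 it stays close to the finite large-network value; the small shortfall of the finite-n simulation relative to the series reflects the extra closure route through re-drawn chain neurons, mirroring the saturation discussed below.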
since neurons can not cooperate , the variance of the gwn used in the simulations was reduced to .also the ltd time constant was set to ( see below ) .the potentiation decay was kept constant .the mean chain lengths in the simulations are well fit with the model using a single value of , and show clear sign of saturation as the network size increases ( fig.[fig_models]b ) .this trend is also observed in the fully complex , cooperative simulations which produce synfire chains .figure [ fig_synfire_sims]b is indicative of an upper bound on the length of the emergent synfire chain as the network size is increased .an minor effect omitted from the lottery growth model that also contributes to the shape of the length distribution is the ltd window function ( see eq .( [ ltd ] ) in the appendix ) .the width of the window controlled by sets a soft minimum on sequence length .this effect was mentioned in ; here we show more detailed measurements in fig .[ fig_ltd_shift ] .the effect can be attributed to reliable propagation of the training signal along a partially formed chain during each trial .a recruited neuron may target an upstream chain neuron directly or by targeting a pool neuron that is targeting the chain , contributing to the likelihood of chain closure .however , during each training trial upstream chain neurons spike before the newly - recruited neuron .therefore , the synapse onto the targeted neuron is weakened by ltd .if the temporal distance from the spike of the targeted neuron to that of the recruited neuron falls within the ltd window , the weight reduction quickly silences the synapse and any possibility of reconnection is eliminated . in the simulations that are used to validate the growth model ( fig .[ fig_models ] ) , we used ms to minimize the ltd effect . besides ltd , there are other simplifications assumed by our growth model .a time - independent model parameter describes the probability that a pool neuron targets one of the entrained sequence neurons and that the chain closes on itself by recruiting such a neuron .this assumption ignores the non - equilibrium dynamics of the weight distribution as the chain recruits additional neurons .re - ignition of the partial chain precedes development of connections that preferentially target the chain .this is a random event that occurs at finite intervals , implying that preferential connectivity has an associated time scale depending on the probability of a re - ignition event .the response time of preferential targeting can be seen directly in fig .[ reignition_prob ] , showing that the sum of weights targeting the chain , averaged over all members , saturates only after a number of groups have formed .the above discussion of the effect of the ltd window function also indicates that is not uniform over the length of the partial chain .in fact , it is effectively zero for neurons immediately upstream from the end of the chain . 
furthermore , because chain re - ignition is a random process driven by spontaneous activity , fluctuations in the strength and number of synapses targeting the partial chain may contribute to the probability of closure .a constant ignores such fluctuations .however , the model still gives reasonably accurate predictions .in large recurrent networks with stdp , axon - remodeling and an activity - independent potentiation decay of synapses , we observed emergence of long , stereotypical sequences of spikes .the sequences are produced by stable synfire chain topologies that self - organize via a stochastic growth process .we studied the distributions of synfire chain lengths and concluded that the rate of potentiation decay in our synaptic plasticity model primarily controls the shape of the distributions .the chains develop in response to a stimulus presented to the network in a dynamic ground state , in which the distribution of synaptic weights is invariant to synaptic modifications due to spontaneous activity on the network .this network state would not exist without the potentiation decay .synfire chain growth in our network model results from a global response of the connectivity among the neurons to a stimulus that targets only a small subset of the population , the training neurons .repeated stimulations of the training neurons leads to iterative growth of a synfire chain embedded in the network .this result was expected based on previous work .however , what was not expected , but what we observed , is global response of the connectivity as the chain develops . as the sequence begins to emerge ,neurons in the pool are increasingly likely to target the neurons in the chain .we suggest this process of targeting the strongly connected neurons in the chain is loosely analogous to preferential attachment in other complex networks .in contrast to other systems with preferential attachment , a scale - free distribution does not emerge from training because of the topological constraint imposed on the network by axon remodeling .the complex response of synapses throughout the network illustrates co - evolution of spike activity ( the emerging sequence ) and synaptic topology ( preferential targeting ) .we expect this observation generalizes to other recurrent network models with stdp in which spike sequences emerge .since pre - post synaptic strengthening is a common feature of stdp models , other neurons will attach to sequence members when a sequence is initiated by spontaneous activity .we believe our insight may explain the observation of neuron clustering and small - world network degree distributions in other studies where the number of strong connections a neuron can make is unconstrained . the coevolution of the network activity and network connectivity in response to an external stimulus is reflected in the spectrum of length distributions of the synfire chains . when the potentiation decay is too slow to sufficiently reduce the weights of connections from pool neurons onto a partially formed chain , the variation of chain lengths is reduced .when potentiation decay is fast , the number of preferential connections is reduced and the synfire chain has an opportunity to grow longer .we contrast our mechanism for synfire chain development with other studies in which chains emerge in a recurrent network , such as in fiete , et .al . . 
in fiete s model ,the synaptic plasticity rules are designed in such a way that each neuron ( or group of neurons receiving correlated external input ) must connect to one other neuron ( or neuron group ) that is not already targeted .the selection of target is random , which leads to multiple closed loops where every neuron ( or neuron group ) is incorporated into a loop .the distribution of chain lengths in this model follows a power law .hence , short chains are more numerous than long chains . in contrast , the distribution of chain lengths in our model is close to a skewed gaussian .there are typical chain lengths , and short chains are rare . in our model ,not every neuron is part of the chain .we introduced a growth model that incorporates preferential targeting to confirm the general form of the length distributions of the chains .the growth model is verified with corresponding simulations of networks producing single - neuron chains .the model illustrates tuning of the length distribution through the potentiation decay rate .furthermore , it predicts that the mean chain length approaches a constant in the limit of large network size .simulations of the more complex process of synfire chain growth confirm the same saturation effect ( fig .[ fig_synfire_sims ] ) .this is in contrast to the case of , for which the mean length diverges as .any small preferential targeting probability limits the mean length as .this result indicates that chain size is bounded softly , even in the limit of very large networks. it would be interesting to confirm this plateau effect in recurrent networks larger than those we were able to simulate . in at least one case we know of ,a much larger network has been simulated .however , chains did not emerge upon externally stimulating the network in this study .contrasting the result of this study with our own , we have validated the iterative recruitment of synfire chain groups using a power - law stdp rule instead of the additive ltp / multiplicative ltd model ( see eqs .( [ ltp ] ) and ( [ ltd ] ) ) introduced by .additionally , we observe the growth process is unaffected when setting the number of allowed strong connections to larger values ( 50 instead of 5 used in the simulations presented in the results ) , demonstrated also in .key differences that may account for the emergence of chains in our model are , dually , the vast restructuring of the network connections allowed by imposing an activation threshold on each synaptic weight , and also restricting the influence of a single neuron by imposing the axon - remodeling rule . 
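to make the synaptic rules referred to in this comparison concrete, here is a minimal sketch of a weight update that combines pair-based stdp (additive ltp, multiplicative ltd), an activation threshold below which a synapse is silent, an activity-independent potentiation decay, and axon remodeling that prunes weak outputs once a neuron holds enough supersynapses. the constants, the exact functional forms, and the way supersynapses are exempted from the decay are simplifying assumptions for illustration, not the paper's equations (ltp) and (ltd).

```python
import numpy as np

N        = 200
w_max    = 1.0
w_active = 0.05 * w_max      # activation threshold (assumed)
w_super  = 0.9 * w_max       # weights above this count as supersynapses (assumed)
n_super  = 5                 # max strong outgoing synapses per neuron (axon remodeling)
beta     = 0.95              # potentiation decay factor per decay interval
A_ltp, A_ltd = 0.02, 0.01    # STDP amplitudes (assumed)
tau_stdp = 20e-3             # STDP / LTD window time constant [s] (assumed)

rng = np.random.default_rng(0)
W = 0.2 * w_max * rng.random((N, N))          # W[i, j] is the synapse j -> i
np.fill_diagonal(W, 0.0)

def stdp_update(W, t_post, t_pre, i, j):
    """Update the synapse j -> i for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post: additive LTP, capped at w_max
        W[i, j] = min(W[i, j] + A_ltp * np.exp(-dt / tau_stdp), w_max)
    else:            # post before pre: multiplicative LTD
        W[i, j] *= 1.0 - A_ltd * np.exp(dt / tau_stdp)

def decay_and_remodel(W):
    """Potentiation decay plus axon remodeling, applied once per interval."""
    supers = W >= w_super
    W[~supers] *= beta        # supersynapses are exempted here for simplicity;
                              # in the text they also decay, only much more weakly
    for j in range(N):        # axon remodeling: once neuron j holds n_super
        if np.count_nonzero(supers[:, j]) >= n_super:   # strong outputs,
            W[~supers[:, j], j] = 0.0                   # prune its weak ones

def effective_weights(W):
    """Only synapses above the activation threshold contribute conductance."""
    return np.where(W >= w_active, W, 0.0)

stdp_update(W, t_post=0.012, t_pre=0.010, i=1, j=0)   # a causal pair: LTP
decay_and_remodel(W)
print("active synapses:", int((effective_weights(W) > 0).sum()))
```

the point of the sketch is the interplay emphasized in the text: the decay and the activation threshold keep the bulk of the weights in a fluctuating, statistically stationary pool, while only consistently potentiated synapses survive as supersynapses and trigger the pruning step.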
before training, networks are initialized to a dynamic ground state .the distribution of synaptic weights in a ground state network is stationary while neurons are spontaneously active .synapses are activated and silenced by random activity , and the average flux of weights across the active threshold is zero .our synaptic dynamics model is distinguished from others in two ways .one , we impose an activation threshold on a synaptic weight between every pair of neurons in the network .the picture that emerges is one in which neurons are actively connecting and disconnecting to other neurons in the population freely and on a relatively short time scale , minutes to hours .a number of imaging studies support this fast restructuring of network connectivity patterns .the time scale of emergence and subsequent withdrawl of dendritic spines can be as short as ten minutes and has been linked to synaptic activity .network rewiring is not permitted in all but a few network growth models that have been proposed .instead , it is much more common to select _ a priori _ the postsynaptic targets of each neuron .we argue that this modeling choice neglects an important feature of biological networks and places limits the emergent topology of the network .two , we subject all synapses to an activity - independent decay .we propose that this is related to the widely observed decay of the early phase long - term potentiation ( e - ltp ) .the time scale of the potentiation decay is several hours , much longer than the length of an individual training trial .the role of the potentiation decay is to avoid the accumulation of random potentiation of synaptic weights known to destabilize the network dynamics .the combination of these two rules yields a robust spectrum of stationary network states . as a final note, we compare our network rewiring rules with a similar approach taken by iglesias et al . in this study a recurrent network is initialized with all - to - all connectivity and network connections are eliminated via stdp .this modeling choice is also notable in that final network connectivity is not limited by only modifying weights between specific pairs of neurons .however , it is unclear from the results of this study whether sequences emerge after pruning .the stability of the ground state network indicates the rules of our model encode a homeostatic mechanism .several other models of homeostatic mechanisms have been proposed recently , including a sliding modification threshold based on post - synaptic firing history , a dependence on fluctuations of post - synaptic membrane potential and heterosynaptic plasticity that limits the total weight targeting a single neuron .an overlooked mechanism that we propose is activity - independent , multiplicative rescaling of weights , as we have implemented here , potentiation decay or decay of e - ltp .this form of ltp returns synaptic efficacies to the baseline within 3 hours and is independent of protein synthesis . only through repeated potentiation, e - ltp can turn into the late phase ltp ( l - ltp ) , which is maintained by protein synthesis and can last over days and weeks . 
in our model ,consistently potentiated synapses turn into supersynapses whose decay is much weaker than other active synapses .the supersynapses can be considered in the l - ltp state .emergence of the synfire chain relies on stabilization of the small percentage of supersynapses , while there are many weaker , more transient synapses .this long - tail synaptic weight distribution is consistant with physiological observation and appear in other theoretical studies .the functional role of e - ltp decay is largely ignored in the ltp literature .our model suggests that the e - ltp decay may be crucial in stabilizing synaptic weight distribution against random accumulations of ltp through spontaneous activity .moreover , the e - ltp decay can be important in the formation of functional networks through stdp .the time scale of the potentiation decay in our model is congruent with time scales of e - ltp decay , which can be seen with an order of magnitude estimate .if we assume learning occurs on a scale of tens of days and training trials are necessary for synaptic chains to crystalize , this places the time scale corresponding to our decay parameter , which we vary from 0.9 to 0.99 , in a range of and seconds .it will be interesting to test these ideas by manipulating the e - ltp decay constants in developing or learning brains _ in vivo_. a natural extension of this work is to construct a growth model in such a way that more complex asymptotic synaptic topologies emerge .many learned motor behaviors can be complex .for example , stochastic ordering of distinct elements of a behavior is one kind of behavioral complexity .the song of the bengalese finch can be described by this type of stochastic process .a single synfire chain can not capture this complexity since they produce only a single spike sequence ; multiple chain or branching chains would be necessary .one possible scenario for growing multiple chains in the same network would be to have distinct sets of training neurons .however , it turns out that preferential targeting of sequence members within the network prohibits the development of distinct synaptic structures .we implemented two distinct training groups of 5 neurons in a network of 2500 neurons .a training neuron set that is excited at the beginning of each learning trial . which of the two sets is selected at random with equal probability . in fig .[ two_training ] we display the resulting growth .the chains develop several groups individually , but ultimately they merge to a single chain .initially , the training neurons seed two disjoint sequences that recruit targets iteratively .emergence of synfire groups embeds two disjoint sequences in the network .these sequences are occasionally activated by spontaneous activity .therefore , neurons in the pool will target the partial chains preferentially . 
in particular , neurons at the end of one of the chainsmay target a neurons in the other chain with elevated probability .once one chain reliably activates neurons in the other chain , they will merge .merging occurs reliably each time we simulated two training groups .other growth mechanisms must be present , or the chains must be encoded within distinct populations of neurons .this conclusion is consistent with other studies , such as .in a recurrent network of neurons driven by high - frequency noisy input and synapses governed by a set of plasticity rules which include stdp , a potentiation decay and axon remodeling , we showed that neurons cooperate via convergent synapses to self - organize into a synfire chain characterized by a precisely - timed sequence .the network is initialized in a state characterized by a statistically stationary distribution of synaptic weights , invariant to network spontaneous activity .the combination of a potentiation decay plus an activation threshold imposed on the synaptic weights provides a homeostatic mechanism within the network . a small subset of neurons stimulated by a strong excitation forms the seed for recursive synaptic growth of synfire groups .during repeated presentations of this stimulus and emergence of the chain , we observe a complex response of the network connectivity that is reflected in the distribution of asymptotic chain lengths .we have demonstrated a clear example of interplay between neural activity and emergent synaptic topology in a developing network .the simulated networks consist of excitatory , conductance - based , pulse - coupled leaky integrate - and - fire ( lif ) neurons .the state of the neuron is described by a single variable , its membrane potential , which obeys where is synaptic input to the membrane .the lif neuron requires several parameters : leak reversal potential , membrane time constant , spike threshold and reset potential .if , the neuron emits a spike and is instantaneously reset to .the synaptic input to the neuron consists of three sources : a noisy external drive , an excitatory conductance , and an inhibitory conductance : \\ & + g_i^{(i)}(t ) \left [ e^{(i ) } -v_i \right ] .\end{split}\ ] ] we choose the reversal potentials and .the drive includes a gaussian white noise ( gwn ) term , obeying and with all higher order correlations equal to zero .the noise is uncorrelated across individual neurons .driving current is and .training neurons ( see methods ) are subjected to larger driving current ( 100 mv ) for the first 8ms of each training trial .the external drive originates in upstream regions , which we do not simulate .gaussian white noise is commonly employed to model this input .the conductances and take the form of sums of -functions centered on the spike times of neurons in the network .specifically , where is the excitatory synaptic weight from neuron onto and is the time of the spike of neuron .weight is zero if does not have a synapse onto . 
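a minimal numerical sketch of the conductance-based lif dynamics described here is given below. it uses an euler-maruyama step for the gaussian white noise drive and an alpha-function synaptic conductance. all numerical values (time constants, reversal potentials, thresholds, noise level and conductance amplitudes) are placeholder assumptions chosen only so that the sketch runs; they are not the parameter values used in the paper, which are not reproduced in this excerpt.

```python
import numpy as np

dt      = 0.1e-3           # Euler step [s]
tau_m   = 20e-3            # membrane time constant [s]        (assumed)
E_L     = -70e-3           # leak reversal potential [V]       (assumed)
V_th    = -54e-3           # spike threshold [V]               (assumed)
V_reset = -60e-3           # reset potential [V]               (assumed)
E_exc, E_inh = 0.0, -80e-3 # synaptic reversal potentials [V]  (assumed)
mu, sigma = 12e-3, 1e-3    # mean drive [V] and GWN intensity  (assumed)

def alpha_conductance(spike_times, g_peak, tau_s):
    """Sum of alpha functions centred on the presynaptic spike times."""
    spike_times = np.asarray(spike_times, dtype=float)
    def g(t):
        d = t - spike_times
        d = d[d > 0.0]
        return g_peak * np.sum((d / tau_s) * np.exp(1.0 - d / tau_s))
    return g

def simulate(T, g_exc, g_inh, rng):
    """Integrate tau_m dV/dt = E_L - V + I_syn(t), where I_syn is the noisy
    external drive plus the conductance terms; return the spike times."""
    V, spikes = E_L, []
    for k in range(int(T / dt)):
        t = k * dt
        drive = mu + sigma * rng.standard_normal() / np.sqrt(dt)   # GWN drive
        I_syn = drive + g_exc(t) * (E_exc - V) + g_inh(t) * (E_inh - V)
        V += dt * (E_L - V + I_syn) / tau_m
        if V >= V_th:                      # threshold crossing: spike and reset
            spikes.append(t)
            V = V_reset
    return spikes

rng = np.random.default_rng(0)
g_e = alpha_conductance([0.100, 0.105], g_peak=0.5, tau_s=5e-3)   # two input spikes
g_i = lambda t: 0.0                                               # no inhibition here
print("spikes in 1 s of simulated time:", len(simulate(1.0, g_e, g_i, rng)))
```

with the mean drive placed a few millivolts below threshold and a noise amplitude of the same order, the sketch produces irregular spontaneous spiking of the kind the network model relies on; in the full model the inhibitory conductance would be driven by the population activity rather than held at zero.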
the weight is confined to a bounded range.

fig.1. samples of asymptotic configurations of synapses in a 4000-neuron network that generate long, stereotypical sequences of spikes. the chains above have 25 (left) and 40 (right) synfire groups. the time between firing of adjacent groups is ms. the number of supersynapses one neuron can maintain is set to 5. the number of training neurons is 5. (a) the blowup depicts regular synfire connectivity between groups. the regular structure observed is a result of neurons cooperating to excite targets. a neuron most effectively excites a target when the target is shared with a neuron in its group, so convergent synapses are favored for development into supersynapses. a group accumulates shared targets until the maximum allowed number of supersynapses is reached. there are five neurons per synfire group in this network because the maximum number of supersynapses allowed by axon remodeling is five per neuron. (b) the reconnection point is splayed across several groups. the connections that form first are to the group nearest the top of the chain. the downstream connections follow due to the elevated probability of re-ignition at the initial connection point. the splayed connectivity allows spontaneous activity to restart when the excitation reaches this point, because neurons across several synfire groups spike and sequential activity is most stable when a full synfire group fires. however, the synapses at the reconnection point are potentiated often enough to remain stable. (c) defects sometimes appear as the chain emerges. small defects like the one depicted can remain stable. severe defects are not stable and never appear, due to the lack of a clear sequence that is consistently reinforced by stdp.

fig.2. a raster plot of the population activity after a synfire chain has self-organized. the neurons are labeled according to their time of first spike. some neurons in the chain do not spike during the first iteration around the loop due to fluctuations of the membrane potential, but they do spike during subsequent iterations. these neurons have the highest-valued labels across the top of the plot.

fig.3. (a) two simulated synfire chain length distributions for two different potentiation decay parameters. as the potentiation decay is slowed, the resulting distribution of lengths narrows. sample size: 100 networks. neurons. (b) the mean synfire chain length as a function of network size. the mean length saturates as the network size increases. sample size of each data point: 100 networks
.the error bars denote the standard error of the mean .the potentiation decay suppresses runaway synaptic growth resulting from positive feedback along active synapses .two networks are compared by setting the activation threshold to ( `` active '' ) and ( `` silent '' ) , respectively .the `` silent '' network effectively has all synaptic conductance set to zero . in * ( a ) * , a potentiation decay is applied to all synapses after each 1 s interval of simulated time . in * ( b ) * , there is no potentiation decay . in both scenariosthere is no positive feedback in the silent networks , so these distributions ( light gray ) are stable in both * ( a ) * and * ( b)*. however , in the networks with active synapses , only the distribution in the network with potentiation decay * ( a ) * remains stable indefinitely . in the network without potentiation decay * ( b ) * , synapses grow large resulting in runaway activity .histogram of synaptic weights in network with silent synapses .a group of training neurons spikes at the beginning of every trial .the distribution of weights onto the non - training neuron ( pool neurons ) stabilizes with a higher mean weight .the synapses in these simulated networks are silenced ( i.e. , ) in order to emphasize that the net strengthening is independent of interactions between individual neurons .when interactions are allowed , the strongest synapses may overcome the potentiation decay leading to development of strong synapses within the network .+ fig.6 . * ( a ) * an example of re - ignition of a developing chain during the spontaneous activity period in a training trial .spike raster of a network of 400 neurons are shown . at , the training neurons are stimulated , and spikes propagate down the chain until around ms .spontaneous activity starts afterwards .the chain is re - ignited around ms , evident from the sequential spikes shown in the shaded area .the re - ignition starts from a random point in the chain . *( b ) * spike probability for 1000 neurons is plotted against neuron label over ms of spontaneous activity in a network with synaptic weights held fixed ( no synaptic plasticity ) .the neurons are wired such that a synfire chain is embedded in an otherwise randomly connected network .neurons labeled 1 through 130 are connected into a synfire chain , with 1 - 5 forming the first group and 126 - 130 forming the last .synaptic weights of all other connections are drawn from the the synaptic weight distribution in the dynamical ground state ( fig.[necessary_decay]a ) .random spiking of neurons in the synfire chain often leads to re - ignition and propagation of spikes down the chain .this makes neurons at the end of the synfire chain have the highest probability of spiking .the dashed line is the uniform spike probability expected in the absence of the embedded synfire chain . *( c ) * preferential targeting emerging during training is measured by averaging the sum of active synaptic weights targeting the chain over the length of the chain .this is plotted as a function of partial chain length over training trials for two different values of the potentiation decay .the `` enhancement '' is calculated as the sum of weights divided by the average sum of weights onto a pool neuron in the stable ground state .this measurement was repeated over 10 independent instances of the network . 
also measured and averaged (dotted lines) is the sum of weights targeting random pool neurons. pool neurons are more likely to target chain neurons than other pool neurons, and the likelihood increases as the chain grows.

fig.7. chain length distributions from the lottery growth model. (a) chain length distributions are plotted for different values of the preferential targeting probability from the pool neurons to the chain. as this probability decreases, the mean and the variance of the distribution increase. when it is zero, there is no preferential targeting, and the mean length and variance are maximal. (b) comparison of the mean chain length as a function of the network size between the model and the simulations. the simulations were done for five network sizes. for each network size, 150 simulations were performed. the data points are the mean chain lengths and the error bars indicate the standard error of the mean. the model prediction is plotted as the solid line. the parameter in the model was picked such that the root-mean-square error between the predictions and the simulations at the five network sizes is minimized. the dotted line is the prediction in the absence of preferential targeting, for comparison.

fig.8. mean chain length is offset by the ltd window size.

fig.9. simulations of a 2500-neuron network with two training sets. preferential targeting causes the two chains to merge.
synfire chains are thought to underlie precisely - timed sequences of spikes observed in various brain regions and across species . how they are formed is not understood . here we analyze self - organization of synfire chains through the spike - timing dependent plasticity ( stdp ) of the synapses , axon remodeling , and potentiation decay of synaptic weights in networks of neurons driven by noisy external inputs and subject to dominant feedback inhibition . potentiation decay is the gradual , activity - independent reduction of synaptic weights over time . we show that potentiation decay enables a dynamic and statistically stable network connectivity when neurons spike spontaneously . periodic stimulation of a subset of neurons leads to formation of synfire chains through a random recruitment process , which terminates when the chain connects to itself and forms a loop . we demonstrate that chain length distributions depend on the potentiation decay . fast potentiation decay leads to long chains with wide distributions , while slow potentiation decay leads to short chains with narrow distributions . we suggest that the potentiation decay , which corresponds to the decay of early long - term potentiation of synapses ( e - ltp ) , is an important synaptic plasticity rule in regulating formation of neural circuity through stdp .
recently , wireless sensor networks ( wsns ) have attracted a great deal of research interest because of their unique features that allow a wide range of applications in the areas of defence , environment , health and home .wsns are usually composed of a large number of densely deployed sensing devices which can transmit their data to the desired destination through multihop relays . considering the traditional wireless networks such as cellular systems ,the primary goal in such systems is to provide high qos and bandwidth efficiency .the base stations have easy access to the power supply and the mobile user can replace or recharge exhausted batteries in the handset .however , power conservation is getting more important , especially for wsns .one of the most important constraints on wsns is the low power consumption requirement as sensor nodes carry limited , generally irreplaceable , power sources . therefore , low complexity and high energy efficiency are the most important design characteristics for wsns . in a cooperative wsn , nodes relay signals to each other in order to propagate redundant copies of the same signals to the destination nodes . among the existing relaying schemes , the amplify - and - forward ( af ) and the decode - and - forward ( df ) are the most popular approaches . in the af scheme ,the relay nodes amplify the received signal and rebroadcast the amplified signals toward the destination nodes . in the df scheme, the relay nodes first decode the received signals and then regenerate new signals to the destination nodes subsequently .some power allocation methods have been proposed for wsns to obtain the best possible signal - to - noise ratio ( snr ) or best possible quality of service ( qos ) at the destinations . by adjusting appropriately the power levels used for the links between the sources , the relays and the destinations ,significant performance gains can be obtained for a given power budget .most of the research on power allocation for wsns are based on the assumption of perfect synchronization and available channel state information ( csi ) at each node .a wsn is said to have full csi when all of its nodes have access to accurate and up - to - date csi .when full csi is available to all the nodes , the power of each node can be optimally allocated to improve the system efficiency and lower the outage probability or bit error rate ( ber ) . in wsns, some power allocation problems can be formulated as centralized or distributed optimization problems subject to power constraints on certain groups of signals . for the centralized schemes ,a network controller is required which is responsible for monitoring the information of the whole network such as the csi and snr , calculating the optimum power allocation parameters of each link and sending them to all nodes via feedback channels .this scheme considers all the available links but it has two major drawbacks .the first one is the high computational burden and storage demand at the network controller .the second one is that it requires a significant amount of control information provided by feedback channels which leads to a loss in bandwidth efficiency . for the distributed schemes , each node only needs to have the knowledge of its partner information and calculate its own power allocation parameter .therefore , a distributed scheme requires less control information and is ideally suited to wsns .however , the performance of distributed schemes is inferior to centralized schemes . 
due to the inherent limitations in the sensor node size , power and cost ,they are only able to communicate in a short range .therefore , multihop communication is employed to enhance the coverage of wsns . by using multihop transmissions , the rapid decay of the received signal which is caused by the increased transmission distancecan be overcome .moreover , pathways around the obstacles between the source and destination can be provided to avoid the signal shadowing .several works about power allocation of multihop transmission systems have been proposed in - .the work reported in develops a cross - layer model for multihop communication and analyzes the energy consumption of multihop topologies with equal distance and optimal node spacing .centralized and distributed schemes for power allocation are presented to minimize the total transmission power under a constraint on the ber at the destination in and . in ,two optimal power allocation schemes are proposed to maximize the instantaneous received snr under short - term and long - term power constraints . in ,the outage probability is considered as the optimization criterion to derive the optimal power allocation schemes under a given power budget for both regenerative and non - regenerative systems . in this paper , we consider a general multihop wsn where the af relaying scheme is employed .the proposed strategy is to jointly design the linear receivers and the power allocation parameters that contain the optimal complex amplification coefficients for each relay node via an alternating optimization approach .two kinds of linear receivers are designed , the minimum mean - square error ( mmse ) receiver and the maximum sum - rate ( msr ) receiver .they can be considered as solutions to constrained optimization problems where the objective function is the mean - square error ( mse ) cost function or the sum - rate ( sr ) and the constraint is a bound on the power levels among the relay nodes .then , the constrained mmse or msr expressions for the linear receiver and the power allocation parameter can be derived .the major novelty in these strategies presented here is that they are applicable to general multihop wsns with multi source nodes and destination nodes , as opposed to the simple two - hop wsns with one pair of source - destination nodes . unlike the previous works on the power allocation for multihop systems in - , in our work, the power allocation and receiver coefficients are jointly optimized .the joint strategies were proposed for a two - hop wsn with multiple relay nodes in our previous work . in order to increase the applicability of our investigation , in this paper ,we develop joint strategies for general multihop wsns .they can be considered as an extension of the strategies proposed for the two - hop wsns and more complex mathematical derivations are presented . moreover ,different kinds of power constraints can be considered and compared .for the mmse receiver , we present three strategies where the allocation of power level across the relay nodes is subject to global , local and individual power constraints . another fundamental contribution of this work is the derivation of a closed - form solution for the lagrangian multiplier ( ) that arises in the expressions of the power allocation parameters . 
for the msr receiver , the local power constraints are considered .we propose a strategy that employs iterations with the generalized rayleigh quotient to solve the optimization problem in an alternating fashion .some preliminary results of these work have been reported in .the main contributions of this paper can be summarized as : 1 ) : : constrained mmse expressions for the design of linear receivers and power allocation parameters for multihop wsns .the constraints include the global , local and individual power constraints .2 ) : : constrained msr expressions for the design of linear receivers and power allocation parameters for multihop wsns subject to local power constraints .3 ) : : alternating optimization algorithms that compute the linear receivers and power allocation parameters in 1 ) and 2 ) to minimize the mean - square error or maximize the sum - rate of the wsn .4 ) : : analysis of the computational complexity and the convergence of the proposed optimization algorithms .the rest of this paper is organized as follows .section ii describes the general multihop wsn system model .section iii develops three joint mmse receiver design and power allocation strategies subject to three different power constraints .section iv develops the joint msr receiver design and power allocation strategy subject to local power constraints .section v contains an analysis of the computational complexity and the convergence .section vi presents and discusses the simulation results , while section vii provides some concluding remarks .consider a general -hop wireless sensor network ( wsn ) with multiple parallel relay nodes for each hop , as shown in fig .the wsn consists of source nodes , destination nodes and relay nodes which are separated into groups : , , ... , .the index refers to the number of nodes after a given phase of transmission starting with and going up to .the proposed optimization algorithms in this paper refer to a particular instance , for which the roles of the nodes acted as sources , relays and destinations have been pre - detemined . in subsequent time slots these rolescan be swapped so that all nodes can actually work as potential sources .we concentrate on a time division scheme with perfect synchronization , for which all signals are transmitted and received in separate time slots .the sources first broadcast the signal vector * s * which contains signals in parallel to the first group of relay nodes .we consider an amplify - and - forward ( af ) cooperation protocol in this paper .an extension to other cooperation protocols is straightforward .each group of relay nodes receives the signals , amplifies and rebroadcasts them to the next group of relay nodes ( or the destination nodes ) . in practice ,we need to consider the constraints on the transmission policy . for example , each transmitting node would transmit during only one phase . in our wsn system , we assume that each group of relay nodes transmits the signal to the nearest group of relay nodes ( or the destination nodes ) directly .we can use a block diagram to indicate the multihop wsn system as shown in fig .2 . let denote the channel matrix between the source nodes and the first group of relay nodes , denote the channel matrix between the group of relay nodes and destination nodes , and denote the channel matrix between two groups of relay nodes as described by where ] for denote the channel coefficients between the group of relay nodes and the destination node .further , ] . 
similarly, for the amplification vector of the preceding relay group we obtain
\[
\mathbf{a}_{i-1} = \big[\mathbf{B}_{i-1}\mathbf{H}_d^H\mathbf{W}\mathbf{W}^H\mathbf{H}_d\mathbf{B}_{i-1}^H \circ E(\mathbf{y}_{i-1}\mathbf{y}_{i-1}^H)^* + N_i\lambda\,\mathbf{I}\big]^{-1}\big[\mathbf{B}_{i-1}\mathbf{H}_d^H\mathbf{W} \circ E(\mathbf{y}_{i-1}\mathbf{s}^H)^*\mathbf{u}\big]. \tag{4}
\]
from (3) and (4), we conclude that
\[
\mathbf{a}_{i} = \big[\mathbf{B}_{i}\mathbf{H}_d^H\mathbf{W}\mathbf{W}^H\mathbf{H}_d\mathbf{B}_{i}^H \circ E(\mathbf{y}_{i}\mathbf{y}_{i}^H)^* + N_{i+1}\lambda\,\mathbf{I}\big]^{-1}\big[\mathbf{B}_{i}\mathbf{H}_d^H\mathbf{W} \circ E(\mathbf{y}_{i}\mathbf{s}^H)^*\mathbf{u}\big], \tag{5}
\]
where the quantities involved are given in the appendix. the expressions in (2) and (5) depend on each other. thus, it is necessary to iterate them with an initial value to obtain the solutions.

the lagrange multiplier can be determined by enforcing the power constraint. when the multiplier is real valued, we have $[(\boldsymbol{\Phi}_i + N_{i+1}\lambda\,\mathbf{I})^{H}]^{-1} = (\boldsymbol{\Phi}_i + N_{i+1}\lambda\,\mathbf{I})^{-1}$. using an eigenvalue decomposition (evd) of $\boldsymbol{\Phi}_i$ and the properties of the trace operation, the constraint (6) reduces to a scalar equation in $\lambda$. since $\boldsymbol{\Phi}_i$ has limited rank, only the first columns of the eigenvector matrix span its column space, which causes the remaining columns to become zero vectors and the corresponding eigenvalues to be zero. therefore, we obtain a polynomial in $\lambda$, and the lagrange multiplier is determined by solving this polynomial. following the same steps as in section iii.a, we obtain polynomials in the corresponding multipliers for the per-group constraints.

thirdly, we consider the case where the power of each relay node is limited to some value. the proposed method can be considered as the following optimization problem
\[
[\mathbf{W},\mathbf{a}_1,\ldots,\mathbf{a}_{M-1}] = \arg\min_{\mathbf{W},\mathbf{a}_1,\ldots,\mathbf{a}_{M-1}} E\big[\|\mathbf{s}-\mathbf{W}^H\mathbf{d}\|^2\big], \quad \text{subject to } p_{i,j}=P_{T,i,j},\; i=1,\ldots,M-1,\; j=1,\ldots,N_i,
\]
where $p_{i,j}$ is the transmitted power of the $j$-th relay node in the $i$-th group. using the method of lagrange multipliers once again, we have the following lagrangian function
\[
\mathcal{L} = E\big[\|\mathbf{s}-\mathbf{W}^H\mathbf{d}\|^2\big] + \sum_{i=1}^{M-1}\sum_{j=1}^{N_i}\lambda_{i,j}\big(N_{i+1}a_{i,j}^{*}a_{i,j}-P_{T,i,j}\big).
\]
following the same steps as described in section iii.a, we get the same optimal expression for the receiver as in (2), and the optimal expression for the amplification coefficient
\[
a_{i,j} = [\,\cdot\,]^{-1}\Big[\mathbf{z}_i(j)-\sum_{l\neq j}\boldsymbol{\Phi}_i(j,l)\,a_{i,l}\Big],
\]
where the leading factor, $\mathbf{z}_i$ and $\boldsymbol{\Phi}_i$ have the same expressions as in (7) and (8). the lagrange multiplier is determined by enforcing the corresponding individual power constraint. table i shows a summary of our proposed mmse designs with global, local and individual power constraints which will be used for the simulations.
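the alternating structure of the mmse design above can be illustrated with a small numerical sketch for a single relay hop. for clarity the relay power constraint, and hence the lagrange multiplier, is omitted, so the a-step below is the unconstrained minimizer; the hadamard-product form of the matrix phi and the vector z mirrors the expressions above, but the dimensions, noise levels and channel statistics are arbitrary choices and this is not the paper's algorithm verbatim.

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, Nr, Nd = 2, 4, 2        # sources, relays (single relay hop), destinations
sigma2 = 0.1                # noise variance at the relays and destinations

# i.i.d. Rayleigh flat-fading channels: sources -> relays and relays -> destinations
Hs = (rng.standard_normal((Nr, Ns)) + 1j * rng.standard_normal((Nr, Ns))) / np.sqrt(2)
Hd = (rng.standard_normal((Nd, Nr)) + 1j * rng.standard_normal((Nd, Nr))) / np.sqrt(2)

a = np.ones(Nr, dtype=complex)          # relay amplification coefficients
for it in range(6):
    # --- step 1: MMSE receiver for fixed a (unit-power source symbols) ----
    A = np.diag(a)
    Heq = Hd @ A @ Hs                                   # end-to-end channel
    Rd = (Heq @ Heq.conj().T
          + sigma2 * Hd @ A @ A.conj().T @ Hd.conj().T  # forwarded relay noise
          + sigma2 * np.eye(Nd))                        # destination noise
    W = np.linalg.solve(Rd, Heq)                        # Nd x Ns receive filters

    # --- step 2: amplification coefficients for fixed W -------------------
    C = W.conj().T @ Hd                                 # column j = W^H h_d,j
    Gram = C.conj().T @ C
    Phi = Gram * (np.conj(Hs @ Hs.conj().T) + sigma2 * np.eye(Nr))  # Hadamard product
    z = np.diag(Hs @ C)                                 # z_j = h_s,j^T W^H h_d,j
    a = np.linalg.solve(Phi, np.conj(z))                # unconstrained minimizer

    # --- monitor the mean-square error ------------------------------------
    mse = (Ns - 2 * np.real(z @ a)
           + np.real(a.conj() @ Phi @ a)
           + sigma2 * np.real(np.trace(W.conj().T @ W)))
    print(f"iteration {it}: MSE = {mse:.4f}")
```

because each step is the exact minimizer of the same mse over its own block of variables, the printed mse is non-increasing from one iteration to the next; this monotone behavior is what the convergence analysis in the next section formalizes, and in the constrained designs the a-step is replaced by the lagrange-multiplier solutions summarized in table i.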
if the quasi-static fading channel (block fading) is considered in the simulations, we only need two iterations. alternatively, low-complexity adaptive algorithms can be used to compute the linear receiver and the power allocation parameter vector.

in this section, an analysis of the computational complexity and the convergence of the algorithms is developed. we first illustrate the computational complexity requirements of the proposed mmse and msr designs. we quantify the computational complexity of the algorithms, which require a given number of arithmetic operations per iteration; the lower the number of operations, the lower the power consumption. then, we make use of convergence results for alternating optimization algorithms and present a set of sufficient conditions under which our proposed algorithms converge to the optimal solutions.

table iii and table iv list the computational complexity per iteration in terms of the number of multiplications, additions and divisions for our proposed joint linear receiver design (mmse and msr) and power allocation strategies. for the joint mmse designs, we use the qr algorithm to perform the eigendecomposition. please note that in this paper the qr decomposition employs the householder transformation. the two iteration counts denote the number of iterations of the qr algorithm and of the power method, respectively. the computational complexity reported in table iii does not include the cost of solving the equations in (31), (37) and (41), because for the method with a global power constraint, equation (31) is a higher-order polynomial whose complexity is difficult to quantify. as the multiplications dominate the computational complexity, in order to compare the proposed joint mmse and msr designs, the number of multiplications versus the number of relay nodes in each group for each iteration is displayed in fig. 3 and fig. 4. for the purpose of illustration, we fix the remaining system parameters. for the mmse design, it can be seen that our proposed receiver with a global constraint has the same complexity as the receiver with local constraints. in practice, when the cost of solving the equations in (31) and (37) is taken into account, the method with a global constraint will require a higher computational complexity than the one with local constraints, and the difference becomes larger as the number of hops increases. when the individual power constraints are considered, the computational complexity is lower than for the other constraints because there is no need to compute the eigendecomposition. for the msr design, employing the power method to calculate the dominant eigenvectors has a lower computational complexity than employing the qr algorithm.
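as a concrete illustration of the lower-cost eigenvector computation mentioned above, the following sketch implements a generic power iteration for the dominant generalized eigenvector, i.e. the maximizer of a generalized rayleigh quotient x^H A x / x^H B x. it is not the paper's msr recursion: the matrices A and B below are random hermitian placeholders, whereas in the msr design they would be built from the channel, receiver and noise statistics.

```python
import numpy as np

def dominant_generalized_eigvec(A, B, iters=500, tol=1e-12):
    """Power iteration on B^{-1} A: returns an approximate maximizer of the
    generalized Rayleigh quotient x^H A x / x^H B x together with its value.
    A is Hermitian positive semidefinite and B is Hermitian positive definite."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(iters):
        y = np.linalg.solve(B, A @ x)         # apply B^{-1} A without inverting B
        x = y / np.linalg.norm(y)
        lam_new = np.real(x.conj() @ A @ x) / np.real(x.conj() @ B @ x)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return x, lam

# sanity check against a dense generalized eigensolver
rng = np.random.default_rng(1)
M1 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M1 @ M1.conj().T                          # Hermitian positive semidefinite
B = M2 @ M2.conj().T + 4.0 * np.eye(4)        # Hermitian positive definite
x, lam = dominant_generalized_eigvec(A, B)
print("power-method Rayleigh quotient:", round(lam, 6))
print("largest generalized eigenvalue:",
      round(max(np.real(np.linalg.eigvals(np.linalg.solve(B, A)))), 6))
```

each iteration costs one linear solve and a few matrix-vector products, which is why, per the comparison above, the power method is cheaper than running a full qr eigendecomposition when only the dominant eigenvector is needed.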
[table iii and table iv: number of multiplications, additions and divisions per iteration of the proposed mmse and msr designs for each power constraint.]

to obtain convergence conditions, we need to define a metric space and the hausdorff distance, which will be used extensively. a metric space is an ordered pair $(\mathcal{M}, d)$, where $\mathcal{M}$ is a nonempty set and $d$ is a metric on $\mathcal{M}$, i.e., a function $d:\mathcal{M}\times\mathcal{M}\to\mathbb{R}$ such that for any $x, y, z \in \mathcal{M}$ the following conditions hold: 1) $d(x,y)\geq 0$; 2) $d(x,y)=0$ if and only if $x=y$; 3) $d(x,y)=d(y,x)$; 4) $d(x,z)\leq d(x,y)+d(y,z)$. the hausdorff distance measures how far two subsets of a metric space are from each other and is defined by
\[
d_H(\mathcal{X},\mathcal{Y}) = \max\Big\{\sup_{x\in\mathcal{X}}\inf_{y\in\mathcal{Y}} d(x,y),\; \sup_{y\in\mathcal{Y}}\inf_{x\in\mathcal{X}} d(x,y)\Big\}.
\]
the proposed joint mmse designs can be stated as an alternating minimization strategy based on the mse defined in (9), in which the sequences of compact constraint sets converge to their respective limit sets. although we are not given these limit sets directly, we have the corresponding sequences of compact sets. the aim of our proposed joint mmse designs is to find a sequence of receive filters and power allocation parameters whose mse converges to the optimal mse, i.e., the distance between them tends to zero (condition (65)), where the limits correspond to the optimal receive filter and power allocation parameters, respectively.
equation ( 65 ) can be considered as the necessary condition of the following equations if the other power allocation parameters are kept constant when computing during the iterations . to present a set of sufficient conditions under which the proposed algorithms converge, we need the so - called three - point and four - point properties .let us assume that there is a function such that the following conditions are satisfied : 1 ) : : _ three - point property _ : + for all , , , and + , we have 2 ) : : _ four - point property _ : + for all , , , and + , we have these two properties are the mathematical expressions of the sufficient conditions for the convergence of the alternating minimization algorithms which are stated in and .it means that if there exists a function with the parameter during two iterations that satisfies the two inequalities for the mse in ( 67 ) and ( 68 ) , the convergence of our proposed mmse designs that make use of the alternating minimization algorithm can be proved by the theorem below ._ theorem _ : let , be compact subsects of the compact metric space such that and let mse : be a continuous function .let conditions 1 ) and 2 ) hold .then , for the proposed algorithms , we have thus , equation ( 65 ) can be satisfied .a general proof of this theorem is detailed in and .the proposed joint msr designs can be stated as an alternating maximization strategy based on the sr defined in ( 47 ) that follows a similar procedure to the one above .in this section , we assess the performance of our proposed joint designs of the linear receiver and power allocation methods and compare them with the equal power allocation method which allocates the same transmitting power level equally for all links from the relay nodes . for the purpose of fairness ,we assume that the total transmitting power for all relay nodes in the network is the same which can be indicated as .we consider a 3-hop ( =3 ) wireless sensor network as an example even though the algorithms can be used with any number of hops .the number of source nodes ( ) , two groups of relay nodes ( ) and destination nodes ( ) are 1 , 4 , 4 and 2 , respectively .we consider an af cooperation protocol .the quasi - static fading channel ( block fading channel ) is considered in our simulations whose elements are rayleigh random variables ( with zero mean and unit variance ) and assumed to be invariant during the transmission of each packet . in our simulations ,the channel is assumed to be known at the destination nodes . for channel estimation algorithms for wsns and other low - complexity parameter estimation algorithms ,one refers to and . during each phase ,the sources transmit the qpsk modulated packets with 1500 symbols .the noise at the destination nodes is modeled as circularly symmetric complex gaussian random variables with zero mean . a perfect ( error free ) feedback channel between destination nodes and relay nodesis assumed to transmit the amplification coefficients . for the mmse design , it can be seen from fig .5 that our three proposed methods achieve a better performance than the equal power allocation method . among them , the method with a global constraint has the best performance whereas the method with individual constraints has the worst performance .this result is what we expect because a global constraint provides the largest degrees of freedom for allocating the power among the relay nodes whereas an individual constraint provides the least . for the msr design, it can be seen from fig . 
6that our proposed method achieves a better sum - rate performance than the equal power allocation method . using the power method to calculate the dominant eigenvector yields a very similar result to the qr algorithm but requires a lower complexity . besides the equal power allocation scheme , a mmse power allocation scheme reported in whereonly the local power constraints are considered has also been used for comparison .it can be seen from fig . 7that our proposed mmse and msr designs can achieve a very similar or better performance .further advantage is that our proposed schemes only optimize the relay amplifying vectors ( or diagonal matrices ) whereas in the optimal relay amplifying matrices are needed which requires more feedback transmissions as well as information exchanges among relay nodes in each group .note that in order to have a fair comparison , we only employ power allocation schemes for the relay nodes and assume every source node has unit transmitting power in the simulations . and equal power allocation scheme.,width=336 ] in practice, the feedback channel can not be error free . in order to study the impact of feedback channel errors on the performance, we employ the binary symmetric channel ( bsc ) as the model for the feedback channel and quantize each complex amplification coefficient to an 8-bit binary value ( 4 bits for the real part , 4 bits for the imaginary part ) .the error probability ( pe ) of the bsc is fixed at .the dashed curves in fig . 5 and fig . 6show the performance degradation compared to the performance when using a perfect feedback channel . to show the performance tendency of the bsc for other values of pe, we fix the snr at 10 db and choose pe ranging from 0 to .the performance curves are shown in fig . 8 and fig .9 , which illustrate the ber and the sum - rate performance versus pe of our two proposed joint designs of the receivers .it can be seen that along with the increase in pe , their performance becomes worse .finally , we replace the perfect csi with the estimated channel coefficients to compute the receive filters and power allocation parameters at the destinations .we employ the beacon channel estimation which was proposed in .10 illustrates the impact of the channel estimation on the performance of our proposed mmse and smr design with local constraints by comparing it to the performance of perfect csi .the quantity denotes the number of training sequence symbols per data packet .please note that in these simulations perfect feedback channel is considered and the qr algorithm is used in the msr design .for both the mmse and msr designs , it can be seen that when is set to 10 , the beacon channel estimation leads to an obvious performance degradation compared to the perfect csi .however , when is increased to 50 , the beacon channel estimation can achieve a similar performance to the perfect csi .other scenarios and network topologies have been investigated and the results show that the proposed algorithms work very well with channel estimation algorithms and a small number of training symbols .in this paper , we have presented alternating optimization algorithms for receive filter design and power adjustment which can be applied to general multihop wsns .mmse and msr criteria have been considered in the development of the algorithmic solutions .simulations have shown that our proposed algorithms achieve a significant better performance than the equal power allocation and power allocation scheme in .a possible extension of this work is 
employing low - complexity adaptive algorithms to compute the linear receiver and power allocation parameters .the algorithms can also be employed in other multihop wireless networks along with non - linear receivers .j. n. laneman , d. n. c. tse and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ ieee trans .inf . theory _3062 - 3080 , dec . 2004 .t. peng , r. c. de lamare and a. schmeink , adaptive distributed space - time coding based on adjustable code matrices for cooperative mimo relaying systems " , _ ieee trans . on communications _61 , no.7 , july 2013 .k. vardhe , d. reynolds , and b. d. woerner `` joint power allocation and relay selection for multiuser cooperative communication '' _ ieee trans .wireless commun ._ , vol . 9 , no .4 , pp . 1255 - 1260 , apr .2010 .j. adeane , m. r. d. rodrigues , and i. j. wassell , `` centralised and distributed power allocation algorithms in cooperative networks '' _ ieee 6th workshop on signal processing advances in wireless communications _ , 2005 .x. deng and a. haimovich , `` power allocation for cooperative relaying in wireless networks , '' _ ieee trans .50 , no . 12 , pp .3062 - 3080 , jul .j. huang , z. han , m. chiang , and h. v. poor , `` auction - based resource allocation for cooperative communications , '' _ ieee j. sel .areas commun ._ , vol . 26 , no . 7 , pp . 1226 - 1237 , sep . 2008 .t. wang , r. c. de lamare , and a. schmeink , `` joint linear receiver design and power allocation using alternating optimization algorithms for wireless sensor networks , '' _ ieee trans .9 , nov . 2012 . r.c .de lamare , r. sampaio - neto , minimum mean - squared error iterative successive parallel arbitrated decision feedback detectors for ds - cdma systems " , ieee trans .5 , may 2008 , pp . 778 - 789 .r. c. de lamare , adaptive and iterative multi - branch mmse decision feedback detection algorithms for multi - antenna systems " , _ ieee transactions on wireless communications _ , vol .2 , february 2013 .r. c. de lamare and r. sampaio - neto , adaptive reduced - rank processing based on joint and iterative interpolation , decimation , and filtering , " ieee trans .signal process .57 , no . 7 , july 2009 , pp .2503 - 2514 .de lamare and r. sampaio - neto , adaptive reduced - rank equalization algorithms based on alternating optimization design techniques for mimo systems , " ieee trans . veh .60 , no . 6 , pp . 2482 - 2494 , july 2011 . j. w. choi , a. c. singer , j lee , n. i. cho , improved linear soft - input soft - output detection via soft feedback successive interference cancellation , " _ ieee trans ._ , vol.58 , no.3 , pp.986 - 996 , march 2010 .p. li , r. c. de lamare and r. fa , multiple feedback successive interference cancellation detection for multiuser mimo systems , " _ ieee transactions on wireless communications _ , vol .10 , no . 8 , pp . 2434 - 2439 , august 2011 .
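As an illustration of the feedback-channel model used in the simulations discussed above (each complex amplification coefficient quantized to 8 bits, 4 for the real part and 4 for the imaginary part, and sent through a binary symmetric channel with error probability Pe), the following sketch can be used. The quantizer range and the uniform mid-rise design are assumptions made only for illustration; they are not specified in the text, and this is not the authors' implementation.

```python
import random

def quantize(value, bits=4, vmax=1.0):
    """Uniformly quantize a real value in [-vmax, vmax] to an integer code word."""
    levels = 2 ** bits
    step = 2.0 * vmax / levels
    idx = int((value + vmax) / step)
    return min(max(idx, 0), levels - 1)

def dequantize(idx, bits=4, vmax=1.0):
    levels = 2 ** bits
    step = 2.0 * vmax / levels
    return -vmax + (idx + 0.5) * step

def through_bsc(idx, bits=4, pe=1e-3):
    """Flip each bit of the code word independently with probability pe."""
    for b in range(bits):
        if random.random() < pe:
            idx ^= (1 << b)
    return idx

def feedback(coeff, pe=1e-3, vmax=1.0):
    """Send one complex amplification coefficient over the BSC feedback link."""
    re = through_bsc(quantize(coeff.real, vmax=vmax), pe=pe)
    im = through_bsc(quantize(coeff.imag, vmax=vmax), pe=pe)
    return complex(dequantize(re, vmax=vmax), dequantize(im, vmax=vmax))

# Example: a relay coefficient fed back with error probability 1e-3
print(feedback(0.42 - 0.17j, pe=1e-3))
```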
In this paper, we consider a multihop wireless sensor network with multiple relay nodes per hop in which the amplify-and-forward scheme is employed. We present algorithmic strategies to jointly design the linear receivers and the power allocation parameters via an alternating optimization approach, subject to different power constraints: global, local and individual ones. Two design criteria are considered: the first minimizes the mean-square error and the second maximizes the sum-rate of the wireless sensor network. We derive constrained minimum mean-square error and constrained maximum sum-rate expressions for the linear receivers and for the power allocation parameters, which contain the optimal complex amplification coefficients of each relay node. An analysis of the computational complexity and of the convergence of the algorithms is also presented. Computer simulations show that the proposed methods perform well in terms of bit error rate and sum-rate compared with equal power allocation and with an existing power allocation scheme. Index terms: minimum mean-square error (MMSE) criterion, maximum sum-rate (MSR) criterion, power allocation, multihop transmission, wireless sensor networks (WSNs), relays.
many interesting nonlinear models in physical sciences and engineering are given by systems of odes .when studying these systems it is desirable to have a global understanding of the bifurcation and chaos `` spectrum '' : the systematics of periodic orbits , stable as well as unstable ones at fixed and varying parameters , the type of chaotic attractors which usually occur as limits of sequences of periodic regimes , etc .however , this is by far not a simple job to accomplish neither by purely analytical means nor by numerical work alone . in analytical aspect , just recollect the long - standing problem of the number of limit cycles in _ planar _ systems of odes . as chaotic behaviormay appear only in systems of more than three autonomous odes , it naturally leads to problems much more formidable than counting the number of limit cycles in planar systems . as numerical studyis concerned , one can never be confident that all stable periodic orbits up to a certain length have been found in a given parameter range or no short unstable orbits in a chaotic attractor have been missed at a fixed parameter set , not to mention that it is extremely difficult to draw global conclusions from numerical data alone . on the other hand , a properly constructed symbolic dynamics , being a coarse - grained description ,provides a powerful tool to capture global , topological aspects of the dynamics .this has been convincingly shown in the development of symbolic dynamics of one - dimensional ( 1d ) maps , see , e.g. , .since it is well known from numerical observations that chaotic attractors of many higher - dimensional dissipative systems with one positive lyapunov exponent reveal 1d - like structure in some poincar sections , it has been suggested to associate the systematics of numerically found periodic orbits in odes with symbolic dynamics of 1d maps . while this approach has had some success ( see , e.g. , chapter 5 of ), many new questions arose from the case studies .for example , + 1 .the number of short stable periodic orbits found in odes is usually less than that allowed by the admissibility conditions of the corresponding 1d symbolic dynamics . within the 1d framework it is hard to tell whether a missing period was caused by insufficient numerical search orwas forbidden by the dynamics .\2 . in the poincar sections of odes , at a closer examination , the attractors often reveal two - dimensional features such as layers and folds .one has to explain the success of 1d description which sometimes even turns out much better than expected . at the same time, the limitation of 1d approach has to be analyzed as the poincar maps are actually two - dimensional .early efforts were more or less concentrated on stable orbits , while unstable periods play a fundamental role in organizing chaotic motion .one has to develop symbolic dynamics for odes which would be capable to treat stable and unstable periodic orbits alike , to indicate the structure of some , though not all , chaotic orbits at a given parameter set .the elucidation of these problems has to await a significant progress of symbolic dynamics of 2d maps .now the time is ripe for an in depth symbolic dynamics analysis of a few typical odes .this kind of analysis has been carried out on several non - autonomous systems , where the stroboscopic sampling method greatly simplifies the calculation of poincar maps . 
in this paperwe consider an autonomous system , namely , the lorenz model , in which one of the first chaotic attractor was discovered .the lorenz model consists of three equations it is known that several models of hydrodynamical , mechanical , dynamo and laser problems may be reduced to this set of odes .the system ( [ lorenz ] ) contains three parameters , and , representing respectively the rayleigh number , the prandtl number and a geometric ratio .we will study the system in a wide -range at fixed and .we put together a few known facts on eq .[ lorenz ] to fix the notations . for detailed derivations one may refer to the book by c. sparrow .for the origin is a globally stable fixed point .it loses stability at . a 1d unstable manifold and a 2d stable manifold come out from the unstable origin .the intersection of the 2d with the poincar section will determine a demarcation line in the partition of the 2d phase plane of the poincar map .for there appears a pair of fixed points these two fixed points remain stable until reaches 24.74 .although their eigenvalues undergo some qualitative changes at and a strange invariant set ( not an attractor yet ) comes into life at , here we are not interested in all this .it is at a sub - critical hopf bifurcation takes place and chaotic regimes commence .our -range extends from 28 to very big values , e.g. , 10000 , as nothing qualitatively new appears at , say , . before undertaking the symbolic dynamics analysiswe summarize briefly what has been done on the lorenz system from the viewpoint of symbolic dynamics .guckenheimer and williams introduced the geometric lorenz model for the vicinity of which leads to symbolic dynamics on two letters , proving the existence of chaos in the geometric model .however , as smale pointed out it remains an unsolved problem as whether the geometric lorenz model means to the real lorenz system .though not using symbolic dynamics at all , the paper by tomita and tsuda studying the lorenz equations at a different set of parameters and is worth mentioning .they noticed that the quasi-1d chaotic attractor in the poincar section outlined by the upward intersections of the trajectories may be directly parameterized by the coordinates .a 1d map was devised in to numerically mimic the global bifurcation structure of the lorenz model . c. sparrow used two symbols and to encode orbits without explicitly constructing symbolic dynamics . in appendixj of sparrow described a family of 1d maps as `` an obvious choice if we wish to try and model the behavior of the lorenz equations in the parameter range , and '' . in what followswe will call this family the _ lorenz - sparrow map_. refs . and have been instrumental for the present study . in fact , the 1d maps to be obtained from the 2d upward poincar maps of the lorenz equations after some manipulations belong precisely to the family suggested by sparrow . in systematics of stable periodic orbits in the lorenz equations was compared with that of a 1d anti - symmetric cubic map .the choice of an anti - symmetric map was dictated by the invariance of the lorenz equations under the discrete transformation indeed , most of the periods known to are ordered in a `` cubic '' way .however , many short periods present in the 1d map have not been found in the lorenz equations .it was realized in that a cubic map with a discontinuity in the center may better reflect the odes and many of the missing periods are excluded by the 2d nature of the poincar map . 
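The three equations of the model and the coordinates of the nontrivial fixed points are not reproduced in the passage above. In the standard notation, with sigma the Prandtl number, r the Rayleigh number and b the geometric factor (matching the parameters named in the text), they read as follows; the exact typography of the original is of course an assumption.

```latex
\dot{x} = \sigma\,(y - x), \qquad
\dot{y} = r\,x - y - x z, \qquad
\dot{z} = x y - b z ,
```

and the pair of fixed points that appears for r > 1 is

```latex
C_{\pm} = \Bigl( \pm\sqrt{b\,(r-1)},\; \pm\sqrt{b\,(r-1)},\; r-1 \Bigr) ,
```

which, for sigma = 10 and b = 8/3, remain stable until r is approximately 24.74, the subcritical Hopf bifurcation mentioned in the text.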
instead of devising model maps for comparison one should generate all related 1d or 2d maps directly from the lorenz equations and construct the corresponding symbolic dynamics .this makes the main body of the present paper . for physicistssymbolic dynamics is nothing but a coarse - grained description of the dynamics .the success of symbolic dynamics depends on how the coarse - graining is performed , i.e. , on the partition of the phase space . from a practical point of viewwe can put forward the following requirements for a good partition .1 . it should assign a _unique _ name to each unstable periodic orbit in the system ; 2 .an ordering rule of all symbolic sequences should be defined ; 3 .admissibility conditions as whether a given symbolic sequence is allowed by the dynamics should be formulated ; 4 .based on the admissibility conditions and ordering rule one should be able to generate and locate all periodic orbits , stable and unstable , up to a given length .symbolic dynamics of 1d maps has been well understood .symbolic dynamics of 2d maps has been studies in .we will explain the main idea and technique in the context of the lorenz equations .a few words on the research strategy may be in order .we will first calculate the poincar maps in suitably chosen sections . if necessary some forward contracting foliations ( fcfs , to be explained later ) are superimposed on the poincar map , the attractor being part of the backward contracting foliations ( bcfs ) .then a one - parameter parameterization is introduced for the quasi-1d attractor . for our choice of the poincar sections the parameterizationis simply realized by the coordinates of the points . in terms of these a first return map constructed . using the specific property of first return maps that the set remains the same before and after the mapping, some parts of may be safely shifted and swapped to yield a new map , which precisely belongs to the family of lorenz - sparrow map . in so doing , all 2d features ( layers , folds , etc . )however , one can always start from the symbolic dynamics of the 1d lorenz - sparrow map to generate a list of allowed periods and then check them against the admissibility conditions of the 2d symbolic dynamics . using the ordering of symbolic sequences all allowed periods may be located easily .what said applies to unstable periodic orbits at fixed parameter set .the same method can be adapted to treat stable periods either by superimposing the orbital points on a near - by chaotic attractor or by keeping a sufficient number of transient points .the poincar map in the plane captures most of the interesting dynamics as it contains both fixed points .the -axis is contained in the stable manifold of the origin .all orbits reaching the -axis will be attracted to the origin , thus most of the homoclinic behavior may be tracked in this plane . in principle , either downward or upward intersections of trajectories with the plane may be used to generate the poincar map . however ,upward intersections with have the practical merit to yield 1d - like objects which may be parameterized by simply using the coordinates .[ hlzfig1 ] shows a poincar section at .the dashed curves and diamonds represent one of the fcfs and its tangent points with the bcf .these will be used later in sec .v. the 1d - like structure of the attractor is apparent . only the thickening in some part of the attractor hints on 2d structures . 
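A minimal sketch of how the upward Poincare section and the x-parameterized points can be generated numerically is given below. It assumes the section plane z = r - 1 (the plane containing both nontrivial fixed points, as described above), a plain fixed-step RK4 integrator, and the parameter values sigma = 10, b = 8/3, r = 28; none of this is the authors' code, and the pure-Python loop is meant only to illustrate the construction, not to be efficient.

```python
import numpy as np

def lorenz(v, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = v
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4_step(v, dt, **p):
    k1 = lorenz(v, **p)
    k2 = lorenz(v + 0.5 * dt * k1, **p)
    k3 = lorenz(v + 0.5 * dt * k2, **p)
    k4 = lorenz(v + dt * k3, **p)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def upward_section(r=28.0, dt=1e-3, n_steps=500_000, transient=100_000):
    """Collect x at upward crossings (dz/dt > 0) of the plane z = r - 1."""
    v = np.array([1.0, 1.0, 1.0])
    xs = []
    for i in range(n_steps):
        v_new = rk4_step(v, dt, r=r)
        if i > transient and (v[2] - (r - 1)) < 0 <= (v_new[2] - (r - 1)):
            # linear interpolation of the crossing point
            s = ((r - 1) - v[2]) / (v_new[2] - v[2])
            xs.append(v[0] + s * (v_new[0] - v[0]))
        v = v_new
    return np.array(xs)

xs = upward_section()
# first return map of the x coordinate: plot xs[1:] against xs[:-1]
```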
ignoring the thickening for the time being, the 1d attractor may be parameterized by the coordinates only .collecting successive , we construct a first return map as shown in fig .[ hlzfig2 ] .it consists of four symmetrically located pieces with gaps on the mapping interval . for a first return map a gap belonging to both and plays no role in the dynamics . if necessary, we can use this specificity of return maps to squeeze some gaps in .furthermore , we can interchange the left subinterval with the right one by defining , e.g. , the precise value of the numerical constant is not essential ; it may be estimated from the upper bound of and is so chosen as to make the final figure look nicer . the swapped first return map , as we call it , is shown in fig . [ hlzfig3 ] .the corresponding tangent points between fcf and bcf ( the diamonds ) are also drawn on these return maps for later use .it is crucial that the parameterization and swapping do keep the 2d features present in the poincar map .this is important when it comes to take into account the 2d nature of the poincar maps . in fig .[ hlzfig4 ] poincar maps at 9 different values from to 203 are shown .the corresponding swapped return maps are shown in fig .[ hlzfig5 ] . generally speaking , as varies from small to greater values , these maps undergo transitions from 1d - like to 2d - like , and then to 1d - like again . even in the 2d - like rangethe 1d backbones still dominate .this partly explains our early success in applying purely 1d symbolic dynamics to the lorenz model .we will learn how to judge this success later on .some qualitative changes at varying will be discussed in sec .we note also that the return map at complies with what follows from the geometric lorenz model .the symbolic dynamics of this lorenz - like map has been completely constructed .all the return maps shown in fig . [ hlzfig5 ]fit into the family of lorenz - sparrow map .therefore , we take a general map from the family and construct the symbolic dynamics . there is no need to have analytical expression for the map .suffice it to define a map by the shape shown in fig .[ hlzfig6 ] .this map has four monotone branches , defined on four subintervals labeled by the letters , , , and , respectively .we will also use these same letters to denote the monotone branches themselves , although we do not have an expression for the mapping function . among these branches and increasing ; we say and have an even or _ parity_. the decreasing branches and have odd or parity . between the monotone branchesthere are `` turning points '' ( `` critical points '' ) and as well as `` breaking point '' , where a discontinuity is present . any numerical trajectory in this map corresponds to a symbolic sequence where , depending on where the point falls in .all symbolic sequences made of these letters may be ordered in the following way .first , there is a natural order next , if two symbolic sequences and have a common leading string , i.e. , where .since and are different , they must have ordered according to ( [ order1 ] ) .the _ ordering rule _ is : if is even , i.e. , it contains an even number of and , the order of and is given by that of and ; if is odd , the order is the opposite to that of and .the ordering rule may be put in the following form : where ( ) represents a finite string of , , , and containing an _ even _( _ odd _ ) number of letters and .we call and even and odd string , respectively . 
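The ordering rule just described can be written down directly in code. Since the four branch letters are not legible in the text, the sketch below uses the placeholder alphabet 'a' < 'b' < 'c' < 'd' together with an assumed parity assignment (two even and two odd branches); only the logic of the rule, not the letter names or their parities, is taken from the text.

```python
# Placeholder branch labels in their natural (spatial) order; which two branches
# carry even parity is an assumption for illustration, here the 1st and 4th.
ORDER = "abcd"
PARITY = {"a": +1, "b": -1, "c": -1, "d": +1}   # +1 even, -1 odd

def compare(s, t):
    """Return -1, 0 or +1 according to the ordering rule for symbolic sequences."""
    sign = +1                       # parity of the common leading string
    for x, y in zip(s, t):
        if x != y:
            natural = -1 if ORDER.index(x) < ORDER.index(y) else +1
            return natural * sign   # reverse the natural order after an odd prefix
        sign *= PARITY[x]
    return 0                        # one sequence is a prefix of the other

# Example: compare("abda", "abca") returns +1 because the common prefix "ab"
# is odd, so the natural order of 'd' and 'c' is reversed.
```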
in order to incorporate the discrete symmetry ,we define a transformation of symbols : keeping unchanged .sometimes we distinguish the left and right limit of , then we add .we often denote by and say and are mirror images to each other .symbolic sequences that start from the next iterate of the turning or breaking points play a key role in symbolic dynamics .they are called _ kneading sequences _ . naming a symbolic sequence by the initial number which corresponds to its first symbol , we have two kneading sequences from the turning points : being mirror images to each other , we take as the independent one . for first return maps the rightmost point in the highest point after the mapping .therefore , and , see fig .[ hlzfig6 ] .we take as another kneading sequence .note that and are not necessarily the left and right limit of the breaking point ; a finite gap may exist in between .this is associated with the flexibility of choosing the shift constant , e.g. , the number 36 in ( [ swap ] ) .since a kneading sequence starts from the first iterate of a turning or breaking point , we have a 1d map with multiple critical points is best parameterized by its kneading sequences .the dynamical behavior of the lorenz - sparrow map is entirely determined by a _ kneading pair _ .given a kneading pair , not all symbolic sequences are allowed in the dynamics . in order to formulate the admissibility conditions we need a new notion .take a symbolic sequence and inspect its symbols one by one .whenever a letter is encountered , we collect the subsequent sequence that follows this .the set of all such sequences is denoted by and is called a -shift set of .similarly , we define , and . the _ admissibility conditions _ , based on the ordering rule ( [ forder1 ] ) , follow from ( [ knead ] ) : here in the two middle relations we have canceled the leading or .the twofold meaning of the admissibility conditions should be emphasized . on one hand , for a given kneading pair these conditions select those symbolic sequences which may occur in the dynamics . on the other hand , a kneading pair , being symbolic sequences themselves , must also satisfy conditions ( [ admis ] ) with replaced by and .such is called a _compatible _ kneading pair .the first meaning concerns admissible sequences in the phase space at a fixed parameter set while the second deals with compatible kneading pairs in the parameter space . in accordance with these two aspectsthere are two pieces of work to be done .first , generate all compatible kneading pairs up to a given length .this is treated in appendix a. second , generate all admissible symbolic sequences up to a certain length for a given kneading pair .the procedure is described in appendix b. it is convenient to introduce a metric representation of symbolic sequences by associating a real number to each sequence .to do so let us look at the piecewise linear map shown in fig .[ hlzfig7 ] .it is an analog of the surjective tent map in the sense that all symbolic sequences made of the four letters , , , and are allowed .it is obvious that the maximal sequence is while the minimal one being . for this map one may further write to introduce the metric representation we first use to mark the even parity of and , and to mark the odd parity of and . 
next , the number is defined for a sequence as where or , it is easy to check that the following relations hold for any symbolic sequence : one may also formulate the admissibility conditions in terms of the metric representations .the family of the lorenz - sparrow map includes some limiting cases .+ . the branch may disappear , and the minimal point of the branch moves to the left end of the interval .this may be described as it defines the only kneading sequence from the next iterate of .the minimum at may rise above the horizontal axis , as it is evident in fig .[ hlzfig5 ] at .the second iterate of either the left or right subinterval then retains in the same subinterval .consequently , the two kneading sequences are no longer independent and they are bound by the relation both one - parameter limits appear in the lorenz equations as we shall see in the next section .now we are well prepared to carry out a 1d symbolic dynamics analysis of the lorenz equations using the swapped return maps shown in fig .[ hlzfig5 ] .we take as an working example .the rightmost point in and the minimum at determine the two kneading sequences : indeed , they satisfy ( [ admis ] ) and form a compatible kneading pair . using the propositions formulated in appendix b , all admissible periodic sequences up to period6 are generated .they are , , , , , , , , , and . herethe letter is used to denote both and .therefore , there are altogether 17 unstable periodic orbits with period equal or less than 6 . relying on the ordering of symbolic sequences and using a bisection method ,these unstable periodic orbits may be quickly located in the phase plane .it should be emphasized that we are dealing with unstable periodic orbits at a fixed parameter set .there is no such thing as superstable periodic sequence or periodic window which would appear when one considers kneading sequences with varying parameters .consequently , the existence of and does not necessarily imply the existence of .similar analysis may be carried out for other . in table[ table1 ] we collect some kneading sequences at different -values .their corresponding metric representations are also included .we first note that they do satisfy the admissibility conditions ( [ admis ] ) , i.e. , and at each make a compatible kneading pair . an instructive way of presenting the data consists in drawing the plane of metric representation for both and , see fig .[ hlzfig8 ] .the compatibility conditions require , in particular , , therefore only the upper left triangular region is accessible .
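The metric representation itself is garbled above. One standard construction that is consistent with the ordering rule, reusing ORDER, PARITY and compare() from the sketch further up, assigns each letter a digit in {0, 1, 2, 3} and reverses the digit whenever the preceding prefix has odd parity. The digit convention is an assumption; the original paper's exact formula is not legible in the text.

```python
def alpha(s):
    """A number in [0, 1) attached to a finite symbolic sequence, ordered
    consistently with compare() above (up to boundary sequences)."""
    value, weight, sign = 0.0, 0.25, +1
    for letter in s:
        digit = ORDER.index(letter)
        if sign < 0:                 # odd prefix: reverse the digit
            digit = 3 - digit
        value += digit * weight
        weight /= 4.0
        sign *= PARITY[letter]
    return value

# Sequences can now be sorted, and periodic orbits located by bisection on
# alpha, e.g. sorted(["abda", "abca", "adcb"], key=alpha).
```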
Recent progress in the symbolic dynamics of one- and especially two-dimensional maps has enabled us to construct symbolic dynamics for systems of ordinary differential equations (ODEs). Numerical study under the guidance of symbolic dynamics can yield global results on chaotic and periodic regimes in dissipative ODEs that cannot be obtained by purely analytical means or by numerical work alone. By constructing the symbolic dynamics of 1D and 2D maps from the Poincaré sections, all unstable periodic orbits up to a given length at a fixed parameter set may be located, and all stable periodic orbits up to a given length may be found in a wide parameter range. This knowledge, in turn, tells much about the nature of the chaotic limits. Applied to the Lorenz equations, this approach has led to a nomenclature, i.e., absolute periods and symbolic names, of the stable and unstable periodic orbits of an autonomous system. Symmetry breakings and restorations as well as the coexistence of different regimes are also analyzed using symbolic dynamics.
a random -satisfiability ( -sat ) formula is constructed by adding constraints on variables .each variable has a spin state , and each constraint applies to different variables that are randomly chosen from the whole set of variables .the energy of constraint is expressed as \ , \ ] ] where denotes the set of variables involved in constraint ( the size of is ) , is the preferred spin state of constraint on variable .each of the preferred spin values of a constraint are randomly and independently assigned a value or with equal probability .the whole set of preferred spin values are then fixed , but the actual spin state of each variable is allowed to change .the constraint energy is zero if at least one of the variables takes the spin value , otherwise .given a random -sat formula , the task is to construct at least one spin configuration that satisfies all the constraint ( i.e. , makes all the constraint energy to be zero ) , or to prove that no such solutions ( i.e. , satisfying spin configurations ) exist . when is large , rigorous mathematical proofs ( see , e.g. , review article ) and numerical simulations revealed that whether a random -sat formula is satisfiable or not depends on the constraint density ( ) .as increases beyond certain satisfiability threshold , the probability of a randomly constructed -sat formula to be satisfiable quickly drops from being close to unity to being close to zero . among the whole ensemble of random -sat formulas ,the satisfiability of those instances with constraint density in the vicinity of is the hardest to determine .this empirical observation has stimulated a lot of investigations .the random -sat problem was intensively studied in the statistical physics community during the last decade .the number of solutions for the random -sat problem as a function of constraint density was calculated in ref . by the replica method of spin - glass physics .later the satisfiability threshold as a function of was calculated by the first - step replica - symmetry - breaking ( 1rsb ) energetic cavity method .the evolution of the solution space of the random -sat problem with constraint density was studied in refs . using the 1rsb entropic cavity method .before the satisfiability threshold is reached , the solution space experiences a clustering transition at , where exponentially many solution clusters ( gibbs states ) form and the ergodicity of the solution space is broken .this clustering transition is followed by a condensation transition at , where a sub - exponential number of solution clusters begin to dominate the solution space . within a solution cluster ,the spin states of a large fraction of variables start to be frozen to the same value as exceeds certain threshold value that may be different for different clusters .the freezing transition was investigated in refs .some of these phase transitions ( i.e. , the clustering and the condensation transition ) were earlier found to occur in mean - field -body - interaction spin glasses , with temperature ( instead of the constraint density ) being the control parameter . in the present paper ,we review our recent efforts on the solution space fine structures of the random -sat problem .a heterogeneity transition is predicted to occur in the solution space as the constraint density reaches a critical value which is smaller than .this transition marks the emergency of exponentially many solution communities in the solution space . 
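The constraint energy defined at the start of this passage is garbled. A standard way of writing it, consistent with the verbal definition (the energy of a constraint is zero if and only if at least one of its variables takes its preferred value), is given below, where the preferred spin of constraint a on variable i is denoted J_i^a in {-1, +1}; the notation is an assumption about what the original displayed.

```latex
E_a(\vec\sigma) \;=\; \prod_{i \in \partial a} \frac{1 - J_i^a \,\sigma_i}{2} ,
\qquad
E(\vec\sigma) \;=\; \sum_{a=1}^{M} E_a(\vec\sigma) ,
```

so that E_a = 1 only when every variable of constraint a takes the value opposite to its preferred one, and E_a = 0 otherwise.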
for ,the heterogeneous solution space is ergodic ; at the solution communities will turn into different solution clusters as an ergodicity - breaking transition occurs .the existence of solution communities in the solution space is confirmed by numerical simulations on single -sat formulas , and the effect of solution space heterogeneity on a stochastic local search algorithm seqsat , which performs a random walk of single - spin flips , is investigated . beyond the clustering transition point ,our numerical simulation results suggested that the individual solution clusters of the solution space also have rich internal structures .the replica - symmetric cavity method is used in the next section to calculate the value of for the onset of solution space structural heterogeneity .section [ sec : solution - sampling ] presents the data - clustering results on sampled solutions of single random -sat formulas ; section [ sec : seqsat ] reports the simulation results of the stochastic search algorithm seqsat .we conclude this work in sec .[ sec : conclusion ] and point out some possible links with the phenomena of two - step relaxation and dynamical heterogeneity in supercooled liquids .given a random -sat formula , the total energy is equal to the number of violated constraints by the spin configuration .the whole set of spin configurations with form the solution space of this -sat formula .the similarity between any two solutions and of the space can be measured by an overlap parameter defined as to characterize the statistical property of the solution space , we count the total number of solution - pairs with overlap value and denote this number as . for a random -sat formula of size and constraint density , the size of is exponential in , and is also exponential in .it is helpful to define an entropy density as ] ( fig .[ fig : community ] , left panel ) , where is the most probable solution - pair overlap value , then for each there is only one mean overlap , and is a continuous function of . on the other hand , if is non - concave in $ ] ( fig .[ fig : community ] , middle and right panel ) , then the value of changes discontinuously at certain value of .we have exploited this correspondence between the non - concavity of and the discontinuity of to determine the threshold constraint density at which starts to be non - concave .we regard as the point at which the solution space of the random -sat problem transits into structural heterogeneity .this is because , as schematically shown in fig .[ fig : community ] , at it starts to make sense to distinguish between intra - community overlap values and inter - community overlap values . as shown in sec .[ sec : solution - sampling ] , many solution communities can be identified in a heterogeneous solution space .each solution community contains a set of solutions which are more similar with each other than with the solutions of other communities .these differences of intra- and inter - community overlap values and the relative sparseness of solutions at the boundaries between solution communities cause the non - concavity of . for the random -sat problem ,we use the replica - symmetric cavity method of statistical mechanics to calculate the mean overlap values at each value of . as the partition function eq .( [ eq : partition_function ] ) is a summation over pairs of solutions , the state of each vertex is a pair of spins . details of this calculation can be found in ref . and here we cite the main results . 
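The overlap, the entropy density, the two-replica partition function with binding field x, and the overlap susceptibility used in the next passage all appear garbled above. A reconstruction consistent with the surrounding discussion is the following; the normalization conventions are assumptions.

```latex
q(\vec\sigma, \vec\sigma') \;=\; \frac{1}{N} \sum_{i=1}^{N} \sigma_i \sigma_i' ,
\qquad
s(q) \;=\; \lim_{N\to\infty} \frac{1}{N} \ln \mathcal{N}(q) ,
```

```latex
Z(x) \;=\; \sum_{\vec\sigma,\,\vec\sigma' \in \mathcal{S}}
           e^{\,N x\, q(\vec\sigma,\vec\sigma')} ,
\qquad
\overline{q}(x) \;=\; \frac{1}{N}\,\frac{\partial \ln Z(x)}{\partial x} ,
\qquad
\chi(x) \;=\; N \bigl[ \langle q^2 \rangle_x - \langle q \rangle_x^2 \bigr] ,
```

where the sum runs over pairs of solutions, and the averages in the susceptibility are taken over solution pairs under the binding field x.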
the mean overlap and the susceptibility at constraint density value for the random -sat problem . in ( a ) increases from to ( right to left ) with step size .the insets of ( b ) demonstrate that the peak value of diverges inverse linearly with and as the critical point is approached ., title="fig:",scaledwidth=49.8% ] the mean overlap and the susceptibility at constraint density value for the random -sat problem . in ( a ) increases from to ( right to left ) with step size .the insets of ( b ) demonstrate that the peak value of diverges inverse linearly with and as the critical point is approached ., title="fig:",scaledwidth=49.0% ] figure [ fig:3sat - population]a shows , for the random -sat problem , the form of the function at several different values .when , the mean overlap increases with the binding field smoothly , indicating that the entropy density function is concave in shape ( fig .[ fig : community ] , left panel ) .the solution space of the random -sat problem is then regarded as homogeneous .when , there is a hysteresis loop in the curve as the binding field increases and then decreases around certain threshold value .this behavior is typical of a first - order phase - transition . at the partition function contributed mainly by intra - community solution pairs , while at it is contributed mainly by inter - community solution pairs . to determine precisely the critical value , we investigate the overlap susceptibility , which is a measures of the overlap fluctuations , \ , \ ] ] where means averaging over solution - pairs under the binding field . from the divergence of the peak value of as shown in fig .[ fig:3sat - population]b , we obtain that for the random -sat problem .this value is much below the value of . for the random -sat problem, we find that .this value is again much below the clustering transition point .the difference appears to be an increasing function of .for single random -sat formulas , the solution space structural heterogeneity can also be detected by performing a long - time random walking in the corresponding solution graphs .the hamming distance between two solutions and of the solution space is defined as \ ] ] where if and if .this distance counts the number of different spins between the two solutions .the hamming distance is related to the overlap [ eq . 
( [ eq : q - def ] ) ] through in the solution graph of a satisfiable random -sat formula , a solution is linked to other solutions , all of which have unit hamming distance with .it was empirically found that the degrees of the solutions are narrowly distributed with a mean much less than .the solutions can therefore be regarded as equally important in terms of connectivity .however , the connection pattern of the solution graph can be highly heterogeneous .even when the whole graph is connected , solutions may still form different communities such that the edge density of a community is much larger than that of the whole graph .the communities may even further organize into super - communities .consider two solutions and of the solution graph .the shortest - path length between these two solutions in the solution graph satisfies the inequality .if and belong to the same solution community , may be equal to or just slightly greater than the hamming distance .on the other hand , if and belong to two different solution communities , an extensive spin rearrangement may have to be made to change from to by single - spin flips , and then is much greater than .if solutions are sampled by a random walking process at equal time interval , the sampled solutions should contain useful information about the community structure of the solution space , with a resolution level depending on .this is because that , when the edges are followed randomly by a random walker , the walker will be trapped in different communities most of the time , and the sampled solutions will then form different similarity groups .starting from an initial solution , a sequences of solutions are generated by solution graph random walking .two different random walking processes are used in the simulation . in the _ unbiased _ random walking process ,the solution is a nearest neighbor of solution for . under this simple dynamics , if the generated solution sequence is infinitely long , each solution in a connected component of the solution graphwill appear in the sequence with frequency proportional to its connectivity .on the other hand , in the _ uniform _ random walking process , with probability , the solution ( ) is a nearest neighbor of solution , and with the remaining probability , solution is identical to . under this later dynamical rule ,each solution in a connected component of the solution graph will have the same frequency to appear in the generated solution sequence .both these two random walking processes were used to sample solutions , and the clustering analysis performed on these two sets of data gave qualitatively the same results . the simulation results shown in fig .[ fig : transition ] were obtained by the unbiased random walking process .the matrix of hamming distances of sampled solutions for a random -sat formula of variables , and the corresponding overlap distribution of these sampled solutions .the solutions were obtained by an unbiased random walking process ( solution sampling began after running the random walking process for steps starting from an initial solution , and two consecutively sampled solutions are separated by random walking steps ) .the left panel corresponds to and the right panel to . in the hammingdistance matrices , the solutions are ordered according to the results of the minimal - variance clustering analysis . ] a set of solutions are sampled ( with equal time interval ) from the generated long sequence of solutions for clustering analysis . 
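The unbiased random walk on the solution graph and the subsequent minimum-variance clustering of the sampled solutions can be sketched as follows. The clause representation (a clause as a list of (variable, preferred spin) pairs) and the use of scipy's Ward linkage are illustrative choices, not the authors' implementation; for spin vectors with entries plus or minus one the squared Euclidean distance is four times the Hamming distance, so Ward linkage on the raw vectors reproduces a Hamming-based minimum-variance clustering.

```python
import random
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist, squareform

def violated(clause, sigma):
    """clause: list of (variable index, preferred spin); violated iff no
    variable of the clause takes its preferred value."""
    return all(sigma[i] != j for i, j in clause)

def flippable(i, sigma, clauses_of):
    """Variable i is flippable iff flipping it leaves all its clauses satisfied."""
    sigma[i] = -sigma[i]
    ok = not any(violated(c, sigma) for c in clauses_of[i])
    sigma[i] = -sigma[i]
    return ok

def unbiased_walk(sigma, clauses_of, n_steps, sample_every):
    """Walk on the solution graph by single-spin flips; sample at equal intervals.
    Assumes the walker always has at least one flippable variable."""
    samples = []
    for t in range(n_steps):
        nbrs = [i for i in range(len(sigma)) if flippable(i, sigma, clauses_of)]
        sigma[random.choice(nbrs)] *= -1        # jump to a uniformly chosen neighbour
        if t % sample_every == 0:
            samples.append(list(sigma))
    return np.array(samples)

def community_picture(samples):
    """Minimum-variance (Ward) clustering of sampled solutions and the
    reordered Hamming-distance matrix; block structure indicates communities."""
    Z = linkage(samples, method="ward")
    order = leaves_list(Z)                       # leaf ordering of the dendrogram
    ham = squareform(pdist(samples, metric="hamming")) * samples.shape[1]
    return ham[np.ix_(order, order)]
```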
( to avoid strong dependence on the input solution , the initial part of the solution sequence was not used for sampling . ) by calculating the overlap values between the sampled solutions , we obtain an overlap histogram as shown in fig .[ fig : transition ] .a hierarchical minimum - variance clustering analysis ( see also ref . ) is performed on the sampled solutions .initially each solution is regarded as a group , and the distance between two groups is the hamming distance . at each step of the clustering , two groups and that have the smallest distanceare merged into a single group .the distance between the merged group and another group is calculated by where is the number of solutions in group .after the sampled solutions are listed in the order as reported by the minimal - variance clustering algorithm , the matrix of hamming distances between these solutions are represented in a graphical form as shown in fig .[ fig : transition ] ( upper row ) . for a random -sat formula with variables , the overlap histogram and the hamming distance matrix of sampled solutionsare shown in fig .[ fig : transition ] for two different constraint density values and in the ergodic phase . at ,the hamming distance matrix has a weak signature of the existence of many solution communities , and the overlap histogram is slightly non - concave . at ,the hamming distance matrix has a very clear block structure and the overlap histogram is non - monotonic .these observations suggest that the explored solution spaces are heterogeneous both at and ; the entropic trapping effect of the solution communities becomes much stronger as increases from to .the simulation results are consistent with the analytical results of the preceding section , they are also in agreement with the expectation that , as is approached from below , a random walker becomes more and more trapped in a single solution community and finally becomes impossible to escape from this single community at .we have obtained similar simulation results on single random -sat formulas .the solution space random walking simulations confirm that the solution space of the random -sat problem is already very heterogeneous before the clustering transition point is reached .the simulation results reported in ref . 
also suggested that , at , the single solution clusters of the solution space of the random -sat problem are themselves quite heterogeneous in internal structure , which may be the reason underlying the non - convergence of the belief - propagation iteration process within a single solution cluster .after a slight change , the solution graph random walking process of the preceding section was turned into a stochastic local search algorithm .this algorithm , referred to as seqsat , satisfies sequentially the constraints of a random -sat formula .we denote by the sub - formula containing the first constraints of .suppose a configuration that satisfies is reached at time .the -th constraint of formula is added to to obtain the enlarged sub - formula .then , starting from , an unbiased random walk process is running on the solution graph of until a spin configuration that satisfies is first reached at time after single - spin flips , being the total number of variables in formula .the waiting time of satisfying the -th constraint is ( this time is zero if already satisfies ) .starting from a completely random initial spin configuration and an empty sub - formula , every constraint of the formula are satisfied by seqsat in this way .notice that if a constraint was satisfied by seqsat , it remains to be satisfied as new constraints are added ( energy barrier crossing is not allowed ) .figure [ fig:34sat ] shows the simulation results of seqsat on a random -sat formula and a random -sat formula , .when the constraint density of the satisfied sub - formula is low , the waiting time needed to satisfy a constraint is very close to zero .as the heterogeneity transition point is reached , however , the waiting time increases quickly and it starts to take more than single - spin flips to satisfy a constraint . the search process becomes more and more slow as further increases .as approaches another threshold value the waiting time is so long ( exceeding single - spin flips ) that essentially stops to satisfy a newly added constraint .the parameter is regarded as the jamming point of the random walk search algorithm . in the range of seqsat is performing an increasingly viscous diffusion in the solution space of the random -sat formula .the simulation results of fig .[ fig:34sat ] and those reported in ref . for single random -sat and -sat formulas clearly demonstrate that , the solution space heterogeneity transition at has significant dynamical consequences for stochastic local search processes . for the random -sat problem ,the jamming point is significantly larger than the clustering transition point . at the solution space of a large random -sat formulais dominated by a sub - exponential number of solution clusters .a dominating solution cluster also has internal community structures . 
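A compact sketch of SEQSAT as described above: clauses are added one at a time, and after each addition an unbiased random walk of single-spin flips is run, never violating an already-satisfied clause, until the new clause is also satisfied. It reuses violated() and flippable() from the earlier sketch and is an illustration, not the authors' reference implementation; the normalization of the waiting time by the number of variables follows the verbal description.

```python
def seqsat(n_vars, clauses, max_flips=10**7):
    """Satisfy the clauses one at a time; previously satisfied clauses are
    never violated (no energy-barrier crossing is allowed)."""
    sigma = [random.choice([-1, +1]) for _ in range(n_vars)]
    clauses_of = [[] for _ in range(n_vars)]   # protected clauses, per variable
    waiting = []                               # waiting time per clause, in sweeps
    for clause in clauses:
        flips = 0
        while violated(clause, sigma):
            nbrs = [i for i in range(n_vars) if flippable(i, sigma, clauses_of)]
            if not nbrs or flips > max_flips:  # effectively jammed
                return sigma, waiting, False
            sigma[random.choice(nbrs)] *= -1
            flips += 1
        for i, _ in clause:
            clauses_of[i].append(clause)       # from now on this clause is protected
        waiting.append(flips / n_vars)
    return sigma, waiting, True
```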
as further increases , a dominating cluster shrinks in size , and it break into many sub - clusters .figure [ fig:34sat ] ( left panel ) indicates that , the random walker of seqsat is residing on one of the dominating clusters at and it continue to be residing on one of the dominating sub - clusters of the visited cluster as increases .if the spin values of a large fraction of variables become frozen in the residing solution cluster , a jamming transition then occurs .the value of is predicted to be by a long - range frustration mean - field theory , in agreement with the simulation results .the total search time needed to satisfy the first constraints of a random -sat formula with variables ( , left panel ; , right panel ) .the performances of an unbiased random - walk search process and a biased random - walk search process are compared .the red dashed lines correspond to the jamming transition points as predicted by a long - range - frustration mean - field theory , the green dotted lines correspond to the clustering transition points , and the black solid lines correspond to the satisfiability threshold value . , title="fig:",scaledwidth=45.0% ]the total search time needed to satisfy the first constraints of a random -sat formula with variables ( , left panel ; , right panel ) .the performances of an unbiased random - walk search process and a biased random - walk search process are compared .the red dashed lines correspond to the jamming transition points as predicted by a long - range - frustration mean - field theory , the green dotted lines correspond to the clustering transition points , and the black solid lines correspond to the satisfiability threshold value ., title="fig:",scaledwidth=45.0% ] for the random -sat problem with , the simulation results shown in fig .[ fig:34sat ] ( right panel ) and in ref . suggest that the jamming transition point is identical to or very close to the clustering transition point .we notice that for the random -sat problem with , at , the union of an exponential number of small solution clusters is contributing predominantly to the solution space .it is expected that , as approaches from below , the ergodic solution space of a large -sat formula is dominated by an exponential number of small solution communities . the solutions reached by seqsatprobably are residing on one of these small communities .as , each of these small ( but statistically relevant ) solution communities probably contains a large fraction of variables that are almost frozen .figure [ fig:34sat ] also shows the performances of a biased random walk search algorithm .the biased random walk process differs from the unbiased random walk process in that , in each single - spin flip , a variable that is flippable but not yet being flipped is flipped with priority .this biased random walk seqsat algorithm was implemented with the hope of escaping from a solution community more quickly . 
for the random -sat and -sat problem, the biased seqsat algorithm indeed is more efficient than the unbiased algorithm .but for the random -sat problem with , the biased algorithm diverges earlier than the unbiased algorithm .the solution space structure of the random -sat problem evolves with the constraint density .several qualitative transitions occurs in the solution space as becomes relatively large .we demonstrated , both theoretically and by computer simulations , that the first qualitative structural transition is the heterogeneity transition at , where exponentially many solution communities start to form in the ( still ergodic ) solution space .the dynamic behavior of a stochastic search algorithm was investigated .this simple algorithm seqsat constructs satisfying spin configurations for a single -sat formula by performing a random walk of single - spin flips . due to the entropic trapping effect of solution communities ,the solution space random walking process starts to be very viscous as goes beyond .seqsat is able to find solutions for a random -sat formula with constraint density less than a threshold value . for ,the jamming point is larger than the solution space clustering transition point .but for , it appears that is very close to .when the constraint density of a large random -sat formula is in the region of , the solution space of the formula is heterogeneous but ergodic .if a random walking process is running on the solution space starting from an initial solution , a two - step relaxation behavior can be observed , corresponding to a quick slipping into a solution community , a relatively long wandering within this community , and finally the viscous diffusing in the whole solution space .similar two - step relaxation behaviors were observed in glassy dynamics studies of supercooled liquids ( see , e.g. , the review article ) both in the non - activated dynamics region and in the activated dynamics region .the existence of solution communities in the solution space may lead to a phenomenon that is similar to the dynamical heterogeneity of supercooled liquids ( see , e.g. , review articles ) . for a random walking process on a heterogeneous solution space ,if the observation time is less than the typical relaxation time of escaping from the solution communities , one may find that the spin values of a large fraction of variables change only very infrequently among the visited solutions , while the spin values of the remaining variables are flipped much more frequently .the variables of the -sat formula then divides into an active group and an inactive group in this observation time window .one may further observe that the active variables are clustered into many distantly separated sub - groups , each of which containing variables that are very close to each other .numerical simulations need to be performed to investigate in more detail the solution space random walking processes .the constraint density of the random -sat problem corresponds to the particle density in supercooled liquid . 
In supercooled liquids, another important control parameter is the temperature. We can also introduce a positive temperature to the random K-SAT problem and investigate how the configuration space evolves with it. It is anticipated that, as the temperature is lowered to a threshold value, the configuration space will also experience a heterogeneity transition. Detailed theoretical and simulation results will be reported in a later work. For the fully connected p-spin spherical model of glasses, it has already been shown that a quantity called the Franz-Parisi potential changes its concavity as the temperature is lowered (see also more recent extended studies). Another related problem, the weight-space property of the Ising perceptron, has also been studied; for this fully connected model a change of concavity was likewise predicted for its characteristic function. The solution space heterogeneity transition also exists in the random K-XORSAT problem. It may be a general feature of the configuration spaces of spin-glass models on finite-connectivity random graphs.

The author thanks Kang Li, Hui Ma and Ying Zeng for collaborations. This work was partially supported by the National Science Foundation of China (grant numbers 10774150 and 10834014) and the China 973-Program (grant number 2007CB935903).

Reference: Kirkpatrick S and Selman B 1994 Science 264 1297-1301.

A random K-SAT formula can be represented by a bipartite graph. In this graph, a circular node represents a variable and a square node represents a constraint; an edge between a variable node i and a constraint node a means that variable i is involved in constraint a. The distance between two variable nodes i and j is defined as the number of constraint nodes on a shortest-length path between them. For example, if i and j are both constrained by the same constraint a, the distance is 1; if they are not constrained by a common constraint, but i and a variable k are involved in constraint a while k and j are involved in constraint b, the distance is 2.
The random K-satisfiability (K-SAT) problem is an important problem for studying the typical-case complexity of NP-complete combinatorial satisfaction; it is also a representative model of finite-connectivity spin glasses. In this paper we review our recent efforts on the fine structure of the solution space of the random K-SAT problem. A heterogeneity transition is predicted to occur in the solution space as the constraint density reaches a critical value. This transition marks the emergence of exponentially many solution communities in the solution space. After the heterogeneity transition the solution space remains ergodic until the constraint density reaches a larger threshold value, at which the solution communities disconnect from each other and become distinct solution clusters (ergodicity breaking). The existence of solution communities in the solution space is confirmed by numerical simulations of solution-space random walking, and the effect of solution-space heterogeneity on a stochastic local search algorithm, SEQSAT, which performs a random walk of single-spin flips, is investigated. The relevance of this work to studies of glassy dynamics is briefly mentioned.
understanding the dynamics of an evolving population structure has long been the goal of population genetics .several authors have constructed probabilistic models to study allele frequency distributions in populations subject to mutation , selection , and genetic drift .the mathematical analysis of these models leads to an improved understanding of the underlying system , and has been crucial for the interpretation of the laws of evolution .this is most evident in the quantitative analysis of cancer , which has seen numerous studies throughout the 20th century that addressed the kinetics of cancer initiation and progression . due to these and other studies ( see for a review ) , we now know that human cancer initiates when cells within a proliferating tissue accumulate a certain number and type of genetic and/or epigenetic alterations. these alterations can be point mutations , amplification and deletion of genomic material , structural changes such as translocations , loss or gain of dna methylation and histone modifications , and others .the dynamics of mutation acquisition is governed by evolutionary parameters such as the rate at which alterations arise , the selection effect that these alterations confer to cells , and the size of the population of cells that proliferate within a tissue .much effort has been devoted to model these processes mathematically and computationally , and to analyze the rates at which mutations arise within pre - cancerous tissues . in particular ,several investigators have studied the dynamics of two mutations arising sequentially in a population of a fixed finite number of cells .this scenario describes the inactivation of a tumor - suppressor gene ( tsg ) which directly regulates the growth and differentiation pathways of the cells .this may or may not lead directly to cancer .cells in which the tsg is inactivated can take a variety of fitness values .for instance embryonic retina cells with an inactivated rb1 gene can proliferate uncontrollably and create retinoblastomas . by definitionthese cells have a higher fitness than the wild - type cells . alternatively ,if chromosomal instability ( cin ) is taken into account , cells with deactivated tsg can have a lower fitness than the wildtype .empirical evidence for the exact fitness ( dis)advantage conferred to cells as a result of accumulating mutations is in general difficult to obtain , since _ in vitro _growth assays of non - transformed cells are challenging .for this reason and in order to provide general methods , the modeling literature has addressed a range of fitness values for single- and double - mutant cells ( e.g. ) . subsequent modeling work on mutation acquisition has revealed a more detailed picture ; a homogeneous population harboring no mutations can move to a homogeneous state in which all cells carry two mutations without ever visiting a homogeneous state in which all cells harbor just one mutation .this phenomenon is referred to as ` stochastic tunneling ' and represents an additional route to the homogeneous state with two mutations ; the sequential route is still available to the system , but it becomes less likely in certain parameter regimes . 
in this contextthe term ` tunneling ' refers only to overlapping transitions between the homogeneous states , it does not imply a statement about the structure of the underlying fitness landscape .the process we refer to as ` tunneling ' is not limited to valley - crossing scenarios .[ fig : fig1]a provides a schematic illustration of the tunneling process . as with much of the existing literature on the stochastic tunneling, our work is not just limited to the case of cancer initiation .instead our results are related and applicable to more general scenarios in population genetics , including situations in which a heterogeneous population is maintained through mutation - selection balance , or the case of muller s ratchet when increasingly deleterious mutations become fixed .so far , most analytical investigations of stochastic tunneling have been limited to considering transitions between homogeneous ( or monomorphic ) states of the population , as indicated in fig .[ fig : fig1]a .these investigations were performed assuming that cells proliferate according to the moran process - a stochastic model of overlapping generations in which one cell division and one death event occur during each time step . analyzed the effect of the population size and mutation rates on the rate of appearance of a single cell with two mutations .these authors noted that for small , intermediate , and large populations , it takes two hits , one hit and zero hits , respectively , for a cell to accumulate two mutations . herea hit is defined as a rate - limiting step , such as the appearance of an alteration when mutation rates are small .considering fixation of cells with two mutations , , and obtained explicit expressions for the tunneling rate .subsequently , used the assumption of independent lineages ( i.e. , individual lineages of cells harboring one mutation were considered to behave independently from each other ) to compute the probability distribution for the time of emergence of a single second mutant in intermediate or large populations .used a similar branching - process approach to derive a further tunneling rate .other types of dynamics such as the wright - fisher process have been studied as well , see e.g. . in the wright - fisher process , cell generationsare assumed to be non - overlapping , so that many birth and death events occur during each time step . 
using this process , determined the critical population sizes for sequential fixation or stochastic tunneling , and calculated the rate of tunneling as a function of the mutation rates , population size , and relative fitness of cells harboring one mutation .finally , these results were extended to investigate the effects of recombination , or sexual reproduction , on the rate of stochastic tunneling .these authors found that the time to establishment of the double - mutant cells can be reduced by several orders of magnitude when sexual reproduction is considered .recently , studied stochastic tunneling in a model which was not built upon the homogeneous - state approach .the author constructed a mutational network to study gene duplication .although the setting of the model is very different from the setting we consider here , the underlying principles are similar .the existing approaches for the moran model in an asexual population provide accurate analytical approximations for a subset of the parameter space .we present a systematic overview of the scope of existing quantitative work .there are extensive regions of parameter space which , up to date , have been left unexplored by analytical approaches .these are predominantly situations in which the double mutant is not the most advantageous in the sequence . before the double mutant reaches fixation , the population has to travel across a fitness hill or move constantly downhill in fitness space , as illustrated in fig .[ fig : fig1]b .the dynamics can then become trapped in quasi - equilibria a consequence of the mutation - selection balance . in these long - lived equilibria ,the population is heterogeneous , and so previous approaches are not justified . throughout our paper , these states are referred to as ` metastable ' .when these states exist , fixation is driven purely by demographic fluctuations .we address this regime based on ideas from mathematical statistical physics .specifically we employ the wentzel - kramers - brillouin ( wkb ) method to derive quantitative predictions for fixation times .examples of using the wkb method to describe the escape from metastable states include the computation of mixing times in evolutionary games , investigating extinction processes in coexisting bacteria or predator - prey systems , and investigating epidemic models . in the presence of recombination , have shown that metastable states can appear when recombination rates are large , even if the double mutant is advantageous .the authors note that the time to escape across this ` recombination barrier ' increases exponentially with system size , as explained based on a wkb approach . in this paper , we first classified the generic types of behavior that can occur in a population of cells which acquire two subsequent mutations in a moran process : we determined when metastable states occur , when fixation is driven by intrinsic noise as opposed to deterministic flow , and where in parameter space fixation occurs in several subsequent hits .this classification was achieved by systematically studying the underlying deterministic dynamics of the process .we then obtained expressions for fixation times in parameter regimes which could not be captured by previous methods , i.e. 
regimes in which metastable states are found .we thus employed the wkb method to provide a more complete analytical description of the fixation dynamics in these parameter regimes .our work fills the gap left by the existing literature and leads to a more comprehensive understanding of mutation acquisition and stochastic tunneling in evolving populations .we considered a well - mixed , finite population of cells .each cell can be of one of three possible types , labeled type 0 a wild - type cell harboring no mutations , type 1 a cell harboring one mutation , and type 2 a cell harboring two mutations .initially , all cells are of type 0 . the evolution of the population is determined by a moran process . during each elementary timestep of this process , a cell is randomly chosen to reproduce proportional to its fitness . in the same timestep another cell is randomly removed , such that the total population size remains constant .the daughter cell can either inherit its type from the parent , or acquire a mutation during division .the relative fitness values of type-0 , type-1 and type-2 cells are denoted by , and . without loss of generality, we use throughout .the mutation rates and denote the probability that the daughter of a type-0 cell is of type-1 , and the probability that the daughter of a type-1 cell is of type-2 , respectively .we neglect all other combinations of mutations .the assumption of no back - mutation is commonly used in the population genetics literature .it is justifiable since the human genome is very large , base pairs , and the probability of mutating a specific base per cell division very small , to .therefore the chance of undoing a specific point mutation is vanishingly small .the probability that a second critical alteration occurs at a different locus is much higher . in our model, finite populations will eventually reach a state in which all cells have acquired two mutations .this state is ` absorbing ' , i.e. once this state has been reached , no further dynamics can occur .there are of course physical processes beyond the second mutation . in pre - cancerous tissues for example , there will be a finite probability that cells progress from this state to accumulate further changes ( see discussion ) .these processes are not the focus of our work though , and so are not included in the model . and . the second example has , and as a result this landscape permits a metastable state ( a fixed point in the deterministic dynamics as discussed below).,scaledwidth=80.0% ] let us denote the number of type-0 , type-1 , and type-2 cells by , and , respectively ; we have .the transition rates for the moran process are given by the quantity is the average fitness in the population .the transition labeled ` ' represents a process in which a cell of type is replaced by a cell of type . in a process labeled` ' , for example , the state of the population changes from to state . 
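the displayed transition rates did not survive extraction, so the sketch below is a hedged reconstruction of the elementary update they encode : a parent is chosen proportional to fitness, its daughter mutates with probability u1 ( type 0 to type 1 ) or u2 ( type 1 to type 2 ), and a uniformly chosen cell is removed . the total event rate of n per generation, and the convention that the removed cell may coincide with the parent, are assumptions of this sketch ( they change the rates only at order 1/n ) ; parameter values are illustrative ( a region-v-like fitness staircase, so that fixation is reached quickly — in the metastable regimes discussed below fixation times grow exponentially with n and direct simulation becomes expensive ) . the rate decomposition quoted next in the text applies to exactly these elementary events .

```python
import numpy as np

def simulate_moran(N=300, r=(1.0, 1.05, 1.1), u=(1e-3, 1e-3),
                   t_max=1000.0, seed=0):
    """Continuous-time simulation of the three-type Moran model described in
    the text: fitness-proportional birth, mutation 0->1 (prob. u[0]) or
    1->2 (prob. u[1]) at division, uniformly random death, constant size N.
    Assumption: total event rate N per generation, so time is measured in
    cellular generations."""
    rng = np.random.default_rng(seed)
    r = np.asarray(r, dtype=float)
    n = np.array([N, 0, 0], dtype=int)       # all cells start as wild type
    t, steps, history = 0.0, 0, [(0.0, n.copy())]
    while t < t_max and n[2] < N:            # stop at fixation of type 2
        t += rng.exponential(1.0 / N)        # waiting time between events
        parent = rng.choice(3, p=r * n / np.dot(r, n))
        daughter = parent
        if parent < 2 and rng.random() < u[parent]:
            daughter = parent + 1            # mutation at division
        victim = rng.choice(3, p=n / N)      # removed cell, chosen uniformly
        n[daughter] += 1
        n[victim] -= 1
        steps += 1
        if steps % N == 0:                   # record one snapshot per generation
            history.append((t, n.copy()))
    return history

trajectory = simulate_moran()
print(trajectory[-1])                        # final (time, cell counts)
```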
as an example , the first reaction rate , , in eq .( [ eq : transitionrates ] ) can be broken down as follows : a type-0 cell is chosen to reproduce at rate .the offspring does not mutate with probability .finally , a type-1 cell is chosen to be removed at rate .the rates for the other processes can be interpreted analogously .we choose a continuous - time setup , and correspondingly all rates in eq .( [ eq : transitionrates ] ) scale linearly in the population size .simulations are carried out using a standard gillespie algorithm , and times are measured in cellular generations .this process is described exactly by a master equation , which governs the behavior of the probability , , that the population is in state at time , and is given by .\label{eq : masterequation}\ ] ] the vector indicates a change in the composition of the population due to the corresponding reaction , and represents the partial derivative of with respect to time .this equation states that the probability that the population is in state at time increases due to transitions into the state and decreases due to transitions out of the state . the master equation contains full information about the stochastic population dynamics .in particular , the detailed statistics of the population at any time can be derived from it , and it captures effects driven by intrinsic noise , such as extinction and fixation . obtaining a full solution of the master equationis difficult or impossible though in all but the simplest of cases . as a starting point, it is often useful to first consider the deterministic limit of infinite populations . in this limit, the distribution is sharply peaked around its average , and so the dynamics reduces to a set of equations for this mean .this approach does not capture any of the stochastic effects .however , the types of stochastic trajectories that can be observed for different parameter sets are , to some extent , set by the underlying deterministic dynamics .we thus first analyze the deterministic limit of the model .in the limit , the population evolves according to a deterministic set of equations . writing , we have the relation , and the average fitness is given by .the equations governing the dynamics of the population are then , \nonumber\\ { \overline{r}}\dot{x}_1 & = & u_1 r_0 x_0 + \bigl[(1-u_2)r_1 -{\overline{r}}\bigr]x_1 , \nonumber\\ { \overline{r}}\dot{x}_2 & = & u_2 r_1 x_1 + ( r_2-{\overline{r}})x_2 .\label{eq : determinsticequations}\end{aligned}\ ] ] these equations can be derived systematically using a system - size expansion of the master equation ( [ eq : masterequation ] ) , see e.g. .note that refers to the relative concentration of cells of type in the population ( not to be confused with the probability of being found in a homogeneous state of type- cells as studied in and ) .for example , would indicate that the population is in a state in which all three types are present with equal numbers .given the relation , the dynamics only has two independent degrees of freedom .time courses of the system can hence be thought of as a trajectory in a ` concentration simplex ' , as depicted in the satellite diagrams of fig .[ fig : fig2 ] .each point in the simplex represents one particular state of the population . 
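as a quick check of the deterministic limit just written down, the sketch below integrates these equations numerically ; the x0 equation is reconstructed by analogy with the displayed x1 and x2 equations, and the parameter values are illustrative ( chosen so that type-1 cells are the fittest ) . starting from the all-wild-type corner, the trajectory runs into the simplex and settles at a mutation-selection balance point on the 12 edge, which is the behaviour described for region i below .

```python
import numpy as np
from scipy.integrate import solve_ivp

def deterministic_rhs(t, x, r0, r1, r2, u1, u2):
    """Right-hand side of the deterministic (infinite-N) equations quoted in
    the text; the x0 equation is reconstructed by analogy with the displayed
    x1 and x2 equations."""
    x0, x1, x2 = x
    rbar = r0 * x0 + r1 * x1 + r2 * x2
    dx0 = ((1 - u1) * r0 - rbar) * x0 / rbar
    dx1 = (u1 * r0 * x0 + ((1 - u2) * r1 - rbar) * x1) / rbar
    dx2 = (u2 * r1 * x1 + (r2 - rbar) * x2) / rbar
    return [dx0, dx1, dx2]

# illustrative region-I-like parameters: type-1 fitter than type-0 and type-2
params = dict(r0=1.0, r1=1.1, r2=1.05, u1=1e-3, u2=1e-3)
sol = solve_ivp(deterministic_rhs, (0.0, 500.0), [1.0, 0.0, 0.0],
                args=tuple(params.values()), rtol=1e-8)
print(sol.y[:, -1])   # flow settles near the mutation-selection balance point
```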
at points in the interior of the simplexall three types of cells are present in the population ( for ) .points on the edges of the simplex represent states in which one of the three types is not present , for example for points along the edge connecting the lower - right corner of the simplex with the upper corner .we will refer to this as the 12 edge in the following , and similarly for the other edges .the three corners of the simplex represent the homogeneous states , i.e. ( lower left corner ) , ( lower right ) and ( upper corner ) .the deterministic equations ( [ eq : determinsticequations ] ) have a trivial fixed point ( a point at which for all ) at , corresponding to the absorbing state .the equations can have a further zero , one , or two non - trivial fixed points , depending on the values of the fitness parameters and the mutation rates .these fixed points correspond to points at which mutation and selection balance .each fixed point can either be stable ( i.e. attracting from all directions ) or a saddle ( attracting from some directions , repelling in others ) . the system is found not to contain any fully repelling fixed points ( appendix [ app : fp ] ) .[ fig : fig2 ] shows the deterministic dynamics in different parameter regimes , indicated as regions i to v. below we discuss the stochastic behavior in each of these parameter regimes .( regions i and ii , mutation - selection balance between types and ) .stable interior fixed points occur when and ( regions ii and iii , mutation - selection balance between all three types ) .no fixed points are found in regions iv and v ( beneficial type- mutation ) .the satellite diagrams show the deterministic flow , eq .( [ eq : determinsticequations ] ) .thick ( red ) lines show the deterministic flow from the all - wild - type initial condition .solid circles indicate stable fixed points , and the empty circle for region ii corresponds to a saddle point which is stable along the 12 boundary . in all cases ,the point is an absorbing state and is therefore a fixed point as well . below each simplexwe illustrate the shape of the fitness landscape which generates each type of behavior . ]_ region i ( mutation - selection balance between type- and type- cells ) : _ + in region i , the deterministic dynamics flows towards a fixed point on the 12 edge of the concentration simplex ( ) .the type-0 cells have the lowest fitness , and are deterministically lost by selection .the fixed point is a consequence of mutation - selection balance between type-1 and type-2 cells .writing , the existence condition for the equilibrium ( ) reduces to .it is a well - known result from population genetics that this condition prevents the deterministic loss of the type-1 cells .the deterministic system gets stuck at this fixed point , but a finite population will eventually reach fixation in the all-2 state . 
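the locations of these fixed points are straightforward to evaluate . in the sketch below the edge fixed point is obtained in closed form from the balance condition quoted in appendix a ( our reconstruction ; it exists when ( 1 - u2 ) r1 > r2 , i.e. a mutation-reduced fitness advantage of type-1 over type-2 cells , consistent with the condition stated there ) , and the interior mutation-selection balance point is found by root-finding on the deterministic equations . parameter values are illustrative and chosen to lie in region ii .

```python
import numpy as np
from scipy.optimize import root

def rhs_reduced(y, r0, r1, r2, u1, u2):
    """Deterministic equations reduced to the two independent variables
    (x1, x2), with x0 = 1 - x1 - x2 (reconstruction from the displayed
    equations)."""
    x1, x2 = y
    x0 = 1.0 - x1 - x2
    rbar = r0 * x0 + r1 * x1 + r2 * x2
    dx1 = (u1 * r0 * x0 + ((1 - u2) * r1 - rbar) * x1) / rbar
    dx2 = (u2 * r1 * x1 + (r2 - rbar) * x2) / rbar
    return [dx1, dx2]

def edge_fixed_point(r1, r2, u2):
    """Mutation-selection balance point on the 12 edge (x0 = 0). Solving
    u2*r1*x1 + (r2 - rbar)*(1 - x1) = 0 with rbar = r1*x1 + r2*(1 - x1)
    gives x1* = 1 - u2*r1/(r1 - r2); it lies inside the edge only if
    (1 - u2)*r1 > r2."""
    if (1 - u2) * r1 <= r2:
        return None
    return 1.0 - u2 * r1 / (r1 - r2)

# region-II-like illustration: wild type fittest, r0 > r1 > r2
p = dict(r0=1.0, r1=0.97, r2=0.94, u1=1e-3, u2=1e-3)
print("edge fixed point x1* =", edge_fixed_point(p["r1"], p["r2"], p["u2"]))
# initial guess close to the expected interior balance point
interior = root(rhs_reduced, x0=[0.02, 0.001], args=tuple(p.values()))
print("interior fixed point (x1*, x2*) =", interior.x)
```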
at large but finite population sizes ,the stochastic dynamics are expected to approximately follow the deterministic path shown in fig .[ fig : fig2 ] such that type-0 cells quickly become extinct .the lack of backwards mutations means the population can not depart from this edge and the problem reduces to one degree of freedom .the mutation - selection balance maintains the heterogeneous population state of type-1 and type-2 cells .the intrinsic noise then has to drive the system from this metastable state into the absorbing all-2 state against the direction of selection .fixation times are expected to grow exponentially with the population size ._ region ii ( mutation - selection balance between all three types , and , separately , between types and ) : _ + in region ii , the deterministic flow from the all - wild - type state is towards a stable fixed point in the interior of the simplex .this point corresponds to the mutation - selection balance point of all three species .there is a second fixed point located on the 12 edge , which corresponds to mutation - selection balance between types 1 and 2 in the absence of type-0 cells ( analogous to region i ) .as type-0 cells have the highest fitness in this regime , selection is directed away from the 12 edge . thus the fixed point on this edgeis a saddle . as before the stochastic dynamics in finite populationswill reach the all-2 state eventually .the population will closely follow the deterministic trajectory ( see fig . [fig : fig2 ] ) before reaching the metastable state about the stable interior fixed point . herethe mutation - selection balance maintains the heterogeneous state with all three species present .the population will fluctuate about this fixed point until it eventually overcomes the adverse selection and escapes .there are two possibilities for the subsequent behavior : ( i ) type-0 cells become extinct and the population settles into the metastable state on the 12 edge .intrinsic fluctuations enable the population to overcome the adverse selection along the edge and reach the absorbing all-2 state .this corresponds to sequential extinction , first of type-0 cells , then of type-1 cells .this process is equivalent to a minimal model of muller s ratchet , in which the most advantageous phenotypes are sequentially lost .a trajectory of this type is illustrated in figs .[ fig : fig3]a and [ fig : fig3]c ; ( ii ) cells of type 0 and type 1 can in principle go extinct ( almost ) simultaneously .the trajectory of the system then hits the 12 edge infinitesimally close to the all-2 corner of the simplex ( here ` infinitesimally close ' means a distance of order away from the upper corner ) .it does not become trapped in the metastable state located on the 12 edge . in numerical simulations ( data not shown )we find that this second path is realized only very rarely , and so our mathematical analysis of region ii below focuses on sequential extinction . , , , ) .the red line is the dominant path for sequential extinction , as obtained from the wkb analysis ( see text ) .the thick purple line indicates the dominant path for simultaneous extinction of types 0 and 1 , which is rare .faint lines indicate the deterministic flow [ eq .( [ eq : determinsticequations ] ) ] .the thin orange line represents the trajectory of a single stochastic simulation ( ) . 
* b * dominant trajectory , flow lines , and stochastic trajectory for a combination of parameters in region iii ( , , , ) .cells of types 0 and 1 go extinct ( essentially ) at the same time .the dominant trajectory as obtained from the wkb calculation ( thick purple line ) runs directly into the all-2 corner of the concentration simplex .* c * the concentrations of the three types of cells as a function of time .these are obtained from the same stochastic simulation as shown in the simplex * a*. a moving average has been taken to improve clarity . as seen from the data , cells go extinct sequentially : initially all three types are present , cells of type 0 go extinct at time , cells of type 1 go extinct at time . *d * the concentrations ( moving average ) of the three types of cells as a function of time for the stochastic simulation shown in the simplex * b * , showing the simultaneous extinction of types 0 and 1 . , scaledwidth=80.0% ] _ region iii ( mutation - selection balance between all three types ) : _ + in region iii the deterministic dynamics has a single stable fixed point in the interior of the concentration simplex .this point again corresponds to the mutation - selection balance point of all three species .large , but finite populations will behave as discussed in case ( ii ) for region ii .they will initially become trapped in the metastable state about the mutation - selection balance point , before intrinsic fluctuations eventually drive the system to the absorbing all-2 state . in region iii , type-0 cells and type-1 cells go extinct at essentially the same time .if the type-0 cells become extinct first , then type-1 cells quickly become extinct as selection along the 12 edge is directed towards the absorbing state ( ) .this is illustrated in figs .[ fig : fig3]b and [ fig : fig3]d ._ regions iv and v ( beneficial type- mutation ) : _ + in a subset of the parameter space , shown as regions iv and v in fig .[ fig : fig2 ] , the deterministic flow from the all - wild - type state is directly to the absorbing all-2 state . for such model parameters we expect that fixation in finite populations will be quick as type-2 cells are favored by selection ( and mutation ) .these scenarios agree with the theory of natural selection , in which the populations fitness increases over time . in regioniv this is achieved by crossing a fitness valley , and in region v it is achieved by sequentially selecting the most advantageous phenotype .[ fig : fig4 ] illustrates which parameter regimes have previously been studied in the stochastic tunneling literature .these existing studies almost exclusively focus on regions iv and v , i.e. cases in which fixation is driven not primarily by demographic noise , but by the underlying deterministic flow . as mentioned above fixation is typically fast in regions iv and v. based on similar studies in evolutionary game theory one would expect the fixation time to grow logarithmically with the population size , , and this is indeed what we find in simulations ( data not shown ) .the regions containing non - trivial fixed points are largely unexplored by previous investigations .fixation is controlled by stochastic effects so that fixation times are large and broadly distributed . 
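anticipating the next paragraph, the contrast between the two situations can be summarised schematically ; the constants below are not the paper's — they stand for subexponential prefactors — and the action accumulated along the dominant escape path is introduced later in the text .

```latex
\tau_{\mathrm{fix}} \sim
\begin{cases}
  c_1 \ln N , & \text{regions iv, v (no metastable state)} ,\\[4pt]
  c_2\, e^{N\,\Delta S} , & \text{regions i--iii (metastable state present)} .
\end{cases}
```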
as we will discuss below, fixation times grow exponentially with the population size in such cases .this is perfectly in - line with the findings of , who point out that fixation in these regions takes a very long time .efficient measurements of fixation time in simulations are hence difficult .methods which require the numerical solutions of , for example , the backward fokker - planck equation or a backward master equation reach their limits here as well . the contribution of our work is to analyze precisely these previously inaccessible cases . we compute the fixation properties of systems in which the underlying deterministic flow has one or more attracting fixed points away from the absorbing all-2 state .deleterious or slightly advantageous , and very advantageous , is the approximate region of interest of the studies , , and .these studies focused on the time to emergence of a single type-2 cell .the northwest - southeast striped region , with neutral or deleterious , and advantageous , is approximately the region of interest of and .these studies were concerned with computing fixation times of the advantageous type-2 cells and rely on the assumption that the number of type-1 cells is small . finally , the horizontal striped region approximately corresponds to the literature of crossing fitness valleys , notably , , , , and .these studies are concerned with .,scaledwidth=80.0% ]let us now analyze the dynamics in regions i , ii and iii , i.e. situations in which the deterministic dynamics has one or two non - trivial fixed points .in large but finite populations these fixed points correspond to metastable states in which the effect of mutation and selection balance .the aim of the following analysis is to calculate the rate at which the population will escape this state and enter the absorbing state in which all cells harbor two mutations . to proceed with the analysis we make the following simplifying assumptions , which are justified by the previous deterministic analysis : 1 .we assume that the population first settles into a distribution about the mutation - selection balance point .this distribution is calculated below .we assume that the population will ` leak ' into the absorbing state on a very long timescale from this distribution .with this assumption we can also say that the time taken for the population to reach the metastable state is negligible when compared to the escape time . with these assumptions we can compute ( from the master equation ( [ eq : masterequation ] ) ) the distribution about the mutation - selection balance point and the escape rate .these assumptions ( and hence the subsequent analysis ) require the selective pressure to be greater than the effect of noise , such that the metastable states are long - lived .for this reason , the approach described here is only valid for large which satisfy this condition ( the minimum value of for which our analysis is valid is dependent on the remaining model parameters , but comparisons with simulation results show it is accurate for ) .mathematically we formulate the problem as follows : the shape of the distribution about the mutation - selection balance point is given by , and is henceforth referred to as the quasi - stationary distribution ( qsd ) in line with existing literature . the mean time taken to escape from the metastable state , ,is much greater than the time taken to initially reach the metastable state , i.e. 
.provided this condition holds , we can assume that after a short time the probability to find the population in state is given by the exponential decay factor , , describes the ` leaking ' process from the metastable state into the absorbing state , .the second equation follows from normalization . to find the mean fixation time of the type-2 cells, we substitute eq .( [ eq : quasistationaryapprox ] ) into the master equation ( [ eq : masterequation ] ) to obtain the quasi - stationary master equation ( qsme ) for ( the absorbing state ) we have where we have used .hence if we find the qsd by solving the qsme ( [ eq : qsme ] ) , we can determine the mean fixation time , , and the probability to have reached fixation by time , . by separating variables in eq .( [ eq : quasistationaryapprox ] ) , we have reduced the complexity of the master equation ( [ eq : masterequation ] ) ( time does not feature in the qsme ( [ eq : qsme ] ) ) . if we now replace the discrete variables with continuous variables , we further reduce the complexity .this continuous approximation is valid as we have already stated that we require to be large .we now employ the wkb ansatz to represent the qsd as , \label{eq : wkbansatz}\ ] ] where is known as the _ action _ , and ] .this very technical analysis is tedious and beyond the scope of the present paper .we analyze the results separately for each region ( i , ii and iii ) of parameter space .in particular we discuss the implications the model parameters have for the probability with which tunneling occurs , and for the fixation time of type-2 cells . in this region ,type-1 cells have a fitness advantage over both type-0 and type-2 cells , such that the fitness landscape has an intermediate maximum ( a fitness ` hill ' ; see fig . [fig : fig2 ] ) . as a result ,type-0 cells are deterministically lost and the population relaxes to the qsd ( i.e. the mutation - selection equilibrium ) on the 12 edge , as described above , and probability slowly leaks into the absorbing state . to test the accuracy of this approach , in fig . [fig : fig5 ] we compare the qsd measured in simulations with the theoretical approximation .the data in the figure reveals good agreement between theory and model simulations for , and for . in the region just above the agreement between the theoretical result for and simulation data breaks down ; here the theoretical value from eq .( [ eq : wkbansatz ] ) diverges .this is a result of taking the limit in the qsme ( [ eq : qsme ] ) , in particular for the case which corresponds to .the value of is crucial to determining the value of . when calculating the mean fixation time , , we circumvented this known problem by considering a so - called ` boundary - layer ' approach . the boundary - layer solutions ( dashed lines in fig .[ fig : fig5 ] ; for details of the calculation see appendix [ app : sol ] ) show better agreement with simulation results close to than the qsd obtained from the wkb ansatz ( solid lines ) . )( solid lines ; filled bars for ) against the distribution of states obtained from an ensemble of simulation runs ( symbols ) . herethe metastable state is located along the 12 boundary ( region i : here the solution is obtained from the theory with and given by eq .( [ eq : ss1 ] ) , and the normalization given by eq .( [ eq : norm ] ) ) .dashed lines correspond to the boundary - layer solution of the master equation , valid for [ eq . 
( [ eq : recursive ] ) ] .note that the distributions away from have been re - scaled by factors of for and for for optical convenience .the arrow indicates the location of the fixed point .parameters are , , , , , .,scaledwidth=70.0% ] results for the mean fixation time in region i are shown in fig .[ fig : fig6]a . in fig .[ fig : fig6]b , we plot the probability that type-2 cells have reached fixation by time ( including fixation earlier than that ) .we refer to this quantity as the fixation probability .fixation times are shown to increase exponentially with .this is a consequence of the increasing height of the selection ` barrier ' which must be overcome for type-2 cells to reach fixation .also , increasing pushes the mutation - selection balance point towards the all-1 state .this results in a further increase in fixation time ( or decrease in fixation probability ) . as the metastable state approaches the all-1 state , the probability of the population reaching the all-1 state due to demographic fluctuations increases . thus increasing decreases the probability of tunneling .increasing the fitness of type-2 cells , on the other hand , pushes the metastable state closer to the absorbing state .this leads to a significant reduction in the fixation time ( increase in fixation probability ) as also shown in fig .[ fig : fig6]a ( [ fig : fig6]b ) .increasing the mutation rate has a similar effect to increasing ; the mutation - selection balance point approaches the absorbing state , and the net - effect of selection away from the absorbing state is reduced , leading to a decrease in the fixation time . in line with the previous literature ,increasing the mutation rate increases the probability of tunneling . in both panels of fig .[ fig : fig6 ] the theoretical predictions from the wkb method are in excellent agreement with simulation results .this is the case even at the moderate population size of .small deviations occur when mutation rates are low ( dashed lines and open symbols in fig .[ fig : fig6 ] ) .the theory then slightly underestimates the fixation time ( overestimates the fixation probability ) .this is a consequence of assuming that the population approaches the metastable state in a negligible amount of time .for very small mutation rates , it takes an increasing period of time for successful ( i.e. non - vanishing ) mutant lineages to appear .deviations between theory and simulation results occur when . at this pointthe theory breaks down as the fixed point on the 12 edge approaches the absorbing state .the barrier associated with adverse selection is then negligible and the assumptions underlying the wkb - approximation are no longer justified . samples ) initiated in the all - wild - type state .shape of symbol indicates fitness of type-2 cells ( see legend ) ; filled symbols are for , empty symbols are for .solid lines ( high mutation ) and dashed lines ( low mutation ) are the wkb prediction for fixation time , eq . ( [ eq : fixationtime ] ) .the approximation breaks down when , which is when the fixed point approaches the absorbing state .* b * fixation probability of type-2 cells in region i , evaluated at time .lines correspond to the wkb prediction eqs .( [ eq : quasistationaryapprox ] ) and ( [ eq : fixationtime ] ) , and colors and symbols follow the same convention as in panel * a*. remaining parameters are and . 
] to reach fixation in this region the population must accumulate successive mutations of lower fitness ( ) .the population first approaches a metastable state corresponding to the mutation - selection balance point of all three species . from herethere are two possible routes to fixation , sequential or ( almost ) simultaneous extinction of types 0 and 1 , as described previously and shown in fig .[ fig : fig3 ] . by computing the action accumulated along both routes, we have shown that the path of least action the most probable path to fixation corresponds to the path of sequential extinction .we treat this two - hit process as two separate problems : ( i ) escape from the interior metastable state to the boundary ( loss of the advantageous type-0 phenotype ) ; ( ii ) escape from the boundary metastable state to the absorbing all-2 state ( analogous to region i ) . a typical realization of this sequence of events is shown in fig .[ fig : fig3]c .as in region i , the probability of tunneling decreases as the fitness advantage of type-1 cells over type-2 cells increases .this is because the fixed point on the 12 edge approaches the state .for the same reason , the tunneling rate decreases as the mutation rates decrease . following the convention used by , we labeled the time to reach the 12 boundary as , indicating that the 3-species system turns into a 2-species system when the wild - type cell goes extinct .similarly , the time to travel from the boundary fixed point ( 2 species present in the population ) to the absorbing state ( 1 species ) is denoted by . with this notationwe also labeled the action accumulated along each segment as and .as is given by eq .( [ eq : fixationtime ] ) , we express the mean fixation time of type-2 cells as the coefficient is found by fitting to simulation data for the time taken to reach the 12 boundary as a function of the population size .small changes to the parameters now have significant effects on the fixation time , as shown in fig .[ fig : fig7 ] ( filled symbols / solid lines ) . increasing the fitness of the type-2 cells moves both the interior fixed point and the boundary fixed point towards the absorbing all-2 state .it also reduces the strength of selection away from the absorbing state .these combined effects dramatically reduce the mean fixation time , and its rate of increase with the population size . as a function of system size from simulations , averaged over realizations .lines are from the theory , see eq .( [ eq : taufitii ] ) for region ii ( solid lines ) and eq .( [ eq : taufit ] ) for region iii ( dashed lines ) .remaining parameters are and .,scaledwidth=50.0% ] in this region fixation is controlled by the escape from the interior metastable state ; it is a one - hit process .the stable interior fixed point , corresponding to the mutation - selection balance point of the three species , is located close to the all - wild - type state as type-0 cells are the most advantageous .the direct path from the metastable state to the all-2 state is the dominant ( least action ) path , as shown in fig .[ fig : fig3]b . 
as a resultthe probability of tunneling is higher than in the previous cases .it increases as the fitness of type-2 cells and the mutation rates increase as the stable interior fixed point moves to lower numbers of type-1 cells ( away from the all-1 state ) .the fixation time is computed from eq .( [ eq : taufit ] ) , where is the action accumulated along the direct path from the stable interior fixed point to the all-2 state .the coefficient is again found by fitting to simulation data .we see in fig .[ fig : fig7 ] ( empty symbols / dashed lines ) that varying the model parameters has a lesser effect on fixation times than in region ii . in regioniii , fixation is a one - hit process the population only has to escape the stable fixed point and not a two - hit process as in region ii where the effects of the two steps are compounded .contrary to the results for region i , the mean fixation time is a decreasing function of in region iii .this can be explained as follows : by increasing , the selection strength away from the 12 boundary decreases and the stable state moves to higher type-1 numbers , such that the population has an improved chance of reaching the 12 boundary . from there selectionis directed towards the absorbing state , and the time spent on the 12 boundary is negligible compared to the time to reach this edge .hence , the fixation time reduces as type-1 cells become more fit .the rate of increase of the fixation time with the population size reduces as well ( see fig .[ fig : fig7 ] ) .note that there are systematic deviations between theory and simulation results in the data set shown as open triangles in fig .[ fig : fig7 ] , and to a lesser extent also for the data shown as open diamonds .this is attributed to the fact that the fitness parameters and are very similar to each other or equal for these instances , and they are also close to the fitness of the wild - type .selection is then close neutral and the metastable state is only weakly attracting .the wkb approach then reaches its limits as the assumption of a long - lived metastable state begins to break down .in this paper we investigated the fixation of two successive mutations in a finite population of individuals proliferating according to the moran process .we discussed this in the context of the somatic evolution of a compartment of cells .the accumulation of two mutations can correspond to the inactivation of a tumor suppressor gene or alteration of genes causing chromosomal instability ( cin ) . if the cell carrying two mutations is deleterious , as can be the case with recessive cin genes , it will generally have low concentrations within the tissue. then the chance of a cancerous phenotype emerging ( further mutation ) is very low .demographic fluctuations can drive the double mutant to higher numbers , but these states are short - lived . if the double - mutant reaches fixation , the state is maintained until a further mutation occurs and hence the chance of a cancerous phenotype emerging is much greater .we first analyzed the deterministic limit of the evolutionary dynamics .we identified parameter regimes in which mutation and selection balance .these are regimes in which the double mutant is not the most advantageous in the sequence . 
in finite populations, this mutation - selection balance gives rise to long - lived metastable states .our analysis identified the escape from these metastable states as the key bottleneck to fixation of cells with two mutations .for parameter values for which there is no mutation - selection balance ( i.e. type-2 cells have the highest fitness ) , the fixation dynamics is largely governed by the deterministic flow .the rate - limiting steps are then the appearance of successful mutant lineages , and the subsequent fixation of cells with two mutations is a zero - hit process .as such the progression from healthy tissue ( all wild type ) to susceptible tissue ( all type 2 ; inactivated tsg ) will be fast relative to the cases in which a mutation - selection balance exists . if there is one stable fixed point in the deterministic dynamics , the process becomes a one - hit phenomenon limited by the escape from the corresponding metastable state . in regions with two fixed pointsone observes a two - hit process .the population becomes trapped in a first metastable state , escapes to a second metastable state , and then reaches full fixation .in addition to this qualitative classification , we calculated fixation times in parameter regimes previously inaccessible to existing analytical approaches .these are precisely the regions of parameter space in which mutation - selection balance exists .we used the wkb - method to calculate the mean escape time from the corresponding metastable states to the absorbing all - type-2 state .this escape time is identified as the fixation time of type-2 cells .we tested the analytical expressions and numerical results obtained from the wkb approach against individual - based simulations of the population dynamics .our theoretical predictions in principle rely on a limit of large but finite populations , and so they can be expected to be valid only for large enough populations .the comparison against simulations demonstrates the accuracy of our theory even at moderate population sizes of cells . for populations much smaller than thisthe assumptions of the wkb method break down .the rate - limiting step is then the occurrence of a successful lineage of mutants and not the escape from metastable states .the expressions obtained from the wkb approach become more accurate as the population size increases . this analysis allowed us to classify how changes to the fitness landscape , mutation rates , and population size affect the probability of tunneling and the time - to - fixation of cells harboring two mutations . 
in terms of the development of tumors ,our analysis shows that the path to accumulating mutations is not simply limited by the mutation rates , but also by escape from metastable states .populations can exist in a heterogeneous state for very long periods of time before fluctuations eventually drive the second mutation to fixation .the probability with which stochastic tunneling occurs is , in part , determined by the location of these metastable states .if they are located close to the all - type-1 state , then the probability of tunneling is low .this occurs when cells with one mutation have a higher fitness than those with two mutations ( regions i and ii ) .the probability of tunneling decreases as the fitness gap between these two types of cell increases or as mutation rates decrease .the mean fixation time increases exponentially as the fitness gap increases .cells with one and two mutations are present in the tissue compartment for long periods of time ; their numbers are maintained by the mutation - selection balance .cell types are lost sequentially .wild - type cells can be driven to extinction by selection ( region i ) or by demographic fluctuations ( region ii ) .the extinction of type-1 cells is driven exclusively by fluctuations .when type-2 cells have a higher fitness than type-1 cells , and when both are less fit than the wild - type ( region iii ) , selection is always against cells of type-1 .mutation - selection balance maintains a low concentration of type-1 cells in the tissue , and hence the probability of tunneling is high . in this regimethe mean fixation time is a decreasing function of the fitness of type-1 cells . as for all escape problems from metastable states ,the fixation time scales exponentially with the size of the population .fixation is noise - driven , and as the population size is increased the noise strength decreases , and hence fixation takes longer .although our theory is aimed at large population sizes and exponentially growing fixation times , we have shown that it can also make accurate predictions on biologically relevant timescales .assuming a cell generation lasts for one day , our theory can capture fixation times of around 3 years or more ( generations ) .related studies on the progression of cancer suggest a typical timescale on the order of years to accumulate a sufficient number of mutations , which is well within the scope of our theory . 
however , the times predicted by our theory are extremely sensitive to parameter variation .this limits the parameter ranges for which biologically relevant timescales can be generated .specifically selective ( dis)advantages need to be small ( ) .this is in agreement with selection coefficients in related studies .of course the length of a cellular generation can vary by an order of magnitude or so , depending on the specific cell type .our results do , however , allow an extrapolation to situations when fixation times become very long , for instance for very large populations and/or when selection is strongly against the invading mutants .in these scenarios , stochastic simulations can become too expensive computationally to provide meaningful measurements .analytical methods based on backward master equations or backward fokker - planck equations suffer from computational limitations as well in such cases .our mathematical work complements existing analytical approaches to the moran model of cells acquiring two successive mutations .previous work has provided an appropriate machinery with which to compute the time - to - fixation of the second mutation in situations without metastable states .the present paper specifically addresses cases in which fixation is limited by the escape from long - lived states .this contribution closes a gap in the analytical characterization of fixation in this model and a more complete picture is now available .we have added a new method to the toolbox used to study stochastic tunneling .our deterministic analysis provides a systematic procedure to determine which tool to use for a given set of parameters .this accomplishment removes the need for stochastic simulations altogether , or at the very least it limits the circumstances under which they are needed .the present work has clear limitations in that it focuses on the moran model with only two successive mutations .we have not considered any processes beyond the second mutation , however such cases can exist in physical systems .if the type-2 cells are not cancerous , one would be interested in , for example , calculating when a metastatic cell ( three mutations ) first arises . in general thisdoes not require the fixation of type-2 cells , and is related to the total number of type-2 cells over time . if metastable states are present , the cumulative number of type-2 cells prior to fixation is small , as described above .while we do not analyze this further , the typical number of type-2 cells at any time can in principle be computed from the quasi - stationary distribution , eq .( [ eq : quasistationaryapprox ] ) .our systematic approach , along with the combined theoretical apparatus of previous work and the wkb method are readily transferable to more complex models of cancer initiation and progression .one possible extension to this study is the generalization to more than two mutations .if a cell can accumulate possible mutations , metastable states are found provided .a similar analysis can then be carried out . if the fitness landscape is arranged such that , the problem is analogous to muller s ratchet , which describes the accumulation of successive maladaptive mutations .recently studied a special case of this problem using a wkb approach . 
finally studied valley crossing dynamics with possible mutations .they have shown that allowing multiple paths to accumulate the mutations reduces the fixation time .as such allowing multiple paths in our model could reduce the fixation times we have measured .work along both of these lines is in progress .p.a . acknowledges support from the engineering and physical sciences council ( uk ) , epsrc . f.m . acknowledges support from the dana - farber cancer institute physical sciences - oncology center ( nih u54ca143798 ) .t.g . acknowledges support from the epsrc , grant reference ep / k037145/1 .the published article is available at http://www.genetics.org[www.genetics.org ] .altland , a. , a. fischer , j. krug , and i. g. szendro , 2011 rare events in population genetics : stochastic tunneling in a two - locus model with recombination .* 106 * * * : * * 088101 .antal , t. and i. scheuring , 2006 fixation of strategies for an evolutionary game in finite populations .* 68 * * * : * * 19231944 .armitage , p. and r. doll , 1954 the age distribution of cancer and a multi - stage theory of carcinogenesis .j. cancer * 8 * * * : * * 112 .assaf , m. and b. meerson , 2010 extinction of metastable stochastic populations .rev . e * 81 * * * : * * 021116 .beerenwinkel , n. , t. antal , d. dingli , a. traulsen , k. w. kinzler , v. e. velculescu , b. vogelstein , and m. a. nowak , 2007 genetic progression and the waiting time to cancer .plos comput .* 3 * * * : * * e225 .billings , l. , l. mier - y - teran - romero , b. lindley , and i. b. schwartz , 2013 intervention - based stochastic disease eradication .plos one * 8 * * * : * * e70211 .black , a. j. and a. j. mckane , 2011 wkb calculation of an epidemic outbreak distribution .* 2011 * * * : * * p12006 .black , a. j. , a. traulsen , and t. galla , 2012 mixing times in evolutionary game dynamics .* 109 * * * : * * 028101 .bozic , i. , t. antal , h. ohtsuki , h. carter , d. kim , s. chen , r. karchin , k. w. kinzler , b. vogelstein , and m. a. nowak , 2010 accumulation of driver and passenger mutations during tumor progression .u.s.a . *107 * * * : * * 1854518550 .crow , j. f. and m. kimura , 1970 _ an introduction to population genetics theory ._ harper and row , new york .dykman , m. i. , i. b. schwartz , and a. s. landsman , 2008 disease extinction in the presence of random vaccination .* 101 * * * : * * 078101 .ewens , w. j. , 2004 _ mathematical population genetics . i. theoretical introduction _ ( 2nd ed . ) .springer - verlag , new york .fisher , j. c. , 1958 multiple - mutation theory of carcinogenesis . nature * 181 * * * : * * 651652 .fisher , r. a. , 1930 _ the genetical theory of natural selection_. clarendon press , oxford .gatenby , r. a. and t. l. vincent , 2003 an evolutionary model of carcinogenesis .cancer res . *63 * * * : ** 62126220 . gillespie , d. t. , 1977 exact stochastic simulation of coupled chemical reactions .* 81 * * * : * * 23402361 .gokhale , c. s. , y. iwasa , m. a. nowak , and a. traulsen , 2009 the pace of evolution across fitness valleys . j. theor* 259 * * * : * * 613620 .gottesman , o. and b. meerson , 2012 multiple extinction routes in stochastic population models .phys . rev . e * 85 * * * : * * 021140 .haeno , h. , r. l. levine , d. g. gilliland , and f. michor , 2009 a progenitor cell origin of myeloid malignancies .* 106 * * * : * * 1661616621 .haeno , h. , y. e. maruvka , y. iwasa , and f. 
michor , 2013 stochastic tunneling of two mutations in a population of cancer cells .plos one * 8 * * * : * * e65724 .iwasa , y. , f. michor , n. l. komarova , and m. a. nowak , 2005 population genetics of tumor suppressor genes . j. theor .* 233 * * * : * * 1523 .iwasa , y. , f. michor , and m. a. nowak , 2004 stochastic tunnels in evolutionary dynamics .genetics * 166 * * * : * * 15711579 .kamenev , a. and b. meerson , 2008 extinction of an infectious disease : a large fluctuation in a nonequilibrium system .rev . e * 77 * * * : * * 061107 .knudson , a. g. , 1971 mutation and cancer : statistical study of retinoblastoma .* 68 * * * : * * 820823 .komarova , n. l. , a. sengupta , and m. a. nowak , 2003 mutation selection networks of cancer initiation : tumor suppressor genes and chromosomal instability . j. theor* 223 * * * : * * 433450 .kunkel , t. a. and k. bebenek , 2000 dna replication fidelity .biochem .* 69 * * * : * * 497529 .lohmar , i. and b. meerson , 2011 switching between phenotypes and population extinction .rev . e * 84 * * * : * * 051901 .lynch , m. , 2010 scaling expectations for the time to establishment of complex adaptations .* 107 * * * : * * 1657716582 .ma , j. , a. ratan , b. j. raney , b. b. suh , w. miller , and d. haussler , 2008 the infinite sites model of genome evolution .* 105 * * * : * * 1425414261 .metzger , j. j. and s. eule , 2013 distribution of the fittest individuals and the rate of muller s ratchet in a model with overlapping generations .plos comput .* 9 * * * : * * e1003303 .michor , f. , y. iwasa , and m. a. nowak , 2004 dynamics of cancer progression .cancer * 4 * * * : * * 197205 .michor , f. , y. iwasa , b. vogelstein , c. lengauer , and m. a. nowak , 2005 can chromosomal instability initiate tumorigenesis ?cancer biol .* 15 * * * : * * 4349 .mobilia , m. , 2010 oscillatory dynamics in rock paper scissors games with mutations .biol . * 264 * * * : * * 110 .moolgavkar , s. h. , 1978 the multistage theory of carcinogenesis and the age distribution of cancer in man . j. natl .cancer inst .* 61 * * * : * * 4952 .moolgavkar , s. h. and e. g. luebeck , 1992 multistage carcinogenesis : population - based model for colon cancer . j. natl .cancer inst .* 84 * * * : * * 610618 .moran , p. a. p. , 1962_ the statistical processes of evolutionary theory . _ clarendon press , oxford .muller , h. j. , 1964 the relation of recombination to mutational advance .res . * 1 * * * : * * 29. nordling , c. o. , 1953 a new theory on the cancer - inducing mechanism .j. cancer * 7 * * * : * * 6872 .nowak , m. a. , f. michor , n. l. komarova , and y. iwasa , 2004 evolutionary dynamics of tumor suppressor gene inactivation .u.s.a . * 101 * * * : * * 1063510638 .nunney , l. , 1999 lineage selection and the evolution of multistage carcinogenesis .lond . b * 266 * * * : * * 493498 .proulx , s. r. , 2011 the rate of multi - step evolution in moran and wright fisher populations .* 80 * * * : * * 197207 .proulx , s. r. , 2012 multiple routes to subfunctionalization and gene duplicate specialization .genetics * 190 * * * : * * 737751 .van herwaarden , o. a. and j. grasman , 1995 stochastic epidemics : major outbreaks and the duration of the endemic period .* 33 * * * : * * 581601 .van kampen , n. g. , 2007 _ stochastic processes in physics and chemistry _( 3rd ed . ) .elsevier , amsterdam .weinberg , r. a. , 2013 _ the biology of cancer _ ( 2nd ed . ) .garland science , new york .weinreich , d. m. and l. 
chao , 2005 rapid evolutionary escape by large populations from local fitness peaks is likely in nature .evolution * 59 * * * : * * 11751182 .weissman , d. b. , m. m. desai , d. s. fisher , and m. w. feldman , 2009 the rate at which asexual populations cross fitness valleys . theor .* 75 * * * : * * 286300 .weissman , d. b. , m. w. feldman , and d. s. fisher , 2010 the rate of fitness - valley crossing in sexual populations .genetics * 186 * * * : * * 13891410 .wright , s. , 1931 evolution in mendelian populations .genetics * 16 * * * : * * 97 . seccntformat#1 the#1 :from the deterministic equations ( [ eq : determinsticequations ] ) it can be seen that the state is a fixed point , i.e. at this point for .this is the absorbing state , so this result is rather obvious .non - trivial fixed points exist away from the absorbing state in some parameter regions .the stability of a fixed point is determined by the eigenvalues of the jacobian of the deterministic equations ( [ eq : determinsticequations ] ) .due to the overall constraint , the system is effectively two - dimensional .we can write the jacobian in terms of two variables , and , as along the 12 boundary of the concentration simplex , eqs .( [ eq : determinsticequations ] ) can be expressed in terms of a single variable , .a fixed point , , on this boundary satisfies the equations ^*=0 \nonumber\\ \dot{x}_2 & = & u_2r_1x_1^*+(r_2-{\overline{r}})(1-x_1^*)=0,\end{aligned}\ ] ] where .these equalities are satisfied by the value ( along with ) .the parameter range in which this fixed point exists is determined by the condition , which we can write as .the fixed point on the 12 edge therefore exists when type-1 cells have a fitness advantage over type-2 cells , the factor accounts for effects of mutation . increasing this fitness advantage moves the fixed point towards , or equivalently away from the absorbing state at . for vanishing mutation rate ,the fixed point approaches the state .evaluating the eigenvalues of the jacobian in eq .( [ eq : jacobian ] ) at this fixed point , we find that the point is stable if , and that it is a saddle if .these two cases correspond to regions i and ii in fig .[ fig : fig2 ] . a fixed point of eqs .( [ eq : determinsticequations ] ) with is found as }{u_2r_1(r_0-r_2)+(r_0-r_1)[(1-u_1)r_0-r_2 ] } , \nonumber\\ x_2^ * & = & \frac{u_1u_2r_0r_1}{u_2r_1(r_0-r_2)+(r_0-r_1)[(1-u_1)r_0-r_2 ] } , \label{eq : fp}\end{aligned}\ ] ] provided the model parameters satisfy and . further analysis of the jacobian ( [ eq : jacobian ] ) at this point shows that the fixed point is stable whenever it exists .this is the region of parameter space in which cells with one and two mutations respectively are both less fit than the wild - type .this is the case in regions ii and iii in fig .[ fig : fig2 ] .the fixed point moves closer to when the fitness advantage of the wild type cells is increased ( e.g. by lowering the fitness of type-1 cells , ) . decreasing the mutation rates also moves the fixed point closer to .in terms of the variable , we can write the qsme ( [ eq : qsme ] ) for as , \label{eq : qsmex}\ ] ] where and . 
substituting the wkb ansatz , eq .( [ eq : wkbansatz ] ) , into eq .( [ eq : qsmex ] ) and expanding in powers of we arrive at -1 \right\}\nonumber\\ & & -\sum_{\vec{v } } e^{\vec{v}\cdot\vec{\nabla}s(\vec{x } ) } \frac{\vec{v}\cdot\vec{\nabla}}{n}w_{\vec{v}}(\vec{x } ) + \mathcal{o}(n^{-2 } ) , \label{eq : qsmeexpanded}\end{aligned}\ ] ] where we have ignored the term as this term is smaller than ( scales as ) .the leading - order terms of this equation are equivalent to the hamilton - jacobi equation , where is the so - called ` position ' variable , and is the so - called ` momentum ' variable .this equation is best solved using the method of characteristics , i.e. we look for parametric solutions , . these trajectories fulfill hamilton s equations , they satisfy the principle of least action , and correspond to the most likely path taken in the so - called phase - space , the space spanned by .the hamilton - jacobi equation has the trivial solution , which corresponds to the deterministic ` relaxation ' trajectory , for which the equation of motion is simply as we are interested in escape from a stable fixed point , we seek the non - trivial ` activation ' trajectory , for which in general .the relevant boundary condition is , where indicates the fixed point of the deterministic dynamics from which the trajectory emanates . in one - dimension ,i.e. in the case of a single fixed point on the 12 boundary ( region i of fig .[ fig : fig2 ] ) , the hamilton - jacobi equation ( [ eq : hj ] ) can be written as where is the concentration of cells of type 1 , , , and ( reaction rates as described in eq .( [ eq : transitionrates ] ) ) .this equation can be solved to obtain the activation trajectory , and hence .we can now substitute into the equation consisting of next - leading - order terms ( ) of eq .( [ eq : qsmeexpanded ] ) to find . following this procedurewe find , ~~ s_1(q)=\frac{1}{2}\ln\left[w_+(q)w_-(q)\right ] .\label{eq : ss1}\ ] ] the qsd is now determined up to a normalization factor .the qsd is peaked about the fixed point located at , see appendix [ app : fp ] .hence we can expand the qsd ( [ eq : wkbansatz ] ) about this fixed point such that -s_1(q^*)-\dots\right\ } , \label{eq : wkbsol}\ ] ] where we have used .normalizing to unity then determines the normalization coefficient , the qsd determined above breaks down when , or equivalently when , i.e. close to the absorbing state . in this region we consider a recursive solution of eq .( [ eq : qsme ] ) that does not rely on a specific form for the qsd , i.e. we do not use the wkb ansatz ( [ eq : wkbansatz ] ) .we expand eq .( [ eq : qsme ] ) about to obtain .\ ] ] this is to be solved for ( ) .using , we can write this as {n_1}-w'_+(0)f_{n_1 - 1},\ ] ] where .this recursive system can be solved to arrive at ^{n_1}\ } } { n_1[1-w'_+(0 ) ] } \simeq \frac{\pi_1 [ w'_+(0)]^{n_1 } } { n_1[w'_+(0)-1 ] } , \label{eq : recursive}\ ] ] where the second step follows from .using eq .( [ eq : timescaleansatz ] ) and expanding the relevant transition rate , , about we can write . 
by matching the recursivelyobtained boundary - layer solution of eq .( [ eq : recursive ] ) with the wkb solution in eq .( [ eq : wkbsol ] ) at , we obtain an expression for the fixation time , as shown in eq .( [ eq : fixationtime ] ) .we now address the case in which there is an internal stable fixed point of the deterministic dynamics .the problem then retains two degrees of freedom .we follow the initial steps of appendix [ app : sol ] to arrive at the hamilton - jacobi equation ( [ eq : hj ] ) .given that the original system is two - dimensional we now find four variables for the hamilton - jacobi problem , two position variables and ( equivalent to and ) , and two corresponding momenta and .these are defined by . as the ` energy 'is fixed ( ) we have three effective degrees - of - freedom and no obvious solution to .we consider again hamilton s equations ( [ eq : ham ] ) .these equations describe the trajectory that minimizes the action , and hence by solving these we can then determine the fixation time . to determine the boundary conditions we need to find the fixed points of eqs .( [ eq : ham ] ) .we first note that there are three zero - momentum fixed points , which correspond to the fixed points of the deterministic equations ( [ eq : determinsticequations ] ) . following ,we label these as for the absorbing state ( ) , for the 12 boundary fixed point , and for the stable interior fixed point defined by eq .( [ eq : fp ] ) .as we seek to determine the activation trajectory , we need to find fixed points of eqs .( [ eq : ham ] ) with non - zero momenta , but with positions corresponding to and ( the possible end points of the trajectories ) .these so - called ` fluctuational fixed points ' are labeled as and .the relevant trajectory is then found using an iterative method to solve the two - boundary problem .consider the scenario in which the stable interior fixed point is the only fixed point of the deterministic system for and , other than the absorbing state , i.e. for parameters in region iii of fig .[ fig : fig2 ] . herethe activation trajectory that leads to fixation starts at and finishes at . to start the iteration, we fix the momenta for all times to the values at , and then numerically integrate the equations of motion ( [ eq : ham ] ) for the position vector forward in time , starting at and keeping the momenta constant .this integration is carried out for a sufficient range of time to reach the vicinity of the fixed point , but not too long to avoid numerical errors building up . in the next step the relations for the momenta in eq .( [ eq : ham ] ) are then integrated backward in time using the trajectory found in the previous iteration .the momenta at the start of this backward integration are chosen as those corresponding to .this procedure is then iterated , with alternating forward and backward integration of hamilton s equation . at each step of the procedurethe action of the path is found as the iteration of alternating forward and backward integration is then repeated until has reached convergence .the action can then be used in eq .( [ eq : taufit ] ) to determine the fixation time .
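a skeleton of the alternating forward/backward integration described above is sketched below for a generic wkb hamiltonian h(x, p) = sum_v w_v(x) (exp(v.p) - 1); the reaction vectors and rate functions are placeholders to be filled with the model's transition rates, and holding the momenta at the end-point values for the first forward sweep is a choice made here, not a detail taken from the text.

```python
# skeleton of the alternating forward/backward integration described above,
# for a generic wkb hamiltonian h(x, p) = sum_v w_v(x) * (exp(v.p) - 1).
# the reaction vectors and rate functions are placeholders; in the model of
# the text they would be the moran transition rates.
import numpy as np

def make_hamiltonian(reactions):
    """reactions: list of (stoichiometric vector v, rate function w_v(x))."""
    def H(x, p):
        return sum(w(x) * (np.exp(v @ p) - 1.0) for v, w in reactions)
    return H

def num_grad(f, z, eps=1e-6):
    """central finite-difference gradient of a scalar function f at z."""
    g = np.zeros_like(z)
    for i in range(len(z)):
        d = np.zeros_like(z); d[i] = eps
        g[i] = (f(z + d) - f(z - d)) / (2.0 * eps)
    return g

def activation_action(H, x_start, p_end, T=40.0, nt=4000, n_iter=30, tol=1e-8):
    """alternate forward sweeps in the positions and backward sweeps in the
    momenta, and return the converged action s = int p . dx."""
    dt = T / nt
    x = np.tile(np.asarray(x_start, float), (nt + 1, 1))
    p = np.tile(np.asarray(p_end, float), (nt + 1, 1))    # first guess: momenta held constant
    S_old = np.inf
    for _ in range(n_iter):
        for k in range(nt):                               # forward sweep: dx/dt = dH/dp
            x[k + 1] = x[k] + dt * num_grad(lambda pp: H(x[k], pp), p[k])
        p[-1] = np.asarray(p_end, float)                  # backward sweep: dp/dt = -dH/dx
        for k in range(nt, 0, -1):
            p[k - 1] = p[k] + dt * num_grad(lambda xx: H(xx, p[k]), x[k])
        S = float(np.sum(p[:-1] * np.diff(x, axis=0)))    # discrete action  int p . dx
        if abs(S - S_old) < tol:
            break
        S_old = S
    return S
```

as noted above, the forward integration window has to be long enough to approach the target fixed point but short enough that numerical errors do not build up; in practice the plain euler steps used in this skeleton would be replaced by an adaptive integrator.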
tumors initiate when a population of proliferating cells accumulates a certain number and type of genetic and/or epigenetic alterations. the population dynamics of such sequential acquisition of (epi)genetic alterations has been the topic of much investigation. the phenomenon of stochastic tunneling, where an intermediate mutant in a sequence does not reach fixation in a population before generating a double mutant, has been studied using a variety of computational and mathematical methods. however, the field still lacks a comprehensive analytical description, since theoretical predictions of fixation times are only available for cases in which the second mutant is advantageous. here, we study stochastic tunneling in a moran model. analyzing the deterministic dynamics of large populations, we systematically identify the parameter regimes captured by existing approaches. our analysis also reveals fitness landscapes and mutation rates for which finite populations are found in long-lived metastable states. these are landscapes in which the final mutant is not the most advantageous in the sequence, and the resulting metastable states are a consequence of a mutation-selection balance. the escape from these states is driven by intrinsic noise, and their location affects the probability of tunneling. in these regimes existing methods no longer apply: it is the escape from the metastable states that is the key bottleneck, and fixation is no longer limited by the emergence of a successful mutant lineage. we use the so-called wentzel-kramers-brillouin (wkb) method to compute fixation times in these parameter regimes, and successfully validate the results against stochastic simulations. our work fills a gap left by previous approaches and provides a more comprehensive description of the acquisition of multiple mutations in populations of somatic cells. *keywords:* stochastic modeling, population genetics, cancer, moran process, wkb method
self - affine distributions are ubiquitous in many phenomena in nature , such as in growing surfaces and interfaces , fractured media , and graphs of two - dimensional turbulent flows .self - affine distributions have also been used as a tool to study scaling properties of two - dimensional statistical models by mapping these models to a two - dimensional coulomb gas .moreover , crystal growth , the growth of bacterial colonies , and the formation of clouds in the upper atmosphere are all examples of non - equilibrium phenomena which grow self - affine rough surfaces .the above applications on a fundamental level make the surface - growth problem as a paradigm for a broad class of problems in the context of non - equilibrium statistical mechanics .self - affine surfaces can be described by their height distribution function . from statistical point of view , it is necessary to explore topography of this kind of surfaces . in such surfaces , heights are invariant under re - scaling , namely , where is called the roughness exponent or the _ hurst _ exponent .it implies that in a self - affine surface , the variance of the surface height , i.e. , ^ 2\rangle} ] . for the general case we should replace ordinary derivative with the fractional one , that is , ] .it is also possible to find the scaling exponents of conformal curves by the above field theory . and ; by zooming in on the picture one can see many small loops .] since the height ensemble of a rough surface is not conformally invariant , rigorous investigating of their contour lines is more difficult than the coulomb gas case .indeed , one can not employ the powerful tools of conformal field theory ( cft ) to study this system . for a rough surface with a generic is no rigorous proof for results obtained by kh .nonetheless , it seems that the contour line ensemble shows scaling properties similar to the conformal curves encountered in some models such as the contour lines of tungsten oxide ( ) and kpz surfaces . in this paper , by using techniques which are common in the realm of coulomb gas field theory , we introduce new scaling laws for some properties of contour lines of self - affine rough surfaces .the scaling properties of the cumulative distribution of the number of contours versus the area of the contours and the size of the system are also obtained .in addition , we find a close relation between the cumulants of , the area of contour lines , and the eigenvalues of the fractional laplacian . finally , we introduce the scaling property of ranked contour lines versus both rank and system size ( zipf s law ) .numerical simulations are also provided to substantiate our analysis .to generate self - affine rough surfaces in our numerical simulations , we have used the successive random addition method . in our simulationswe have generated surfaces of size with . to investigate the effect of roughness exponent on the scaling relations we used several values of . in each case , all calculations have been averaged over 200 realizations . to generate the loops in the contour lines we used a contouring algorithm that treats the input matrix as a regularly spaced grid .the algorithm scans this matrix and compares the values of each block of four neighboring elements ( i.e. 
, a plaquette ) in the matrix to the contour level values .if a contour level falls within a cell , the algorithm performs a linear interpolation to locate the point at which the contour crosses the edges of the cell .the algorithm connects these points to produce a segment of a contour line . after generating the contours of a given surface , in order to eliminate the effect of the edges of the lattice we have excluded the contours crossing the edges of the lattice . for rough surfaces with size and different roughness exponents .] to show the goodness of the fits and consistency of our simulations with theory , we used the following three different methods for estimating the exponents : ( _ a _ ) we numerically calculated local slops of the curves by fourth - order numerical differentiation for non - uniform data points ; e.g. , in the case of eq .( [ cumulative ] ) , derivation of relative to .( _ b _ ) we present some of the curves by dividing both sides of a scaling relation to the claimed power law to show how seriously they are aligned or how they deviate from a horizontal line , e.g. , fig . 2 . and , ( _ c _ ) we used bayesian analysis without prior distribution , namely likelihood analysis to calculate the accuracy of the exponent generated from our numerical results .a key difference between the contour lines in coulomb gas field theory and the self - affine rough surfaces is in the fractal dimension of the set of all contour lines . for a given self - affine rough surface ,this fractal dimension is .it is well - known that many of the scaling relations in coulomb gas field theory remain unchanged just by substituting this as the dimension of our set . to give an example ,let us define the fractal dimension of a contour line as , where is the the perimeter of the contour and is the radius of gyration .moreover , the probability of finding a contour loop with length is .one can show that there is a hyperscaling relation between the scaling exponents and as follows : which is exactly the same as the hyperscaling relation for domain walls in statistical models .following kh , the cumulative distribution of the number of contours with area greater than has the following form : this gives the right answer for coulomb gas loops with zero roughness exponent .in the rest of the paper using new conjectures we will demonstrate some other evidences to support the above relation .this in turn leads us to several new scaling relations .we checked eq .( [ cumulative ] ) by using numerical simulations for different s , see fig .as is shown , we plot versus to show how seriously they follow eq .( [ cumulative ] ) .the straight horizontal curves exhibit that the proposed scaling relation is preserved up to orders of magnitude of .as is seen , in the case of we have a small deviation from the proposed exponent at large values of , which is related to finite - size effects . for a given lattice size and for small values of , there are not so many large contour lines , but we have many small ones .this is led by the nature of self - affinity at small _ hurst _ exponents .there are no deviations when we increase ( fig . 2 ) .in table [ tab1 ] , we report the best fit values calculated by the likelihood analysis at and confidence levels ..[tab1 ] the best fit values of exponent derived by using the likelihood method . denotes standard deviation of each calculated exponent . 
[cols="^,^,^,^",options="header " , ] for loops corresponding to surfaces with , using coulomb gas techniques , cardy and ziff showed that has the universal form as a function of the system size for different critical statistical physics models . to calculate ,cardy and ziff evaluated the total area inside all loops using two different methods , and then they found the universal form of .inspired by this method , we argue to give some new scaling relations for contour lines of self - affine rough surfaces . using eq .( [ cumulative ] ) it is straightforward to show that , for , and for it has a logarithmic form . let us consider a typical point with height above the horizon ( we cut our self - affine surface by a plane ) .if we draw a circle of radius around the point , since we are dealing with a rough surface , all points inside the circle will be above the horizon . in other words , inside the loop is a compact region with the fractal dimension 2 . since the fractal dimension of the clusters is 2, one could obtain the total area of the clusters proportional to the area of the system .this is just a lower bound for the interior areas of the loops see .in addition , it is also possible to see from simulation that by cutting the surface from the average height one could get always clusters of the order of the system size .thus one can get the following scaling relation for with respect to the system size : this indicates that the number of contour lines with area greater than per total length of all contours , i.e. , , is independent of the system size .it is also worth noting that the cumulative distribution of the contours with area is independent of , the ultraviolet cutoff , which is another length scale .our simulations confirm the validity of the scaling relation ( [ c ] ) for different values of , see fig . .the above result is also useful to get another nontrivial equation for contour lines . to calculate the total area we can use the formula in which is the minimum number of loops which must be crossed to connect to the edge of the lattice . since the total area inside the loops is proportional to the area of the system , we conclude this is reminiscent of the height correlation function in the self - affine rough surfaces . for relation is logarithmic and was proved explicitly in .as shown in the graph .slops of the curves from top to bottom are given by , , , , and . ]these results show that one may investigate contour loops of rough surfaces by defining currents for the loops . again , by analogy with the coulomb gas methods one can define as the current density of loops .this is a natural candidate if we imagine that the height function is extended to the two - dimensional manifold in such a way that it is constant within each plaquette .normalization with respect to width is necessary because we have a rough surface where width is changing by size .this definition for the current density means that we can map our height model to the contour lines or vice versa .since iso - height lines have the same role as the domain walls between the positive and negative heights , the directional derivative of along a contour must be zero and it must vary along a line normal to the contour . using the above function to parameterize the geometry of contour line , it is possible to write this equation is independent of our definition of currents . by using simple dimensional analysis , it is not difficult to find our special normalization , i.e. 
, .one can check that eq .( [ current ] ) gives .using eq .( [ current ] ) , inspired by cardy s argument , we find the generating function of the cumulants of area of contour loops .the argument for getting cumulants of area is as follows . for the simplicity, we use the dirichlet boundary condition , , on the boundary of the system , which means that loops do not cross the boundary .after integration by parts , eq .( [ current ] ) gives . in simulation and experimentthere are many curves emerging from the boundary and going back to another point in the boundary ; therefore , there will be no exact dirichlet boundary condition . of loop areas of rough surfaces versus system size ; for , and .here we have the exponents , , and .bottom : versus for surfaces with size with different roughness exponents as indicated on the graph.,title="fig : " ] of loop areas of rough surfaces versus system size ; for , and . here we have the exponents , , and .bottom : versus for surfaces with size with different roughness exponents as indicated on the graph.,title="fig : " ] however , as we will see in the simulations , many of our scaling relations , especially the distribution of contours , are independent of the boundary conditions . by using the real space representation of the height distribution and the gaussian integral , one can derive where is the stiffness and is an auxiliary field .one can write the above equation as an infinite sum by using fourier transform where are the eigenvalues of the fractional laplacian with dirichlet boundary conditions . expanding eq .( [ generating function11 ] ) gives the higher cumulants of , the sum is convergent for all values of and except , which is logarithmic with respect to . to check the above equation we calculated for surfaces with different roughness exponents and different sizes . for all of the surfaces have with . for higher moments one can write with . for surfaces with roughness exponent between and , the exponent varies from to .one can see in fig .4b that all of the s are linear with respect to .the deviation from could be related to our restriction in getting larger sizes in simulation .another interesting scaling relation is the universality of the distribution of the ranked loop perimeters , which is named zipf s law . following , the average perimeter of the largest clustercan be found by eq .( [ cumulative ] ) , which is called by mandelbrot the zipf distribution where is the fractal dimension of all loops and is the fractal dimension of one of loops .we should emphasize that we have normalized the equation with the appropriate power of total number of contour loops , so we ignore here the scaling of the total number of loops .we have numerically checked this scaling relation for self - affine surfaces , both with respect to rank and the system size . as shown in fig . , in three subfigures ( for ) for eight different realizations , we presented the log - log plot of versus . here, shows a scaling relation according to eq .( [ rank perimeter ] ) . for the case of ,the scaling relation is preserved for over 2 orders of magnitude of .since the number of small loops is few , in larger values of , we could see the agreement just for 1 order of magnitude . 
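the ranking analysis just described can be reproduced in a few lines: the sketch below generates a self-affine surface by fourier filtering (a different generator from the successive-random-addition method used in the text), keeps only the closed iso-height loops at the mean height (open, edge-crossing lines are discarded, as above), and fits the zipf exponent from the largest ranked perimeters. the spectral amplitude k^-(1+h) and the choice of fitting range are assumptions of the sketch.

```python
# sketch: generate a self-affine surface with hurst exponent H by fourier
# filtering (a different generator from the successive-random-addition method
# used in the text), keep the closed iso-height loops at the mean height, and
# estimate the zipf exponent from the ranked loop perimeters.
import numpy as np
from skimage import measure

def self_affine_surface(L, H, seed=0):
    """spectral synthesis: filter white noise with amplitude ~ k^-(1+H)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(L)[:, None]
    ky = np.fft.fftfreq(L)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                   # avoid the k = 0 singularity
    amp = k ** (-(1.0 + H))
    h = np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((L, L)))).real
    return (h - h.mean()) / h.std()

def closed_loop_perimeters(h, level=0.0):
    """perimeters of closed contours at `level`; open edge-crossing lines are discarded."""
    perims = []
    for c in measure.find_contours(h, level):
        if np.allclose(c[0], c[-1]):                # closed loop
            d = np.diff(c, axis=0)
            perims.append(np.hypot(d[:, 0], d[:, 1]).sum())
    return np.array(perims)

perims = closed_loop_perimeters(self_affine_surface(L=512, H=0.3))
ranked = np.sort(perims)[::-1]                      # zipf plot: perimeter vs rank
r = np.arange(1, ranked.size + 1)
n_fit = max(ranked.size // 10, 10)                  # fit only the largest loops
zeta = -np.polyfit(np.log(r[:n_fit]), np.log(ranked[:n_fit]), 1)[0]
print(f"{ranked.size} closed loops, fitted zipf exponent ~ {zeta:.2f}")
```

the fitted slope can then be compared with the theoretical relation between the ranking exponent and the fractal dimensions quoted in the text.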
[figure caption (four panels): ..., right ( ), and bottom left ( ): for system size of , the curves of ranked loop perimeters divided by vs rank numbers are shown for eight different realizations. bottom right: the squares stand for the averaged (the numerically estimated exponent) over realizations; the solid line shows the theoretical relation between and , i.e., .] to calculate the exponent in our numerical results let us consider . we calculated the exponent ; fig. (bottom right) depicts the variation of versus (the average is over different realizations). since for higher we have a lower number of loops, the estimates at higher values of have lower accuracy. we also numerically checked the relation of versus . in the case of , we obtained a scaling relation with exponent , which is near the theoretical value. at higher values of , the estimated exponents are not sufficiently accurate because the number of loops in smaller system sizes is low. with the same method one can find the average area and the radius of gyration as a function of rank ; both of the above formulas are in good agreement with our numerical results. in these kinds of scaling relations the error of the estimated exponents for large system sizes is considerably small. we believe eqs. ([rank perimeter]) and ([rank]) provide a good method to calculate the fractal dimension of a single contour as well as the fractal dimension of all contours. in summary, by using the field theory of rough surfaces and considering a current for the model, we confirmed the previously known scaling relation for the cumulative distribution of area. in addition, we found a new scaling relation for this distribution with respect to system size. since the action is not translationally invariant and the small momenta are important, the scaling properties naturally depend on the system size. it seems that large momenta do not contribute to the scaling properties. although the system is not invariant under homogeneous translation, it is not difficult to see that it is invariant under , which means it is inhomogeneously translationally invariant. using inhomogeneous translation one can define the currents corresponding to wilson loops of the theory and re-derive the results of sec. . since we only investigated the scaling properties of the contour lines, these two different currents lead to the same scaling relations.
considering these currents for contour lineswe think that there may be a close relation between the statistics of these lines and the eigenvalues of fractional laplacian . in this paper , we discussed leading scaling behavior with respect to the system size , however , to see the effect of the eigenvalues of the fractional laplacian one needs more careful study of the amplitudes as well . since there is no conformal invariance in the height ensemble ,finding the exact values of and using the techniques of the coulomb gas is not tractable .we confirmed our proposed scaling relations by simulations through cutting a self - affine surface at different heights .we have only interpreted the results for the case of cutting the surface at its mean height .but we checked also all of the scaling relations for the cases of cutting the surface at heights , where is the height variance of the surface .we have not seen any meaningful deviation from what we obtained for the mean height .we also introduced new zipf - like scaling relations for the contour lines of self - affine rough surfaces , and verified them via simulations .we believe the same scaling relations are applied to the clusters of rough surfaces but with different exponents . the work of s.m.v.a . was supported in part by the research council of the university of tehran .we thank a. rezakhani tayefeh , m. habibi , h. mohseni sadjadi , a. naji , m. nouri - zonoz , and m. yaghoubi for useful discussions .we are grateful of n. abedpour , m. f. miri , and m. sadegh movahed for useful comments .we are also indebted anonymous referees for their enlightening comments .r. f. voss , in _ fundamental algorithms for computer graphics _ , edited by r. a. earnshaw ,_ nato advanced study institute , series e : applied science _( springer - verlag , heidelberg , 1985 ) , vol .17 , p. 805 ; there are many methods to generate self - affine surfaces but the accuracy of the above method was good enough for the range we worked .one can find another method for broader range of roughness exponents and other purposes in h. a. makse , s. havlin , m. schwartz , and h. e. stanley , phys .e * 53 * , 5445 ( 1996 ) ; and h. hamzehpour and m. sahimi , phys .e * 73 * , 056121 ( 2006 ) .r. colistete , j. c. fabris , s. v. b. goncalves , and p. e. de souza , int . j. modd * 13 * , 669 ( 2004 ) .j. cardy , _ les houches summer school 1994 _ ( north holland , holland , 1996 ) ; eprint cond - mat/9409094 .
equilibrium and non-equilibrium growth phenomena, e.g., surface growth, generically yield self-affine distributions. analysis of the statistical properties of these distributions appears essential in understanding the statistical mechanics of the underlying phenomena. here, we analyze scaling properties of the cumulative distribution of iso-height loops (i.e., contour lines) of rough self-affine surfaces in terms of loop area and system size. inspired by the coulomb gas methods, we find the generating function of the area of the loops. interestingly, we find that, after sorting loops with respect to their perimeters, zipf-like scaling relations hold for the ranked loops. numerical simulations are also provided in order to demonstrate the proposed scaling relations. _keywords_: rough surface, contour line, self-affine, zipf's law. pacs number(s): 47.27.eb,
dense packings of hard particles have served as useful models to understand the structure of low - temperature states of matter , such as liquids , glasses , and crystals , granular media , heterogeneous materials , and biological systems ( e.g. , tissue structure , cell membranes , and phyllotaxis ) .much of what we know about dense packings concerns particles of spherical shape , and hence it is useful to summarize key aspects of the _ geometric - structure _ classification of sphere packings in order to place our results for dense packings of nonspherical particles in their proper context .the geometric - structure classification naturally emphasizes that there is a great diversity in the types of attainable jammed sphere packings with varying magnitudes of overall order ( characterized by scalar order metric that lies in the interval ] [ fig_ordermap ] in recent years , scientific attention has broadened from the study of dense packings of spheres ( the simplest shape that does not tile euclidean space ) to dense packings of disordered and ordered nonspherical particles .the focus of the present paper is on the densest packings of _ congruent _ nonspherical particles , both convex and concave shapes .we will show that both the symmetry and local principal curvatures of the particle surface play a crucial role in how the rotational degrees of freedom couple with the translational degrees of freedom to determine the maximal - density configurations .thus , different particle shapes will possess maximal - density configurations with generally different packing characteristics .we have recently devised organizing principles ( in the form of conjectures ) to obtain maximally dense packings of a certain class of convex nonspherical hard particles .interestingly , our conjecture for certain centrally symmetric polyhedra have been confirmed experimentally very recently .here we generalize these organizing principles to other convex particles as well as concave shapes .the principles for concave shapes were implicitly given in ref . and were explicitly stated ( without elaboration ) elsewhere .tunability capability via particle shape provides a means to design novel crystal , liquid and glassy states of matter that are richer than can be achieved with spheres ( see fig .2 ) . while it is seen that hard spheres exhibit entropically driven disorder - order transitions , metastable disordered states , glassy jammed states , and ordered jammed states with maximal density , the corresponding phase diagram for hard nonspherical particles will generally be considerably richer in complexity due to the rotational degrees of freedom and smoothness of the particle surface ( see sec .iv for further details ) .[ htbp ] {fig2.eps } \end{array} ] a _ lattice _ in is a subgroup consisting of the integer linear combinations of vectors that constitute a basis for . in the physical sciences and engineering ,this is referred to as a _bravais _ lattice .unless otherwise stated , the term `` lattice '' will refer here to a bravais lattice only .a _ lattice packing _ is one in which the centroids of the nonoverlapping identical particles are located at the points of , and all particles have a common orientation .the set of lattice packings is a subset of all possible packings in . 
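as a small numerical illustration of these definitions (anticipating the density bookkeeping made explicit in the next paragraph), the density of a lattice packing with one particle per fundamental cell is simply the particle volume divided by the absolute determinant of the lattice basis; the fcc sphere packing, with its classical value pi/sqrt(18) ~ 0.7405, serves as a check.

```python
# numerical illustration: for a (bravais) lattice packing with one particle
# per fundamental cell, the packing fraction is the particle volume divided by
# the cell volume |det B|.  the fcc sphere packing is used as a check; its
# density pi/sqrt(18) ~ 0.7405 is the classical kepler value.
import numpy as np

def lattice_packing_density(basis, particle_volume):
    """basis: rows are the lattice vectors spanning the fundamental cell."""
    return particle_volume / abs(np.linalg.det(basis))

a = 1.0                                              # conventional cubic lattice constant
fcc_basis = 0.5 * a * np.array([[1.0, 1.0, 0.0],
                                [0.0, 1.0, 1.0],
                                [1.0, 0.0, 1.0]])    # primitive fcc vectors
r = a * np.sqrt(2.0) / 4.0                           # spheres touch along the face diagonals
v_sphere = 4.0 / 3.0 * np.pi * r**3

phi = lattice_packing_density(fcc_basis, v_sphere)
print(f"fcc sphere packing density = {phi:.6f} (pi/sqrt(18) = {np.pi / np.sqrt(18.0):.6f})")
```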
in a lattice packing, the space can be geometrically divided into identical regions called _ fundamental cells _ , each of which contains the centroid of just one particle .thus , the density of a lattice packing is given by where is the volume of a single -dimensional particle and is the -dimensional volume of the fundamental cell .for example , the volume of a -dimensional spherical particle of radius is given explicitly by where is the euler gamma function .figure [ lat - period](a ) depicts lattice packings of congruent spheres and congruent nonspherical particles . a more general notion than a lattice packing is a periodic packing .periodic _ packing of congruent particles is obtained by placing a fixed configuration of particles ( where ) with _ arbitrary nonoverlapping orientations _ in one fundamental cell of a lattice , which is then periodically replicated without overlaps .thus , the packing is still periodic under translations by , but the particles can occur anywhere in the chosen fundamental cell subject to the overall nonoverlap condition .the packing density of a periodic packing is given by where is the number density , i.e. , the number of particles per unit volume .figure [ lat - period](b ) depicts a periodic non - lattice packing of congruent spheres and congruent nonspherical particles .note that the particle orientations within a fundamental cell in the latter case are generally not identical to one another .{fig4a.eps } & \includegraphics[width=3.5cm , keepaspectratio]{fig4b.eps } & \includegraphics[width=3.5cm , keepaspectratio]{fig4c.eps } \\\mbox{(a ) } & \mbox{(b ) } & \mbox{(c ) } \\ \end{array}$ ] we will see subsequently that certain characteristics of particle shape , e.g. , whether it possesses central symmetry and equivalent principal axes , play a fundamental role in determining its dense packing configurations .a -dimensional particle is _ centrally symmetric _ if it has a center that bisects every chord through connecting any two boundary points of the particle , i.e. , the center is a point of inversion symmetry .examples of centrally symmetric particles in include spheres , ellipsoids and superballs ( see ref . for definitions of these shapes ) .a triangle and tetrahedron are examples of non - centrally symmetric two- and three - dimensional particles , respectively .figure [ central ] depicts examples of centrally and non - centrally symmetric two - dimensional particles .a -dimensional centrally symmetric particle for is said to possess equivalent principal ( orthogonal ) axes ( directions ) associated with the moment of inertia tensor if those directions are two - fold rotational symmetry axes such that the chords along those directions and connecting the respective pair of particle - boundary points are equal .( for , the two - fold ( out - of - plane ) rotation along an orthogonal axis brings the shape to itself , implying the rotation axis is a `` mirror image '' axis . ) whereas a -dimensional superball has equivalent directions , a -dimensional ellipsoid generally does not ( see fig .[ central ] ) .we have formulated several organizing principles in the form of three conjectures for convex polyhedra as well as other nonspherical convex shapes ; see fig . 
5 for examples of dense packings of certain convex nonspherical hard particles .in this section , we generalize them in order to guide one to ascertain the densest packings of other convex nonspherical particles as well as _ concave _shapes based on the characteristics of the particle shape ( e.g. , symmetry , principal axes and local principal curvature ) .the generalized organizing principles are explicitly stated as four distinct propositions .we apply and test all of these organizing principles to the most comprehensive set of both convex and concave particle shapes examined to date , including catalan solids , prisms , antiprisms , cylinders , dimers of spheres and various concave polyhedra .we demonstrate that all of the densest known packings associated with this wide spectrum of nonspherical particles are consistent with our propositions . in sec .iv , we will apply our propositions to construct analytically the densest known packings of other nonspherical particles , including spherocylinders and `` lens - shaped '' particles that are centrally symmetric , as well as square and rhombic pyramids that lack central symmetry .we also apply the organizing principles to infer the high - density equilibrium crystalline phases of hard convex and concave particles in sec .iv . _remarks _ : the centrally symmetric platonic and archimedean solids are polyhedra with three equivalent principal axes . for packings of polyhedra ,face - to - face contacts are favored over other types of contacts ( e.g. , face - to - edge , face - to - vertex , edge - to - edge etc . ) , since the former allow the particle centroids to get closer to one another and thus , to achieve higher packing densities . for centrally symmetricpolyhedra , aligning the particles ( i.e. , all in the same orientation ) enable a larger number of face - to - face contacts .for example , a particle with faces , possesses families of axes that go through the centroid of the particle and intersect the centrally - symmetric face pairs such that the particles ( in the same orientation ) with their centroids arranged on these axes can form face - to - face contacts .the requirement that the particles have the same orientation is globally consistent with a ( bravais ) lattice packing .indeed , in the optimal lattice packings of the centrally symmetric platonic and archimedean solids , each particle has the maximum number of face - to - face contacts that could possibly be obtained without violating the nonoverlapping conditions .it is highly unlikely that such particles possessing three equivalent principal axes and aligned in the same direction could form a more complicated non - lattice periodic packings with densities that are larger than the optimal lattice packings ..characteristics of dense packings of platonic and archimedean solids , including whether the particle possesses central symmetry , whether the packing is a bravais lattice packing , the number of basis particles in the fundamental cell if the packing is a non - bravais - lattice periodic packing , the numerically obtained packing density , the analytically obtained density , and the upper bound on the density .[ cols="^,^,^,^,^,^",options="header " , ] in addition , recent studies of mrj packings of polyhedra suggest that the iso - counting conjecture proposed for sphere packings ( i.e. 
, each particle in the packing possesses 2 constraints , where is the number of degrees of freedom ) is also true for polyhedra ( see table iv ) .we note that for polyhedra , the number of constraints at each contact can be exactly given . on the other hand , for smoothly - shaped particles , it was found that the mrj packings are generally hypostatic , with each particle possessing a smaller number of contacts than 2 ; see table iv .moreover , it was shown that the surface curvature at contact points play an important role in blocking the rotation of the particles and thus , in jamming the packing .however , it is difficult to quantify the effective number of constraints provided by each contact with different local principal curvatures .a successful quantification of this effective counting of constraints could lead to the conclusion that the iso - counting conjecture also holds for smoothly - shaped nonspherical particles .finally , we note that tunability capability via particle shape provides a means to design novel crystal , liquid and glassy states of matter that are much richer than can be achieved with spheres .for example , by introducing particle asphericity , a variety of optimal crystalline packings , diverse disordered jammed packings as well as a wide spectrum of equilibrium phases have been obtained from computer simulations . on the experimental side ,it has been recently shown that silver polyhedral nanoparticles ( with central symmetry ) can self assemble into our conjectured densest lattice packings of such shapes .the resulting large - scale crystalline packings may facilitate the design and fabrication of novel three - dimensional materials for sensing , nanophotonics and photocatalysis .our general organizing principles can clearly provide valuable guidance for novel materials design , which we will systematically investigate in future work .it will also be very useful to determine order maps for packings of nonspherical particles that are the analogs of fig . 1 for sphere packings .we are very grateful to marjolein dijkstra for giving us permission to adapt the figures of the octapod and tetrapod that appeared in ref .this work was supported by the mrsec program of the national science foundation under award number dmr-0820341 .this two - parameter description is but a very small subset of the relevant parameters that are necessary to fully characterize a configuration , but it nonetheless enables one to draw important conclusions . for example , another axes that could be included is the mean contact number per particle , .a. bezdek and w. kuperberg , applied geometry and discrete mathematics : dimacs series in discrete mathematics and theoretical computer science 4 , edited by p. gritzmann and b. sturmfels ( american mathematics society , providence , ri , 1991 ) , pp . 71 - 80 in future work , we will slowly compress dense liquid states of truncated tetrahedra to examine whether the resulting crystalline phases are the same as the ones that we identified via decompression of the putative densest packing of this archimedean solid .precise free energy computations will provide of course the best way to ascertain the phase behavior of truncated tetrahedra over the allowable range of densities .
we have recently devised organizing principles to obtain maximally dense packings of the platonic and archimedean solids, and certain smoothly-shaped convex nonspherical particles [torquato and jiao, phys. rev. e *81*, 041310 (2010)]. here we generalize them in order to guide one to ascertain the densest packings of other convex nonspherical particles as well as _concave_ shapes. our generalized organizing principles are explicitly stated as four distinct propositions. these organizing principles are applied to and tested against the most comprehensive set of both convex and concave particle shapes examined to date, including catalan solids, prisms, antiprisms, cylinders, dimers of spheres and various concave polyhedra. we demonstrate that all of the densest known packings associated with this wide spectrum of nonspherical particles are consistent with our propositions. among other applications, our general organizing principles enable us to construct analytically the densest known packings of certain convex nonspherical particles, including spherocylinders, ``lens-shaped'' particles, square pyramids and rhombic pyramids. moreover, we show how to apply these principles to infer the high-density equilibrium crystalline phases of hard convex and concave particles. we also discuss the unique packing attributes of maximally random jammed packings of nonspherical particles.
over the years , the study and characterization of complex systems have become a major research topic in many areas of science .part of this massive interest is due to a common requirement in the modeling and analysis of several natural phenomena existing in the world around us : to understand how relationships between pieces of information give rise to collective behaviors among different scale levels of a system .reasons for the appearance of this complexity are countless and are not completely known .often , in complex systems , the interaction between the components is highly non - linear and/or non - deterministic , which brings several challenges that prevent us from getting a better understanding of the underlying processes that govern the global behavior of such structures . with the growing volume of data that is being produced in the world these days ,the notion of information is more present and relevant in any scale of modern society . in this scenario , where data plays a central role in science , an essential step in order to learn , understand and assess the rules governing complex phenomena that are part of our world is not only the mining of relevant symbols along this vast ocean of data , but especially the identification and further classification of these patterns .after the pieces of information are put together and the relationship between them is somehow uncovered , a clearer picture start to emerge , as in the solution of an intricate puzzle . in this paradigm ,computational tools for data analysis and simulations are a fundamental component of this data - driven knowledge discovery process . in this context ,random fields are particularly interesting mathematical structures .first , it is possible to replace the usual statistical independence assumption by a more realistic conditional independence hypothesis .in other words , unlike most classical stochastic models , we can incorporate the dependence between random variables in a formal and elegant way .this is a key aspect when one needs to study how local interactions can lead to the emergence of global effects .second , if we constrain the size of the maximum clique to be two , that is , we assume only binary relationships , then we have a pairwise interaction markov model , which is mathematically tractable . finally , considering that the coupling parameter is invariant and isotropic , all the information regarding the spatial dependence structure of the random field is conveyed by a single scalar parameter , from now on denoted by . in the physics literature ,this parameter is referred as the inverse temperature of the system , and plays an important role in statistical mechanics and thermodynamics .random fields have been used with success in several areas of science from a long time ago .recently , information geometry has emerged as an unified approach in the study and characterization of the parametric spaces of random variables by combining knowledge from two distinct mathematical fields : differential geometry and information theory .however , most information geometry studies are focused in the classical assumption of independent samples drawn from exponential family of distributions .little is known about information geometry on random fields , more precisely , about how the geometric properties of the parametric space of these models are characterized .although some related work can be found in the literature , there are still plenty of room for contributions in this field . 
along centuriesmany researchers have studied the concept of time . during our investigations ,some questions that motivated this research were based on the relation between time and complexity : what are the causes to the emergence of complexity in dynamical systems ? is it possible to measure complex behavior along time ?what is time ? why does time seem to flow in one single direction ? how to characterize time in a complex system ?we certainly do not have definitive answers to all these questions , but in an attempt to study the effect of time in the emergence of complexity in dynamical systems , this paper proposes to investigate an information - theoretic approach to understand these phenomena in random fields composed by gaussian variables .our study focuses on the information theory perspective , motivated by the connection between fisher information and the geometric structure of stochastic models , provided by information geometry .the main goal of this paper is to characterize the information geometry of gaussian random fields , through the derivation of the full metric tensor of the model s parametric space .basically , we want to sense each component of this riemannian metric as we perform positive and negative displacements in the inverse temperature `` axis '' in order to measure the geometric deformations induced to the underlying manifold ( parametric space ) .it is known that when the inverse temperature parameter is zero , the model degenerates to a regular gaussian distribution , whose parametric space exhibit constant negative curvature ( hyperbolic geometry ) .it is quite intuitive to think that the shape and topology of the parametric space has a deep connection with the distances between random fields operating in different regimes , which is crucial in characterizing the behavior of such systems .to do so , we propose to investigate how the metric tensor components change while the system navigates through different entropic states . in summary, we want to track all the deformations in the metric tensor from an initial configuration a , in which temperature is infinite ( ) , to a final state b , in which temperature is much lower .additionally , we want to repeat this process of measuring the deformations induced by the metric tensor , but now starting at b and finishing at a. if the sequence of deformations a is different from the sequence of deformations b , it means that the process of taking the random field from an initial lower entropic state a to a final higher entropic state b and bring it back to a induces a natural intrinsic one way direction of evolution : an arrow of time . in practical terms , our proposal consists in using information geometry as a mathematical tool to measure the emergence of an intrinsic notion of time in random fields in which temperature is allowed to deviate from infinity . 
since we are restraining our analysis only to gaussian random fields , which are mathematically tractable ,exact expressions for the components of the full metric tensor are explicitly derived .computational simulations using markov - chain monte carlo algorithms validate our hypothesis that the emergence of an arrow of time in random fields is possibly a consequence of asymmetric deformations in the metric tensor of the statistical manifold when the inverse temperature parameter is disturbed .however , in searching for a solution to this main problem in question , two major drawbacks have to be overcome : 1 ) the information equality does not hold for , which means that we have two different versions of fisher information ; and 2 ) the computation of the expected fisher information ( the components of the metric tensor ) requires knowledge of the inverse temperature parameter for each configuration of the random field .the solution for the first sub - problem consists in deriving not one but two possible metric tensors : one using type - i fisher information and another using type - ii fisher information .for the second sub - problem our solution was to perform maximum pseudo - likelihood estimation in order to accelerate computation by avoiding calculations with the partition function in the joint gibbs distribution .besides , these two sub - problems share an important connection : it has been verified that the two types of fisher information play a fundamental role in quantifying the uncertainty in the maximum pseudo - likelihood estimation of the inverse temperature parameter through the definition of the asymptotic variance of this estimator . in the following ,we describe a brief outline of the paper . in section 2we define the pairwise gaussian - markov random field ( gmrf ) model and discuss some basic statistical properties .in addition , we provide an alternative description of the evolving complex system ( random field ) as a non - deterministic finite automata in which each cell may assume an infinite number of states . in section 3the complete characterization of the metric tensor of the underlying riemannian manifold in terms of fisher information is detailed .section 4 discusses maximum pseudo - likelihood , a technique for estimating the inverse temperature parameter given a single snapshot of the random field .section 5 presents the concept of fisher curve , a geometrical tool to study the evolution of complex systems modelled by random fields by quantifying the deformations induced to the parametric space by the metric tensor .section 6 shows the computational simulations using markov chain monte carlo ( mcmc ) algorithms , the obtained results and some final remarks .finally , section 7 presents the conclusions of the paper .the objective of this section is to introduce the random field model , characterizing some basic statistical properties .gaussian random fields are important models in dealing with spatially dependent continuous random variables , once they provide a general framework for studying non - linear interactions between elements of a stochastic complex system along time .one of the main advantages of these models is the mathematical tractability , which allows us to derive exact closed - form expressions for two relevant quantities in this investigation : 1 ) estimators for the inverse temperature parameter ; and 2 ) the expected fisher information matrix ( the riemannian metric of the underlying parametric space manifold ) . 
according to the hammersley - clifford theorem , which states the equivalence between gibbs random fields ( global models ) and markov random fields ( local models ) it is possible to characterize an isotropic pairwise gaussian random field by a set of local conditional density functions ( lcdf s ) , avoiding computations with the joint gibbs distribution ( due to the partition function ) .an isotropic pairwise gaussian markov random field regarding a local neighborhood system defined on a lattice is completely characterized by a set of local conditional density functions , given by : ^{2 } \right\ } \label{eq : gmrf}\ ] ] with the parameters vector , where and are respectively the expected value ( mean ) and the variance of the random variables in the field , and is the inverse temperature or coupling parameter , which is responsible for controlling the global spatial dependence structure of the system . note that if , the model degenerates to the usual gaussian model for independent random variables .a model belongs to the parametric exponential family if it can be expressed as : where is a vector of natural parameters , is vector of natural sufficient statistics , is an arbitrary function of the parameters and is an arbitrary function of the observations . a model is called curved if the dimensionality of both and ( number of natural sufficient statistics ) is greater than the dimensionality of the parameter vector ( number of parameters in the model ) . for instance , considering a sample of the isotropic pairwise gaussian markov random field model in which denotes the support of the neighborhood system ( i.e , 4 , 8 , 12 , etc . ) , we can express the joint conditional distribution , which is the basis for the definition of the pseudo - likelihood function , as : ^ 2 \right\ } \\\nonumber & = \left ( 2\pi\sigma^2\right)^{-n/2}exp\left\ { -\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left [ x_{i}^{2 } -2x_{i}\mu + \mu^2 - 2\beta\sum_{j\in\eta_i}(x_{i } - \mu)(x_{j } - \mu ) \right .\\ \nonumber & \hspace{6 cm } + \left .\beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}(x_{j } - \mu)(x_{k } - \mu)\right ] \right\ } \\\nonumber & = exp\left\{-\frac{n}{2}log(2\pi\sigma^2 ) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}x_{i}^2 + \frac{\mu}{\sigma^2}\sum_{i=1}^{n } x_{i } - \frac{n\mu^2}{2\sigma^2 } \right . \\ \nonumber & \hspace{4 cm } + \left . 
\frac{\beta}{\sigma^2 } \left [ \sum_{i=1}^{n}\sum_{j\in\eta_i } x_{i}x_{j } - \mu \delta \sum_{i=1}^{n } x_{i } -\mu\sum_{i=1}^{n}\sum_{j\in\eta_i } x_{j } + \delta \mu^2 n \right ] \right\ } \\\nonumber & \times exp\left\ { -\frac{\beta^2}{2\sigma^2}\left [ \sum_{i=1}^{n } \sum_{j\in\eta_i}\sum_{k\in\eta_i}x_{j}x_{k } - \mu \delta \sum_{i=1}^{n } \sum_{j\in\eta_i}x_{j } - \mu \delta \sum_{i=1}^{n}\sum_{k\in\eta_i}x_{k } + \delta^2 \mu^2 n \right ] \right\ } \\ \nonumber \\ \nonumber & = exp\left\ { -\frac{n}{2}\left [ log(2\pi\sigma^2 ) + \frac{\mu^2}{\sigma^2 } \right ]+ \frac{\beta\delta\mu^2 n}{\sigma^2}\left [ 1 - \frac{\beta\delta}{2 } \right ] \right\ } \\\nonumber & \times exp\left\ { \left [ \frac{\mu}{\sigma^2}\left(1 - \beta\delta\right ) \right]\sum_{i=1}^{n}x_{i } -\frac{1}{2\sigma^2}\sum_{i=1}^{n}x_{i}^2 + \frac{\beta}{\sigma^2}\sum_{i=1}^{n}\sum_{j\in\eta_i}x_{i}x_{j } \right .\\ \nonumber & \left .\hspace{4 cm } - \left [ \frac{\beta\mu}{\sigma^2}(1 - \beta\delta)\right]\sum_{i=1}^{n}\sum_{j\in\eta_i}x_{j } - \frac{\beta}{2\sigma^2}\sum_{i=1}^{n}\sum_{j\in\eta_i}\sum_{k\in\eta_i}x_{j}x_{k } \right\}\end{aligned}\ ] ] by observing the above equation , it is possible to identify the following correspondence : , -\frac{1}{2\sigma^2 } , \frac{\beta}{\sigma^2 } , -\left [ \frac{\beta\mu}{\sigma^2}(1 - \beta\delta)\right ] , - \frac{\beta}{2\sigma^2 } \right ) \\ \nonumber \vec{t } = \left ( \sum_{i=1}^{n}x_{i } , \sum_{i=1}^{n}x_{i}^2 , \sum_{i=1}^{n}\sum_{j\in\eta_i}x_{i}x_{j } , \sum_{i=1}^{n}\sum_{j\in\eta_i}x_{j } , \sum_{i=1}^{n}\sum_{j\in\eta_i}\sum_{k\in\eta_i}x_{j}x_{k } \right ) \end{aligned}\ ] ] with and + \frac{\beta\delta\mu^2 n}{\sigma^2}\left [ 1 - \frac{\beta\delta}{2 } \right]\ ] ] note that the model is a member of the curved exponential family , since even though the parametric space is a 3d manifold , the dimensionality of and is more than that ( there is a total of 5 different natural sufficient statistics , more than one for each parameter ) . once again ,notice that for , the mathematical structure is reduced to the traditional gaussian model where both vectors and are 2 dimensional , perfectly matching the dimension of the parameters vector : \right\}\ ] ] where now we have and : \end{aligned}\ ] ] hence , from a geometric perspective , as the inverse temperature parameter in a random field deviates from zero , a complex deformation process transforms the underlying parametric space ( a 2d manifold ) into a completely different structure ( a 3d manifold ) .it has been shown that the geometric structure of regular exponential family distributions exhibit constant curvature .it is also known that from an information geometry perspective , the natural riemannian metric of these probability distribution manifolds is given by the fisher information matrix .however , little is known about information geometry on more general statistical models , such as random field models . in this paper ,our primary objective is to study , from an information theory perspective , how changes in the inverse temperature parameter affect the metric tensor of the gaussian markov random field model .the idea is that by measuring these components ( fisher information ) we are capturing and quantifying an important complex deformation process induced by the metric tensor into the parametric space as temperature is disturbed . 
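the five natural sufficient statistics identified in this section can be accumulated directly from a lattice configuration; the sketch below does so for the moore (8-neighbour) system, with periodic boundaries adopted purely as a convenience of the sketch.

```python
# accumulate the five natural sufficient statistics identified above for a
# configuration x on a rectangular grid, using the moore (8-neighbour) system.
# periodic boundaries (np.roll) are an assumption made to keep the sketch short.
import numpy as np

MOORE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def neighbour_sum(x):
    """sum over the 8 moore neighbours of every site."""
    return sum(np.roll(np.roll(x, dr, axis=0), dc, axis=1) for dr, dc in MOORE)

def sufficient_statistics(x):
    s = neighbour_sum(x)
    return np.array([
        x.sum(),            # sum_i x_i
        (x**2).sum(),       # sum_i x_i^2
        (x * s).sum(),      # sum_i sum_{j in eta_i} x_i x_j
        s.sum(),            # sum_i sum_{j in eta_i} x_j
        (s**2).sum(),       # sum_i sum_{j,k in eta_i} x_j x_k
    ])

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))    # a beta = 0 (independent) configuration
print(sufficient_statistics(x))
```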
our main goal is to investigate how displacements in the inverse temperature parameter direction ( `` axis '' ) affect the metric tensor and as a consequence , the geometry of the parametric space of random fields .the evolution of a random field from a given initial configuration is a dynamical process that can be viewed as the simulation of a non - deterministic cellular automata in which each cell has a probability to accept a new behavior depending on the behaviors of the neighboring cells in the grid .essentially , this is what is done by markov chain monte carlo algorithms to perform random walks throughout the state space of a random field model during a sampling process . in this papera cellular automata is considered as a continuous dynamical system defined on a discrete space ( 2d rectangular lattice ) .the system is governed by local rules defined in terms of the neighborhood of the cells in a way that these laws describe how the cellular automata evolves in time .a discrete - space cellular automata can be represented as a sextuple , where : * is a n - dimensional lattice of the euclidean space , consisting of cells , ; * is a set of states for each cell ( in our model is an infinite continuous set that represents the outcome of a gaussian random variable to express an infinite number of possible behaviors ) ; * an output function maps the state of a cell at a discrete time , denoted by ; * is an initial configuration ( in our model it is a random configuration generated by the outputs of independent gaussian variables ) ; * a neighborhood function yields every cell to a finite sequence so that has distinct cells ( is the support of the neighborhood system ) ; * a transition function describes the rules governing the dynamics of every cell so that : thus , the resulting cellular automata characterization for our particular random field model is given by : is the 2d rectangular lattice , is the real line ( to allow each cell to express an infinite number of possible behaviors ) , an output is performed by sampling from the probability density function of a given cell ( the lcdf of the random field model as given by equation [ eq : gmrf ] ) , the neighborhood function is the usual moore neighborhood ( the 8 nearest neighbors ) and the transition function is defined in terms of the metropolis - hastings acceptance rate .to do so , let be defined as : where both and are two different outputs for a cell . in other words , and denote two possible values for .let be the minimum value between 1 and .then , the transition function is given by : where the parameter used to compute can be written as : \bigg\}\end{aligned}\ ] ] some observations are important at this point .first , the rule for the non - deterministic automata can be put in words as : generate a new candidate for the behavior of the cell , compute and accept the new behavior with probability or keep the old behavior with probability .the crucial part however is the analysis of the transition function in terms of the spatial dependence structure of the random field , controlled by the inverse temperature parameter .note that , when , the second term of equation ( inside the parenthesis ) vanishes , indicating that the transition function favors behaviors that are similar to the global one , indicated in this model by the expected value or simply the parameter . in this scenario ,new behaviors are considered probable if they fit the global one . 
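a minimal sketch of the metropolis-style cell update described above is given below: a candidate value is proposed for one cell and accepted with probability min(1, alpha), where alpha is the ratio of the local conditional densities. the symmetric random-walk proposal and the periodic boundaries are assumptions of this sketch; the text does not fix the proposal mechanism.

```python
# minimal sketch of the metropolis-style cell update described above: a
# candidate value is proposed for one cell and accepted with probability
# min(1, alpha), with alpha the ratio of the local conditional densities.
# the symmetric random-walk proposal and the periodic boundaries are
# assumptions of this sketch.
import numpy as np

def local_mean(x, i, j, mu, beta):
    """conditional mean mu + beta * sum_{j in eta_i} (x_j - mu), moore neighbourhood."""
    n, m = x.shape
    s = sum(x[(i + di) % n, (j + dj) % m] - mu
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    return mu + beta * s

def metropolis_site_update(x, i, j, mu, sigma2, beta, rng, step=1.0):
    m_ij = local_mean(x, i, j, mu, beta)
    old = x[i, j]
    new = old + step * rng.normal()                       # symmetric proposal (assumption)
    log_alpha = ((old - m_ij)**2 - (new - m_ij)**2) / (2.0 * sigma2)
    if np.log(rng.random()) < log_alpha:                  # accept with probability min(1, alpha)
        x[i, j] = new

# one full sweep over a small lattice, starting from an independent configuration
rng = np.random.default_rng(1)
mu, sigma2, beta = 0.0, 1.0, 0.1
x = mu + np.sqrt(sigma2) * rng.normal(size=(32, 32))
for i in range(32):
    for j in range(32):
        metropolis_site_update(x, i, j, mu, sigma2, beta, rng)
```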
on the other hand ,when grows significantly , this second term , which is a measure of local adjustment , becomes increasingly relevant to the transition function . in these situations ,the cells try to adjust their behaviors to the behavior of the nearest neighbors , ignoring the global structure .figure [ fig : automata ] illustrates two distinct configurations regarding the evolution of a gaussian random field .the left one corresponds to the initial random configuration in which the inverse temperature parameter is zero .the right image is the configuration obtained after 200 steps of evolution for starting at zero and with regular and fixed increments of in each iteration .different colors encode different behaviors for the cells in the grid .note the major difference between the two scenarios described above ..,title="fig : " ] .,title="fig : " ] in summary , our main research goal with this paper is to investigate how changes in the inverse temperature parameter affect the transition function of a non - deterministic cellular automata modeled according to a gaussian random field .this investigation is focused in the analysis of fisher information , a measure deeply related to the geometry of the underlying random field model s parametric space , since it provides the basic mathematical tool for the definition of the metric tensor ( natural riemannian metric ) of this complex statistical manifold .in this section , we discuss how information geometry can be applied in the characterization of the statistical manifold of gaussian random fields by the definition of the proper riemannian metric , given by the fisher information matrix .information geometry has been a relevant research area since the pioneering works of shunichi amari in the 80 s , developed by the application of theoretical differential geometry methods to the study of mathematical statistics . since then, this field has been expanded and successfully explored by researchers in a wide range of science areas , from statistical physics and quantum mechanics to game theory and machine learning .essentially , information geometry can be viewed as a branch of information theory that provides a robust and geometrical treatment to most parametric models in mathematical statistics ( belonging to the exponential family of distributions ) . within this context , it is possible to investigate how two distinct independent random variables from the same parametric model are related in terms of intrinsic geometric features .for instance , in this framework it is possible to measure distances between two gaussian random variables and .basically , when we analyse isolated random variables ( that is , they are independent ) , the scenario is extensively known , with the underlying statistical manifolds being completely characterized .however , little is known about the scenario in which we have several variables interacting with each other ( in other words , the inverse temperature parameter is not null ) . 
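the non - deterministic update rule described above can be written down explicitly . the sketch below ( python / numpy ) is a minimal illustration rather than the original implementation : it assumes a symmetric random - walk proposal , so that the acceptance probability reduces exactly to the ratio of the local conditional densities , a second - order ( 8 - neighbour ) system and free boundaries ; the step size and function names are arbitrary choices of this sketch .

import numpy as np

def metropolis_sweep(X, mu, sigma2, beta, step=0.5, rng=None):
    """One Metropolis-Hastings sweep over the interior cells of the field.
    Each cell receives a symmetric random-walk candidate, which is accepted
    with probability min(1, ratio of local conditional Gaussian densities)."""
    rng = rng or np.random.default_rng()
    rows, cols = X.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # conditional mean of the cell: mu + beta * sum of neighbour deviations
            s = X[i-1:i+2, j-1:j+2].sum() - X[i, j]
            m = mu + beta * (s - 8.0 * mu)
            x_old = X[i, j]
            x_new = x_old + step * rng.standard_normal()
            # log ratio of the local conditional densities p(x_new|eta) / p(x_old|eta)
            log_r = ((x_old - m) ** 2 - (x_new - m) ** 2) / (2.0 * sigma2)
            if np.log(rng.uniform()) < log_r:
                X[i, j] = x_new
    return X

when beta = 0 the conditional mean collapses to mu and the rule only favours candidates close to the global behavior , which is precisely the limiting case discussed above .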
in geometric terms , this implies the emergence of an extra dimension in the statistical manifold , and therefore , in the metric tensor . we will see in the following subsections that the emergence of this inverse temperature parameter ( ) strongly affects all the components of the original metric tensor . suppose is a statistical model belonging to the exponential family , where denotes the parameter vector of the model . then , the collection of all admissible vectors defines the parametric space , which has been shown to be a riemannian manifold . moreover , it has been shown that in the gaussian case , the underlying manifold is a surface with constant negative curvature , defining its geometry as hyperbolic . since the parametric space is not an euclidean space , it follows that the manifold is curved . thus , to make the computation of distances and arc lengths in the manifold possible , it is necessary to express an infinitesimal displacement in the manifold in an adaptive or local way . roughly speaking , that is the reason why a manifold must be equipped with a metric tensor , which is the mathematical structure responsible for the definition of inner products in the local tangent spaces . with the metric tensor it is possible to express the square of an infinitesimal displacement in the manifold , , as a function of an infinitesimal displacement in the tangent space , which in the case of a 2d manifold is given by a vector . assuming a matrix notation we have : where the matrix of coefficients , , is the metric tensor . if the metric tensor is a positive definite matrix , the manifold is known as riemannian . note that in the euclidean case , where the metric tensor is the identity matrix ( since the space is flat ) , we have the well - known pythagorean relation . since its definition , in the works of sir ronald fisher , the concept of fisher information has been present in a ubiquitous manner throughout mathematical statistics , playing an important role in several applications , from numerical estimation methods based on the newton - raphson iteration to the definition of lower bounds in unbiased estimation ( the cramer - rao lower bound ) . more recently , with the development of information geometry , another fundamental role of fisher information in statistical models has been discovered : it defines intrinsic geometric properties of the parametric space of a model , by characterizing the metric tensor of the respective manifold . in other words , the fisher information matrix is the natural riemannian metric of the manifold ( parametric space ) , given a statistical model . roughly speaking , fisher information can be thought of as a likelihood analog to entropy , which is often used as a measure of uncertainty , but which is based on probability , not likelihood . basically , in the context of information theory , fisher information measures the amount of information a random sample conveys about an unknown parameter . let be a probability density function where is the parametric vector . the fisher information matrix , which is the natural riemannian metric of the parametric space , is defined as : it is known from statistical inference theory that the information equality holds for independent observations from the regular exponential family of distributions .
in other words , it is possible to compute the expected fisher information matrix of a model by two different but equivalent ways ( since the integration and differentiation operators can be interchangeable ) , defining the condition known as the information equality : = -e\left [ \frac{\partial^2}{\partial\theta_i \partial\theta_j } log~p(x ; \vec{\theta } ) \right]\ ] ] in this investigation we replace by the local conditional density function of an isotropic pairwise gaussian random field ( equation [ eq : gmrf ] ) . more details on how this lcdf is used to build the pseudo - likelihood function are presented in the next sections of the paper . however , what we observe is that , given the intrinsic spatial dependence structure of random field models , induced by the existence of an inverse temperature parameter , information equality is not a natural condition . in general ,when the inverse temperature parameter gradually drifts apart from zero ( temperature deviates from infinity ) , this notion of information `` equilibrium '' fails .thus , in dealing with random field models , we have to consider two different versions of fisher information , from now on denoted by type - i ( due to the first derivative operator in the log likelihood function ) and type - ii ( due to the second derivative operator ) . eventually ,when certain conditions are satisfied , these two values of information converge to a unique bound .one trivial condition for the information equality is to have , which means an infinite temperature ( there is no induced spatial dependence structure since the variables are independent and the model degenerates to a regular exponential family density ) .therefore , in random fields , these two versions of fisher information play distinct roles , especially in quantifying the uncertainty in the estimation of the inverse temperature parameter , as we will see in future sections . in this sectionwe present the derivation of all components of the metric tensor in an isotropic pairwise gaussian markov random field model .the complete characterization of both versions of the metric tensor , using type - i and type - ii fisher information is discussed in details . for purposes of notation, we define these tensors as : and where is the type - i fisher information matrix and is the type - ii fisher information matrix . in the following ,we proceed with the definition of the type - i fisher information matrix .the first component of is given by : \ ] ] where is the replaced by the lcdf of the gaussian random field , given by equation . 
plugging the equations and computing the derivatives leads to : ^ 2 \right\ } \label{eq : mu_mu_1 } \\\nonumber & = \frac{1}{\sigma^2}\left(1 - \beta\delta \right)^2 e\left\ { \frac{1}{\sigma^2 } \left [ \left(x_i - \mu\right)^2 - 2\beta\sum_{j\in\eta_i}\left ( x_i - \mu \right)\left ( x_j - \mu \right ) \right .\\ \nonumber & \hspace{4 cm } \left .+ \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\left ( x_j - \mu \right)\left(x_k - \mu \right ) \right ] \right\ } \\ \nonumber & = \frac{\left(1 - \beta\delta \right)^2}{\sigma^2 } \left [ 1 - \frac{1}{\sigma^2}\left ( 2\beta\sum_{j\in\eta_i}\sigma_{ij } - \beta^2\sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk } \right ) \right]\end{aligned}\ ] ] where denotes the support of the neighborhood system ( in our case since we have a second - order system ) , denotes the covariance between the central variable and one of its neighbors and denotes the covariance between two variables and in the neighborhood .the second component of the metric tensor is : \ ] ] which leads to : ^ 3 \right\ } \\\nonumber & \hspace{3 cm } -\frac{(1 - \beta\delta)}{2\sigma^4}e\left\ { \left ( x_i - \mu \right ) - \beta\sum_{j\in\eta_i}\left(x_j - \mu \right ) \right\ } \end{aligned}\ ] ] note that second term of equation is zero since : - \beta\sum_{j\in\eta_i}e\left [ x_j - \mu\right ] = 0 - 0 = 0\ ] ] the expansion of the first term in leads to : ^ 3 \right\ } & = e\left [ \left ( x_i - \mu \right)^3 \right ] \\\nonumber & - 3\beta\sum_{j\in\eta_i}e\left [ ( x_i - \mu ) ( x_i - \mu ) ( x_j - \mu ) \right ] \\\nonumber & + 3\beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}e\left [ ( x_i - \mu ) ( x_j - \mu ) ( x_k - \mu ) \right ] \\\nonumber & - \beta^3 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}e\left ( ( x_j - \mu ) ( x_k - \mu ) ( x_l - \mu ) \right]\end{aligned}\ ] ] note that the first term of is zero for gaussian random variables since every central moment of odd order is null . according to the isserlis theorem , it is trivial to see that in fact all the other terms are null .therefore , .we now proceed to the third component of , defined by : \ ] ] replacing the equations and manipulating the resulting expressions leads to : \\ \nonumber & - 2\beta\sum_{j\in\eta_i}\sum_{k\in\eta_i}e\left [ ( x_i - \mu ) ( x_j - \mu ) ( x_k - \mu ) \right ] \\\nonumber & + \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}e\left [ ( x_j - \mu ) ( x_k - \mu ) ( x_l - \mu ) \right ] \bigg\}\end{aligned}\ ] ] once again , all the higher - order moments are a product of an odd number of gaussian random variables so by the isserlis s theorem they all vanish , resulting in . 
for the next component , , we have : = 0\ ] ] since and changing the order of the product does not affect the expected value .proceeding to the fifth component of the metric tensor we have to compute : \ ] ] which is given by : ^ 2 \right\ } \\\nonumber & = \frac{1}{4\sigma^4 } - \frac{1}{2\sigma^6}e\left\ { \left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right]^2 \right\ } \\ \nonumber & \hspace{1 cm } + \frac{1}{4\sigma^8}e\left\ { \left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right]^4 \right\}\end{aligned}\ ] ] in order to simplify the calculations , we expand each one of the expected values separately .the first expectation leads to the following equality : ^ 2 \right\ } = \sigma^2 - 2\beta\sum_{j\in\eta_i}\sigma_{ij } + \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk}\ ] ] in the expansion of the second expectation term note that : ^ 4 \right\ } = e\left [ ( x_i - \mu)^4 \right ] - 4\beta\sum_{j\in\eta_i}e\left [ ( x_i - \mu)^3 ( x_j - \mu ) \right ] \nonumber \\ & \hspace{3 cm } + 6\beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}e\left [ ( x_i - \mu)^2 ( x_j - \mu ) ( x_k - \mu ) \right ] \\\nonumber & \hspace{3 cm } - 4\beta^3 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i } e\left[(x_i - \mu ) ( x_j - \mu ) ( x_k - \mu ) ( x_l - \mu ) \right ] \\ \nonumber & \hspace{3 cm } + \beta^4 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}\sum_{m\in\eta_i}e\left[(x_j - \mu ) ( x_k - \mu ) ( x_l - \mu ) ( x_m - \mu ) \right]\end{aligned}\ ] ] leading to five novel expectation terms . using the isserlis theorem for gaussian distributed random variables , it is possible to express the higher - order moments as functions of second - order moments .therefore , after some algebra we have : + \frac{1}{\sigma^8}\left [ 3\beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{ij}\sigma_{ik } \right .\\ \nonumber \\ \nonumber & \hspace{2 cm } \left .- \beta^3 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}\left ( \sigma_{ij}\sigma_{kl } + \sigma_{ik}\sigma_{jl } + \sigma_{il}\sigma_{jk } \right ) \right .\\ \nonumber & \hspace{2 cm } \left .+ \beta^4 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}\sum_{m\in\eta_i}\left ( \sigma_{jk } \sigma_{lm } + \sigma_{jl}\sigma_{km } + \sigma_{jm}\sigma_{kl } \right ) \right ] \end{aligned}\ ] ] the next component of the metric tensor is : \ ] ] which is given by : \times \right . \\ \nonumber & \hspace{3 cm } \left . \left[ \frac{1}{\sigma^2}\left((x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right)\left ( \sum_{j\in\eta_i}(x_j - \mu ) \right ) \right ] \right\ } \\\nonumber & = -\frac{1}{2\sigma^4 } e\left\ { \left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right]\left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \right\ } \\ \nonumber & \hspace{2 cm } + \frac{1}{2\sigma^6}e\left\{\left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right]^3 \left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \right\}\end{aligned}\ ] ] the first expectation can be simplified to : \left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \right\ } = \sum_{j\in\eta_i}\sigma_{ij } - \beta\sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk}\ ] ] the expansion of the second expectation leads to : ^ 3 \left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \right\ } = \\ \nonumber & e\left\ { \left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \left [ ( x_i - \mu)^3 - 3\beta\sum_{j\in\eta_i}(x_i - \mu)^2 ( x_j - \mu ) \right .. 
\\ \nonumber \\ \nonumber & \hspace{4 cm } \left .+ 3\beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}(x_i - \mu)(x_j - \mu)(x_k - \mu ) \right .\\ \nonumber & \hspace{5 cm } \left .-\beta^3 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}(x_j - \mu)(x_k - \mu)(x_l - \mu ) \right ] \right\}\end{aligned}\ ] ] thus , by applying the isserlis equation to compute the higher - order cross moments as functions of second - order moments , and after some algebraic manipulations , we have : - \frac{1}{2\sigma^6}\left [ 6\beta\sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{ij}\sigma_{ik } \right . \\ \nonumber \\ \nonumber & \hspace{3 cm } \left .- 3 \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}\left ( \sigma_{ij}\sigma_{kl } + \sigma_{ik}\sigma_{jl } + \sigma_{il}\sigma_{jk } \right ) \right .\\ \nonumber & \hspace{3 cm } \left .+ \beta^3 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sum_{l\in\eta_i}\sum_{m\in\eta_i } \left ( \sigma_{jk}\sigma_{lm } + \sigma_{jl}\sigma_{km } + \sigma_{jm}\sigma_{kl } \right ) \right]\end{aligned}\ ] ] moving forward to the next components , it is easy to verify that and , since the order of the products in the expectation is irrelevant for the final result . finally , the last component of the metric tensor is defined as : \ ] ] which is given by : ^ 2 \left [ \sum_{j\in\eta_i}(x_j - \mu ) \right]^2\right\ } \\\nonumber & = \frac{1}{\sigma^4 } e\left\ { \left [ ( x_i - \mu)^2 - 2\beta \sum_{j\in\eta_i } ( x_i - \mu)(x_j - \mu ) + \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i } ( x_j - \mu)(x_k - \mu ) \right ] \times \right . \\ \nonumber & \left .\hspace{5 cm } \left [ \sum_{j\in\eta_i}\sum_{k\in\eta_i } ( x_j - \mu)(x_k - \mu ) \right ] \right\ } \\ \nonumber & = \frac{1}{\sigma^4 } e \left\ { \sum_{j\in\eta_i}\sum_{k\in\eta_i}(x_i - \mu)(x_i - \mu)(x_j - \mu)(x_k - \mu ) \right .\\ \nonumber & \hspace{2 cm } \left . - 2\beta\sum_{j\in\eta_i}\sum_{k\in\eta_i } \sum_{l\in\eta_i}(x_i - \mu)(x_j - \mu)(x_k - \mu)(x_l - \mu ) \right .\\ \nonumber & \hspace{3 cm } \left .+ \beta^2 \sum_{j\in\eta_i } \sum_{k\in\eta_i } \sum_{l\in\eta_i } \sum_{m\in\eta_i}(x_j - \mu)(x_k - \mu ) ( x_l - \mu ) ( x_m - \mu ) \right\}\end{aligned}\ ] ] using the isserlis formula and after some algebra , we have : \end{aligned}\ ] ] therefore , we conclude that the type - i fisher information matrix of an isotropic pairwise gaussian random field model has the following structure : where , , and are the coefficients used to define how we compute an infinitesimal displacement in the manifold ( parametric space ) around the point : with this we have completely characterized the type - i fisher information matrix of the isotropic pairwise gaussian random field model ( metric tensor for the parametric space ) .note that , from the structure of the fisher information matrix we see that the parameter is orthogonal to both and . in the following ,we proceed with the definition of the type - ii fisher information matrix . in the following ,we provide a brief discussion based on about the information equality condition , which is a valid property for several probability density function belonging to the exponential family . 
for purposes of simplificationwe consider the uniparametric case , knowing that the extension to multiparametric models is quite natural .let be a random variable with a probability density function .note that : \ ] ] by the product rule we have : = -\frac{1}{p(x;\theta)^2}\left [ \frac{\partial}{\partial\theta}p(x;\theta ) \right]^2 + \frac{1}{p(x;\theta)}\frac{\partial^2}{\partial\theta^2}p(x;\theta)\ ] ] which is leads to ^ 2 + \frac{1}{p(x;\theta)}\frac{\partial^2}{\partial\theta^2}p(x;\theta)\ ] ] rearranging the terms and applying the expectation operator gives us : = -e\left [ \frac{\partial^2}{\partial\theta^2 } log~p(x;\theta ) \right ] + e\left [ \frac{1}{p(x;\theta)}\frac{\partial^2}{\partial\theta^2}p(x;\theta ) \right]\ ] ] by the definition of expected value , the previous expression can be rewritten as : = -e\left [ \frac{\partial^2}{\partial\theta^2 } log~p(x;\theta ) \right ] + \int \frac{\partial^2}{\partial\theta^2 } p(x;\theta ) dx\ ] ] under certain regularity conditions , it is possible to differentiate under the integral sign by interchanging differentiation and integration operators , which implies in : leading to the information equality condition . according to , these regularity conditions can fail for two main reasons : 1 ) the density function may not tail off rapidly enough to ensure the convergence of the integral ; 2 ) the range of integration ( the set in for which is non - zero ) may depend on the parameter .however , note that in the general case the integral defined by equation is exactly the difference between the two types of fisher information , or in a more geometric perspective , between the respective components of the metric tensors and : - \\ \nonumber & \left\ { - e\left [ \frac{\partial^2}{\partial\theta_i \partial\theta_j } log~p(x;\vec{\theta } ) \right ] \right\ } \\ \nonumber \\ \nonumber & = i_{\theta_i \theta_j}^{(1)}(\vec{\theta } ) - i_{\theta_i \theta_j}^{(2)}(\vec{\theta})\end{aligned}\ ] ] we will see in the experiments that these measures ( fisher information ) , more precisely and , play an important role in signaling changes in the system s entropy along an evolution of the random field . by using the second derivative of the log likelihood function , we can compute an alternate metric tensor , given by the type - ii fisher information matrix .the first component of the tensor is : \ ] ] which is given by : \right\ } = \frac{1}{\sigma^2}\left ( 1 - \beta\delta \right)^2\end{aligned}\ ] ] where is the size of the neighborhood system .the second component is defined by : \ ] ] resulting in = \frac{1}{\sigma^4}(1 - \beta\delta)\left [ 0 - 0 \right ] = 0\end{aligned}\ ] ] similarly , the third component of the metric tensor is null , since we have : \\ \nonumber & = \frac{1}{\sigma^2}e\left\ { \delta \left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right ] + ( 1 - \beta\delta)\left [ \sum_{j\in\eta_i}(x_j - \mu ) \right ] \right\ } \\ \nonumber & = 0 + 0 = 0\end{aligned}\ ] ] proceeding to the fourth component , it is straightforward to see that , since changing the order of the partial derivative operators is irrelevant to the final result . 
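the equality can be checked numerically in the trivial case of independent gaussian observations ( beta = 0 ) , where both types of fisher information about the mean equal n / sigma^2 . the monte carlo sketch below is purely illustrative ; all names and values are arbitrary .

import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, trials = 0.0, 2.0, 50, 20000

score_sq, neg_second = [], []
for _ in range(trials):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    score = np.sum(x - mu) / sigma2          # d/dmu of the log-likelihood
    score_sq.append(score ** 2)              # type-I: square of the first derivative
    neg_second.append(n / sigma2)            # type-II: minus the second derivative (constant here)

print("type-I  estimate :", np.mean(score_sq))    # both should be close to n / sigma2 = 25
print("type-II estimate :", np.mean(neg_second))

for the random field with beta different from zero , the analogous computation with the local conditional density in place of the i.i.d . likelihood is exactly where the two estimates start to diverge .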
for now , note that both and are approximations to ( equation [ eq : mu_mu_1 ] ) and ( equation [ eq : sigma_sigma_1 ] ) neglecting quadratic and cubic terms of the inverse of the parameter , respectively .thus , we proceed directly to the fifth component , given by : \\ \nonumber & = - e \left\ { \frac{\partial}{\partial\sigma^2}\left [ -\frac{1}{2\sigma^2 } + \frac{1}{2\sigma^4 } \left ( x_i - \mu -\beta\sum_{j\in\eta_i}(x_j - \mu ) \right)^2 \right ] \right\ } \\ \nonumber & = - e \left\ { \frac{1}{2\sigma^4 } - \frac{1}{\sigma^6}\left [ ( x_i - \mu ) - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right]^2 \right\ } \\\nonumber & = \frac{1}{2\sigma^4 } - \frac{1}{\sigma^6}\left [ 2\beta\sum_{j\in\eta_i } \sigma_{ij } - \beta^2 \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk } \right]\end{aligned}\ ] ] the next component of the metric tensor is : \\ \nonumber & = - e \left\ { \frac{\partial}{\partial\sigma^2}\left [ \frac{1}{\sigma^2 } \left ( x_i - \mu - \beta\sum_{j\in\eta_i}(x_j - \mu ) \right)\left ( \sum_{j\in\eta_i}(x_j - \mu ) \right ) \right ] \right\ } \\ \nonumber & = \frac{1}{\sigma^4}\left [ \sum_{j\in\eta_i}\sigma_{ij } - \beta \sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk } \right]\end{aligned}\ ] ] which is , again , an approximation to ( equation [ eq : sigma_beta_1 ] ) obtained by discarding higher - order functions of the parameters and .it is straightforward to see that the next two components of are identical to their symmetric counterparts , that is , and .finally , we have the last component of the fisher information matrix : \end{aligned}\ ] ] which is given by : \right\ } \\\nonumber & = \frac{1}{\sigma^2}e\left [ \left ( \sum_{j\in\eta_i}(x_j - \mu ) \right ) \left ( \sum_{j\in\eta_i}(x_j - \mu ) \right ) \right ] \\\nonumber & = \frac{1}{\sigma^2}\sum_{j\in\eta_i}\sum_{k\in\eta_i}\sigma_{jk}\end{aligned}\ ] ] once again , note that is an approximation to ( equation [ eq_beta_beta_1 ] ) where higher - order functions of the parameters and are suppressed .it is clear that the difference between the components of the two metric tensors and is significant when the inverse temperature parameter is not null . on the other hand ,the global structure of is essentially the same of , implying that the definition of is identical to the previous case , but with different coefficients for , , and .note also that when the inverse temperature parameter is fixed at zero , both metric tensors converge to : where is a constant defining the support of the neighborhood system .this is exactly the fisher information matrix of a traditional gaussian random variable ( excluding the third row and column ) , as it would be expected . in order to simplify the notations and also to make computations faster , the expressions for the components of the metric tensors and can be rewritten in a matrix - vector form using a tensor notation .let be the covariance matrix of the random vectors , obtained by lexicographic ordering the local configuration patterns for a snapshot of the system ( a static configuration ) . 
in this work ,we choose a second - order neighborhood system , making each local configuration pattern a patch .thus , since each vector has 9 elements , the resulting covariance matrix is .let be the sub - matrix of dimensions obtained by removing the central row and central column of ( these elements are the covariances between the central variable and each one of its neighbors ) .also , let be the vector of dimensions formed by all the elements of the central row of , excluding the middle one ( which denotes the variance of actually ) .[ fig : cov_matrix ] illustrates the process of decomposing the covariance matrix into the sub - matrix and the vector in an isotropic pairwise gmrf model defined on a second - order neighborhood system ( considering the 8 nearest neighbors ) . into and on a second - order neighborhood system ( ) .* by expressing the components of the metric tensors in terms of kronocker products , it is possible to compute fisher information in a efficient way during computational simulations . ]given the above , we can express the elements of the fisher information matrix in a tensorial form using kronecker products .the following definitions provide a computationally efficient way to numerically evaluate exploring tensor products .let an isotropic pairwise gaussian markov random field be defined on a lattice with a neighborhood system of size ( usual choices for are even values : 4 , 8 , 12 , 20 , 24 , ... ) .assuming that the set denotes the global configuration of the system at iteration , and both and are defined according to figure [ fig : cov_matrix ] , the components of the metric tensor ( fisher information matrix ) can be expressed as : \ ] ] \\ \nonumber & + \frac{1}{\sigma^8}\left [ 3\beta^2 \left\| \vec{\rho } \otimes \vec{\rho } \right\|_{+ } - 3 \beta^3 \left\| \vec{\rho } \otimes \sigma_{p}^{- } \right\|_{+ } + 3\beta^4 \left\| \sigma_{p}^{- } \otimes \sigma_{p}^{- } \right\|_{+ } \right ] \nonumber\end{aligned}\ ] ] \\ \nonumber & - \frac{1}{2\sigma^6 } \left [ 6\beta \left\| \vec{\rho } \otimes \vec{\rho } \right\|_{+ } - 9 \beta^2 \left\| \vec{\rho } \otimes \sigma_{p}^{- } \right\|_{+ } + 3\beta^3 \left\| \sigma_{p}^{- } \otimes \sigma_{p}^{- } \right\|_{+ } \right ] \nonumber\end{aligned}\ ] ] \ ] ] where denotes the summation of all the entries of the vector / matrix ( not to be confused with the norm ) and denotes the kronecker ( tensor ) product .similarly , we can express the components of the metric tensor in this form .let an isotropic pairwise gaussian markov random field be defined on a lattice with a neighborhood system of size ( usual choices for are even values : 4 , 8 , 12 , 20 , 24 , ... ) .assuming that the set denotes the global configuration of the system at iteration , and both and are defined according to figure [ fig : cov_matrix ] , the components of the metric tensor ( fisher information matrix ) can be expressed as : \ ] ] \ ] ] from the above equations it is clear to see that the components of are approximations to the components of , obtained by discarding the higher - order terms ( the cross kronecker products vanish ) .entropy is one of the most ubiquitous concepts in science , with applications in a large number of research fields . 
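a possible implementation of this tensor notation is sketched below ( python / numpy ) : the covariance matrix of the 3x3 local configuration patterns is estimated from a snapshot , decomposed into the sub - matrix and the vector described in the text , and the type - ii components are evaluated from the closed - form expressions derived above . the patch ordering , function names and the use of the sample covariance are assumptions of this sketch .

import numpy as np

def patch_covariance(X):
    """Covariance matrix Sigma_p of the 3x3 local configuration patterns,
    the 8x8 sub-matrix Sigma_p_minus (central row/column removed) and the
    vector rho of covariances between the centre and its 8 neighbours."""
    rows, cols = X.shape
    patches = np.array([X[i-1:i+2, j-1:j+2].ravel()
                        for i in range(1, rows - 1)
                        for j in range(1, cols - 1)])
    Sigma_p = np.cov(patches, rowvar=False)            # 9 x 9
    centre = 4                                         # position of x_i in the ravelled patch
    rho = np.delete(Sigma_p[centre, :], centre)        # 8-vector of sigma_ij
    Sigma_p_minus = np.delete(np.delete(Sigma_p, centre, 0), centre, 1)
    return Sigma_p, Sigma_p_minus, rho

def type2_fisher(sigma2, beta, rho, Sigma_p_minus, delta=8):
    """Type-II Fisher information components of the isotropic pairwise GMRF
    in tensor form; ||.||_+ is simply the sum of all entries."""
    s_rho = rho.sum()                # || rho ||_+
    s_spm = Sigma_p_minus.sum()      # || Sigma_p^- ||_+
    I_mu_mu = (1.0 - beta * delta) ** 2 / sigma2
    I_sig_sig = 1.0 / (2.0 * sigma2 ** 2) - (2.0 * beta * s_rho - beta ** 2 * s_spm) / sigma2 ** 3
    I_sig_beta = (s_rho - beta * s_spm) / sigma2 ** 2
    I_beta_beta = s_spm / sigma2
    return I_mu_mu, I_sig_sig, I_sig_beta, I_beta_beta

the type - i components follow the same pattern : since the sum of all entries of a kronecker product factorizes , each term of the form || a (x) b ||_+ reduces to the product of the individual sums , so no explicit kronecker product ever needs to be formed .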
in information theory ,shannon entropy is the most widely know statistical measure related to a random variable , since it often characterizes a degree of uncertainty about any source of information .similarly , in statistical physics , entropy plays an important role in thermodynamics , being a relevant measure in the the study and analysis of complex dynamical systems . in this paper , we try to understand entropy in a more geometrical perspective , by means of its relation to fisher information .our definition of entropy in a gaussian random field is done by repeating the same process employed to derive the fisher information matrices . knowing that the entropy of random variable x is defined by the expected value of self - information , given by , we have the following definition .let a pairwise gmrf be defined on a lattice with a neighborhood system .assuming that the set of observations denote the global configuration of the system at time , then the entropy for this state is given by : & = \frac{1}{2}\left [ log\left ( 2\pi\sigma^2 \right ) + 1\right ] \\\nonumber & - \frac{1}{\sigma^2 } \left [ \beta\sum_{j \in \eta_i}\sigma_{ij } - \frac{\beta^2}{2}\sum_{j \in \eta_i}\sum_{k \in \eta_i}\sigma_{jk } \right]\end{aligned}\ ] ] note that , for the expression is reduced to the entropy of a simple gaussian random variable , as it would be expected . by using the tensor notation ,we have : = h_{g } - \left [ \frac{\beta}{\sigma^{2}}\left\| \vec{\rho } \right\|_{+ } - \frac{\beta^{2}}{2 } i_{\beta\beta}^{(2)}(\vec{\theta } ) \right ] \label{eq : entropy}\end{aligned}\ ] ] where denotes the entropy of a gaussian random variable with mean and variance , and is a component of the fisher information matrix . in other words ,entropy is related to fisher information .we will see in the experimental results that the analysis of fisher information can bring us insights in predicting whether the entropy of the system is increasing or decreasing .a fundamental step in our simulations is the computation of the fisher information matrix ( metric tensor components ) and entropy , given an output of the random field model .all these measures are function of the model parameters , more precisely , of the variance and the inverse temperature . in all the experiments conducted in this investigation ,the gaussian random field parameters and are both estimated by the sample mean and variance , respectively , using the maximum likelihood estimatives . however , maximum likelihood estimation is intractable for the inverse temperature parameter estimation ( ) , due to the existence of the partition function in the joint gibbs distribution .an alternative , proposed by besag , is to perform maximum pseudo - likelihood estimation , which is based on the conditional independence principle .the basic idea with this proposal is to replace the independence assumption by a more flexible conditional independence hypothesis , allowing us to use the local conditional density functions of the random field model in the definition of a likelihood function , called pseudo - likelihood .it has been shown that maximum likelihood estimators are asymptotically efficient , that is , the uncertainty in the estimation of unknown parameters is minimized . 
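in the same tensor notation , the entropy of a configuration can be evaluated directly from rho and sigma_p^- as estimated in the previous sketch ; a minimal , illustrative version reusing those hypothetical helpers :

import numpy as np

def gmrf_entropy(sigma2, beta, rho, Sigma_p_minus):
    """Entropy of the current configuration in tensor form; it reduces to
    the entropy of a plain Gaussian variable, H_g, when beta = 0."""
    H_g = 0.5 * (np.log(2.0 * np.pi * sigma2) + 1.0)
    I_beta_beta_2 = Sigma_p_minus.sum() / sigma2        # type-II component
    return H_g - (beta / sigma2) * rho.sum() + 0.5 * beta ** 2 * I_beta_beta_2

this makes the link between entropy and fisher information explicit in code : the only corrections to the gaussian entropy are the two tensor sums that also appear in the metric tensor .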
in order to quantify the uncertainty in the estimation of the inverse temperature parameter , it is necessary to compute the asymptotic variance of the maximum pseudo - likelihood estimator . we will see later that the components and of both tensors and are crucial in quantifying this uncertainty . first , we need to define the pseudo - likelihood function of a random field model . let an isotropic pairwise markov random field model be defined on a rectangular lattice with a neighborhood system . assuming that denotes the set corresponding to the observations at a time ( a snapshot of the random field ) , the pseudo - likelihood function of the model is defined by : where . the pseudo - likelihood function is the product of the local conditional density functions throughout the field viewed as a function of the model parameters . for an isotropic pairwise gaussian markov random field , the log pseudo - likelihood function is given by plugging equation into equation : log~l\left ( \vec{\theta } ; \mathbf{x } \right ) = -\frac{n}{2 } log\left ( 2\pi\sigma^2 \right ) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left [ x_{i } - \mu - \beta\sum_{j \in \eta_i}\left ( x_{j } - \mu \right ) \right]^{2 } \label{eq : gmrf_pl}\ ] ] by differentiating equation with respect to and properly solving the pseudo - likelihood equation , we obtain the following estimator for the inverse temperature parameter : \hat{\beta}_{mpl } = \frac{\displaystyle\sum_{i=1}^{n } \left ( x_{i } - \mu \right ) \left [ \displaystyle\sum_{j \in \eta_i}\left ( x_{j } - \mu \right ) \right ] }{\displaystyle\sum_{i=1}^{n}\left [ \displaystyle\sum_{j \in \eta_i}\left ( x_{j } - \mu \right ) \right]^{2 } } \label{eq : betampl}\ ] ] assuming that the random field is defined on a rectangular 2d lattice where the cardinality of the neighborhood system is fixed ( ) , the maximum pseudo - likelihood estimator for the inverse temperature parameter can be rewritten as : which means that we can also compute this estimate from the covariance matrix of the configuration patterns . in other words , given a snapshot of the system at an instant , , all the measures we need are based solely on the matrix . therefore , in terms of information geometry , a sequence of gaussian random field outputs in time can be summarized into a sequence of covariance matrices . in computational terms , this means a huge reduction in the volume of data .
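the estimator above only requires one pass over the snapshot . the sketch below ( python / numpy ) follows the reconstructed expression for the second - order , 8 - neighbour system , with the mean estimated beforehand by the sample mean ; border handling and names are choices of this sketch .

import numpy as np

def beta_mpl(X, mu):
    """Maximum pseudo-likelihood estimate of the inverse temperature for a
    second-order (8-neighbour) isotropic pairwise GMRF; border sites skipped."""
    num = den = 0.0
    rows, cols = X.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            d = X[i, j] - mu                                    # central deviation
            s = X[i-1:i+2, j-1:j+2].sum() - X[i, j] - 8.0 * mu  # sum of neighbour deviations
            num += d * s
            den += s * s
    return num / den

equivalently , using the covariance matrix of the configuration patterns , essentially the same number is obtained as rho.sum() / Sigma_p_minus.sum() ( up to finite - sample differences between the two estimators ) , which is the rewriting referred to in the text .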
in our computational simulations , we fix initial values for the parameters , and , and at each iteration an infinitesimal displacement in the inverse temperature ( axis ) is performed .a new random field output is generated for each iteration and in order to avoid any degree of supervision throughout the process of computing the entropy and both fisher information metrics of each configuration , the unknown model parameters are properly estimated from data .however , in estimating the inverse temperature parameter of random fields via maximum pseudo - likelihood , a relevant question emerges : how to measure the uncertainty in the estimation of ?is it possible to quantify this uncertainty ?we will see that both versions of fisher information play a central role in answering this question .it is known from the statistical inference literature that both maximum likelihood and maximum pseudo - likelihood estimators share an important property : asymptotic normality .it is possible , therefore , to characterize their behavior in the limiting case by knowing the asymptotic variance .a limitation from maximum pseudo - likelihood approach is that there is no result proving that this method is asymptotically efficient ( maximum likelihood estimators have been shown to be asymptotically efficient since in the limiting case their variance reaches the cramer - rao lower bound ) .it is known that the asymptotic variance of the inverse temperature parameter in an isotropic pairwise gmrf is given by : ^ 2 } = \frac{1}{i_{\beta\beta}^{(2)}(\vec{\theta } ) } + \frac{1}{i_{\beta\beta}^{(2)}(\vec{\theta})^{2}}\left(i_{\beta\beta}^{(1)}(\vec{\theta } ) - i_{\beta\beta}^{(2)}(\vec{\theta } ) \right)\ ] ] showing that in the information equilibrium condition , that is , , we have the traditional cramer - rao lower bound , given by the inverse of the fisher information .a very simple interpretation of this equation indicates that the uncertainty in the estimation of the inverse temperature parameter is reduced when is minimized and is maximized .essentially , it means that most local patterns must be aligned to the expected global behavior and , in average , the local likelihood functions should not be flat ( indicating that there is a small number of candidates for ) . by computing , and , we have access to three important information theoretic measures regarding a global configuration of the random field .we call the 3d space generated by these 3 measures , the information space . a point in this spacerepresents the value of that specific component of the metric tensor , , when the system s entropy value is .this allows us to define the fisher curves of the system .let an isotropic pairwise gmrf model be defined on a lattice with a neighborhood system and be a sequence of outcomes ( global configurations ) produced by different values of ( inverse temperature parameters ) for which .the fisher curve from to is defined as the parametric curve that maps each configuration to a point in the information space : where and denote the components of the metric tensors and , respectively , and denotes the entropy . the motivation behind the fisher curve is the development of a computational tool for the study and characterization of random fields . 
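the asymptotic variance expression above translates directly into code ; the helper below is illustrative and reuses the type - i and type - ii beta components computed elsewhere .

def beta_asymptotic_variance(I_bb_1, I_bb_2):
    """Asymptotic variance of the maximum pseudo-likelihood estimator of beta.
    When the two information values coincide (information equality) this
    reduces to the usual Cramer-Rao bound 1 / I_bb."""
    return 1.0 / I_bb_2 + (I_bb_1 - I_bb_2) / I_bb_2 ** 2

in practice , a point of the information space is then simply the triple ( I_bb_1 , I_bb_2 , entropy ) recorded at the current value of beta .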
basically , the fisher curve of a system is the parametric curve embedded in this information - theoretic space obtained by varying the inverse temperature parameter from an initial value to a final value .the resulting curve provides a geometrical interpretation about how the random field evolves from a lower entropy configuration a to a higher entropy configuration b ( or vice - versa ) , since the fisher information plays an important role in providing a natural metric to the riemannian manifold of a statistical model .we will call the path from a global system configuration a to a global system configuration b as the _ fisher curve _ ( from a to b ) of the system , denoted by . instead of using the notion of time as parameter to build the curve , we parametrize by the inverse temperature parameter . in geometrical terms, we are trying to measure the deformation in the metric tensor of the stochastic model ( local geometric property ) induced by a displacement in the inverse temperature parameter direction .we are especially interested in characterizing random fields by measuring and quantifying their behavior as the inverse temperature parameter deviates from zero , that is , when temperature leaves infinity . as mentioned before, the isotropic pairwise gmrf model belongs to the regular exponential family of distributions when the inverse temperature parameter is zero ( ) . in this case, it has been shown that the geometric structure , whose natural riemannian metric is given by the fisher information matrix ( metric tensor ) , has constant negative curvature ( hyperbolic geometry ) .besides , fisher information can be measured by two different but equivalent ways ( information equality ) . as the inverse temperature increases , the model starts to deviate from this known scenario , and the original riemannian metric does not correctly represents the geometric structure anymore ( since there is an additional parameter ) . the manifold which used to be 2d ( surface )now slowly is transformed ( deformed ) to a different structure . in other words , as this extra dimension is gradually emerging ( since not null ) , the metric tensor is transformed ( the original fisher information matrix becomes a matrix ) .we believe that the intrinsic notion of time in the evolution of a random field composed by gaussian variables is caused by the irreversibility of this deformation process , as the results suggest .in this section , we present some experimental results using computational methods for simulating the dynamics and evolution of gaussian random fields .all the simulations were performed by applying markov chain monte carlo ( mcmc ) algorithms for the generation of random field outcomes based on the specification of the model parameters . in this paper , we make intensive use of the metropolis - hastings algorithm , a classic method in the literature .all the computational implementations are done using the python anaconda platform , which includes several auxiliary packages for scientific computing .the main objective here is to measure , and along a mcmc simulation in which the inverse temperature parameter is controlled to guide the global system behavior .initially , is set to , that is , the initial temperature is infinite . in the following , is linearly increased , with fixed increments , up to an upper limit .after that , the exact reverse process is performed , that is , the inverse temperature is linearly decreased using the same fixed increments ( ) all the way down to zero . 
with this procedure, we are actually performing a positive displacement followed by a negative displacement along the inverse temperature parameter `` direction '' in the parametric space . by sensing each component of the metric tensor ( fisher information ) at each point, we are essentially trying to capture the deformation in the geometric structure of the statistical manifold ( parametric space ) throughout the process .the simulations were performed using the following parameter settings : , ( initial value ) , , , and 1000 iterations . at the end of a single mcmc simulation , 2.1 gb of data is generated , representing 1000 random field configurations of size .[ fig : gmrf_configs ] shows some samples of the random field during the evolution of the system .is first increased from zero to 0.5 and then decreased from 0.5 to zero . ] the goal of this investigation is to analyse the behavior of the metric tensor of the statistical manifold of a gaussian random field by learning everything from data , including the inverse temperature parameter . at each iteration of the simulation , the values of and are updated by computing the sample mean and sample variance , respectively .the inverse temperature parameter is updated by computing the maximum pseudo - likelihood estimative . in order to sense the local geometry of the parametric space during the random field dynamics ,we have computed the values of all the components of the metric tensor at each iteration of the simulation . since we are dealing with both forms of fisher information ( using the square of the first derivative and the negative of the second derivative ) to investigate the information equality condition , both and tensorsare being estimated .[ fig : fisher ] shows a comparison between each component of with its corresponding component in along the entire simulation . at this point, some important aspects must be discussed .first , these results show that the components , and are practically negligible in comparison to in terms of magnitude .second , while the differences , and are also negligible , the difference is very significant , especially for larger values of . andthird , note that even though the total displacement in the inverse temperature direction adds up to zero ( since is updated from zero to 0.5 and back ) , is highly asymmetric , which indicates that the deformations induced by the metric tensor to the statistical manifold when entropy is increasing are different than those when entropy is decreasing .tensor and the red lines represent the components of the tensor .the first row shows the graphs of versus and versus .the second row shows the graphs of versus and versus .note that , from an information geometry perspective , the most relevant component in this geometric deformation process of the statistical manifold is the one regarding the inverse temperature parameter. two important aspects that must be remarked are : 1 ) there is a large divergence between and , that is , the information equality condition fails when deviates from zero ; 2 ) although the total displacement in the `` axis '' adds up to zero , is highly asymmetric , which indicates that the deformations induced by the metric tensor to the statistical manifold when entropy is increasing are different from those when entropy is decreasing . 
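the whole protocol can be condensed into a short simulation loop . the sketch below reuses the hypothetical helpers introduced in the previous sketches ( metropolis_sweep , patch_covariance , type2_fisher , gmrf_entropy , beta_mpl ) ; the lattice size and increments are illustrative , and only the type - ii beta component and the entropy are recorded , the remaining components being obtained analogously from their tensor expressions .

import numpy as np

rng = np.random.default_rng(42)
mu, sigma2 = 0.0, 1.0
n_iter, beta_max = 1000, 0.5
d_beta = 2.0 * beta_max / n_iter     # fixed increment: up for the first half, down for the second

X = rng.normal(mu, np.sqrt(sigma2), size=(64, 64))   # random initial configuration (beta = 0)
curve = []                                           # recorded points of the Fisher curve
beta_true = 0.0

for t in range(n_iter):
    beta_true += d_beta if t < n_iter // 2 else -d_beta
    X = metropolis_sweep(X, mu, sigma2, beta_true, rng=rng)

    # every quantity below is estimated from the current configuration only
    mu_hat, sigma2_hat = X.mean(), X.var()
    beta_hat = beta_mpl(X, mu_hat)
    _, Spm, rho = patch_covariance(X)
    _, _, _, I_bb2 = type2_fisher(sigma2_hat, beta_hat, rho, Spm)
    H = gmrf_entropy(sigma2_hat, beta_hat, rho, Spm)
    curve.append((beta_hat, I_bb2, H))

plotting the recorded triples for the increasing and the decreasing halves of the loop separately is what produces the two branches of the fisher curve discussed below .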
] in practical terms , what happens to the metric tensor can be summarized as : by moving forward units in the `` axis '' we sense an effect that is not always the inverse of the effect produced by a displacement of units in the opposite direction .in other words , moving towards higher entropy states ( when increases ) is different from moving towards lower entropy states ( when decreases ) .this effect , which resembles the conceptual idea of a hysteresis phenomenon , in which the future output of the system depends on its history , is illustrated by a plot of the fisher curve of the random field along the simulation . making a analogy with a concrete example ,it is like the parametric space were made of a plastic material , that when pressured by a force deforms itself .however , when the pressure is vanishing , a different deformation process takes place to recover the original shape .[ fig : fishercurve_beta ] shows the estimated fisher curves for ( the blue curve ) and for ( the red curve ) regarding each component of the metric tensor .this natural orientation in the information space induces an arrow of time along the evolution of the random field .in other words , the only way to go from a to b by the red path would be running the simulation backwards .note , however , that when moving along states whose variation in entropy is negligible ( for example , a state a in the same plane of constant entropy ) the notion of time is not apparent . in other words , it is not possible to know whether we are moving forward or backwards in time , simply because at this point the notion of time is not clear ( time behaves similar to a space - like dimension since it is possible to move in both directions in this information space , once the states a and a are equivalent in terms of entropy , because there is no significant variation of ) . during this period ,it the perception of the passage of time is not clear , since the deformations induced by the metric tensor into the parametric space ( manifold ) are reversible for opposite displacements in the inverse temperature direction .note also that , from a differential geometry perspective , the torsion of the curve seems to be related to the divergence between the two types of fisher information .when diverges from the fisher curve leaves the plane of constant entropy .the results suggest that the torsion of the curve at a given point could be related to the notion of the passage of time : large values suggest that time seems to be `` running faster '' ( large change in entropy ) while small values suggest the opposite ( if we are moving through a plane of constant entropy then time seems to be `` frozen '' ) . .* the parametric curve was built by varying the inverse temperature parameter from ( state a ) to ( state b ) and back .the results show that moving along different entropic states causes the emergence of a natural orientation in terms of information ( an arrow of time ) .this behavior resembles the conceptual idea of the phenomenon known as hysteresis . ]following the same strategy , the fisher curves regarding the remaining components were generated .fig : fishercurve_mu ] , [ fig : fishercurve_sigma ] and [ fig : fishercurve_sigbeta ] illustrates the obtained results .note , however , that the notion of time is not captured in these curves . 
by looking at these measurementswe can not say whether the system is moving forwards or backwards in time , even for large variations on the inverse temperature parameter .since the fisher curves and are essentially the same , the path from a ( ) to b ( ) is the inverse of the path from b to a. . *the parametric curve was built by varying the inverse temperature parameter from ( state a ) to ( state b ) and back . in this casethe arrow of time is not evident since the two curves , and , are essentially the same .the parametric curve was built by varying the inverse temperature parameter from ( state a ) to ( state b ) and back . in this casethe arrow of time is not evident since the two curves , and , are essentially the same . ] .* the parametric curve was built by varying the inverse temperature parameter from ( state a ) to ( state b ) and back .once again , in this case the arrow of time is not evident since the two curves , and , are essentially the same . ]this section describes the main results obtained in this paper , focusing on the interpretation of the proposed mathematical model of hysteresis for the study of complex systems : the fisher curve of a random field .basically , when temperature is infinite ( ) entropy fluctuates around a minimum base value and the information equality prevails . from an information geometry perspective, a reduction in temperature ( increase in ) causes a series of changes in the geometry of the parametric space , since the metric tensor ( fisher information matrix ) is drastically deformed in an apparently non - reversible way , inducing the emergence of a natural orientation of evolution ( arrow of time ) . by quantifying and measuring an arrow of time in random fields , a relevant aspect that naturally arises concerns the notions of past and future .suppose the random field is now in a state a , moving towards an increase in entropy ( that is , is increasing ) . within this context , the analysis of the fisher curves suggests a possible interpretation : past is a notion related to a set of states whose entropies are below the current entropic plane of the state a. equivalently , the notion of past could also be related to a set of states whose entropies are above the current entropic plane , provided the random field is moving towards a lower entropy state .again , let us suppose the random field is in a state a and moving towards an increase in entropy ( is increasing ) .similarly , the notion of future refers to a set of states whose entropies are higher than the entropy of the current state a ( or equivalently , future could also refer to the set of states whose entropies are lower than a , provided that the random field is moving towards a decrease in entropy ) . according to this possible interpretation , the notion of futureis related to the direction of the movement , pointed by the tangent vector at a given point of the fisher curve . if along the evolution of the random fieldthere is no significant change in the system s entropy , then time behaves similar to a spatial dimension , as illustrated by fig .[ fig : past_future ] .in this paper , we addressed the problem of characterizing the emergence of an arrow of time in gaussian random field models . 
to intrinsically investigate the effect of the passage of time , we performed computational simulations in which the inverse temperature parameter is controlled to guide the system behavior throughout different entropic states .investigations about the relation between two important information theoretic measures , entropy and fisher information , led us to the definition of the fisher curve of a random field , a parametric trajectory embbeded in an information space , which characterizes the system behavior in terms of variations in the metric tensor of the statistical manifold .basically , this curve provides a geometrical tool for the analysis of random fields by showing how different entropic states are `` linked '' in terms of fisher information , which is , by definition , the metric tensor of the underlying random field model parametric space .in other words , when the random field moves along different entropic states , its parametric space is actually being deformed by changes that happen in fisher information matrix ( the metric tensor ) . in this scientific investigationwe observe what happens to this geometric structure when the inverse temperature parameter is modified , that is , when temperature deviates from infinity , by measuring both entropy and fisher information .an indirect subproblem involved in the solution of this main problem was the estimation of the inverse temperature parameter of a random field , given an outcome ( snapshot ) of the system . to tackle this subproblem, we used a statistical approach known as maximum pseudo - likelihood estimation , which is especially suitable for random fields , since it avoids computations with the joint gibbs distribution , often computationally intractable .our obtained results show that moving towards higher entropy states is different from moving towards lower entropy states , since the fisher curves are not the same .this asymmetry induces a natural orientation to the process of taking the random field from an initial state a to a final state b and back , which is basically the direction pointed by the arrow of time , since the only way to move in the opposite direction is by running the simulations backwards . in this context, the fisher curve can be considered a mathematical model of hysteresis in which the natural orientation is given by the arrow of time .future works may include the study of the fisher curve in other random field models , such as the ising and q - state potts models .haddad wm .temporal asymmetry , entropic irreversibility , and finite - time thermodynamics : from parmenides - einstein time - reversal symmetry to the heraclitan entropic arrow of time .2012;14(3):407455 .levada alm .learning from complex systems : on the roles of entropy and fisher information in pairwise isotropic gaussian markov random fields .entropy , special issue on information geometry .. roberts go .markov chain concepts related to sampling algorithms . in : gilks wr , richardson s , spiegelhalter dj , editors .markov chain monte carlo in practice ( edited by gilks , w. r. , richardson , s. and spiegelhalter , d. j. ) .chapman & hall / crc ; 1996 .
random fields are useful mathematical objects in the characterization of non - deterministic complex systems . a fundamental issue in the evolution of dynamical systems is how intrinsic properties of such structures change in time . in this paper , we propose to quantify how changes in the spatial dependence structure affect the riemannian metric tensor that equips the model s parametric space . defining fisher curves , we measure the variations in each component of the metric tensor when visiting different entropic states of the system . simulations show that the geometric deformations induced by the metric tensor in case of a decrease in the inverse temperature are not reversible for an increase of the same amount , provided there is significant variation in the system s entropy : the process of taking a system from a lower entropy state a to a higher entropy state b and then bringing it back to a , induces a natural intrinsic one - way direction of evolution . in this context , fisher curves resemble mathematical models of hysteresis in which the natural orientation is pointed by an arrow of time .
physical systems very often require different descriptions at the micro and macro scale . this is the case , for instance , in systems which exhibit emergent phenomena , and in systems which undergo a phase transition . in this case , one could argue that the degrees of freedom effectively change with the scale , and thus phase space counting should be different depending on the lens with which one looks at the system . this line of thinking has been very fruitful in the last century , since the very initial work of gell - mann and low on the renormalization group . the concept of emergence , in particular , has illuminated many physical phenomena , giving them a renewed appeal through the new interpretation . along this same line of reasoning , dynamical systems can often exhibit correlations which are not only time dependent , but which , at time scales short with respect to the typical thermalization time scale , exhibit different behaviors . a macroscopic entropy functional for statistical systems was introduced by lloyd and pagels in , and at the same time by lindgren . lloyd and pagels showed that the depth of a hamiltonian system is proportional to the difference between the system entropy and the coarse - grained entropy . this paper introduced the concept of `` thermodynamic depth '' . if is the probability that a certain system arrived at a macroscopic state , then the thermodynamic depth of that state is proportional to . this implies that the average depth of a system , the complexity , is proportional to the shannon entropy , or the boltzmann entropy . in addition , it has been shown in that the only functional that is continuous , monotonically increasing with system size , and extensive is the boltzmann functional up to a constant . one can show that such an argument is true also for _ macroscopic states _ , described by trajectories . in this case , the thermodynamic depth of this state is given by . in general , the average depth of a system with many macroscopic states can be very large . in fact , it has been shown in that the macroscopic entropy defined by : is monotonically increasing , i.e. , and . it has also been shown that , if in general the macrostate is described by a string of length , one can obtain a finite specific thermodynamic depth , with being the infinite string . the idea of thermodynamic depth has inspired ekroot and cover to introduce the entropy of markov trajectories in . if denotes the row of a markov transition matrix , one can define the entropy of a state as : with being the markov operator . if one introduces the probability of a trajectory going from to as , then the macroscopic entropy of the markov trajectory is given by : for markov chains , one has that , which thus leads to a recurrence relation : which follows from the chain rule of the entropy , and allows one to calculate a closed formula for in terms of the entropy of the nodes , which we will call , and the asymptotic , stationary distribution of the markov chain , . over the last decade , a huge effort has been devoted to understanding processes on networks and their statistical properties , as interactions very often occur on nontrivial network topologies , as for instance scale free or small world networks , called complex networks .
with this widespread interest in networks , the study of global properties of graphs and graph ensembles has given a renewed impetuous to the study of entropies on graphs .in general , in analogy with what happens for markov chains , one is interested in quantifying their complexity by means of information theory approach . since for strongly connected graphs , the transition kernel , given by , with being the adjacency matrix of the graph and being the diagonal matrix of degree with . if is an ergodic operator ( which depends on the topological properties of the underlying graph ) , one can study operators based on the asymptotic properties of a random walk .the dynamics and the structure of many physical networks , such as those involved in biological , physical , economical and technological systems , is often characterized by the topology of the network itself . in order to quantify the complexity of a network , several measures of complexity of a networkhave been introduced , as for instance in , studying the entropy associate to a certain partitioning of a network .the standard boltzmann entropy per node was defined as the transition kernel of a random walk in . in general , in complex networks , one is interested in the average complexity of an ensemble of networks of the same type , as for instance erds - renyi or watts - strogats and barabsi - albert random graphs .along these lines in particular , we mention the entropy based on the transition kernel of anand and bianconi .one can in fact write the partition function of a network ensemble subject to a micro - canonical constraint ( the energy ) and then , given the probability of certain microcanonical ensemble , calculate its entropy , similarly to what proposed in for random graphs . in general ,an entropy of a complex network can be associated from a test particle performing a diffusion process on the network , as in ; for scale free networks , it is found that the entropy production rate depends on the tail of the distribution of nodes , and thus on the exponent of the tail . along these lines , in a von neumann entropy based on the graph laplacian has been introduced , merging results inspired from pure states in quantum mechanics , and networks , and finding that the von neumann entropy is related to the spectrum of the laplacian . in particular , it has been shown that many graph properties can be identified using this laplacian approach .a huge body of work has been done by the group of burioni and cassi , in defining the statistical properties of graphs for long walks , for instance using a heat kernel approach to study spectral and fractal dimensions of a graph , finite and infinite ( see and references therein ) .in general , these approaches rely on a local operator ( transition kernels , laplacians ) with support on the graph . therefore , if one is interested in knowing macroscopic properties of the graph , is indeed forced to use non - local operators .in addition to the theoretical interest of describing the macroscopic properties of a graph in terms of information theory quantities , it is important to remark that very often these have important applications in classifying systems according to their topological properties .for instance , in it has been shown that graph entropy can be used to differentiate and identify cancerogenic cells . 
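As a minimal illustration of the random-walk operators referred to above, the sketch below builds the transition kernel P = D^{-1} A for a hypothetical Erdős–Rényi graph and checks the standard fact that, for a connected undirected graph, the stationary distribution of the walk is proportional to the node degrees.

import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.2  # hypothetical Erdos-Renyi parameters

# symmetric adjacency matrix without self-loops
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

deg = A.sum(axis=1)
assert deg.min() > 0  # resample if the realization has isolated nodes

# random-walk transition kernel P = D^{-1} A
P = A / deg[:, None]

# stationary distribution proportional to degree
pi = deg / deg.sum()
print(np.allclose(pi @ P, pi))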
in particular , shows the importance of studying entropies based on the non - local ( macroscopic ) properties of a network , as for instance the _ higher - order _ network entropy given by with satisfying an approximate diffusion equation at order , in addition to the approaches just described , one could think of using , instead of the diffusion kernel above , a node - entropy based on diffusion as .it is easy to see , however that for , if the operator is ergodic , the asymptotic entropy is independent from the initial state : it easy a known fact that if has a unique perron root , , where is a nilpotent operator such that .the same happens for the diffusion kernel at long times : in this case the diffusion kernel approaches the asymptotic distribution , which indeed has forgotten from which node the diffusion started . with the aim of retaining the information on the node, we introduce the entropy on the paths originating at a node which , as we shall show , has very interesting asymptotic properties for long walks . in the next sectionwe describe the construction of the non - local entropy , and an application to random graphs and fractals .conclusions follow .we shall start by introducing some basic definitions .let us consider a markov operator on states , i.e. , such that : with .entropy gives a measure of how mixing are some states , i.e. how much one state is related to the other states .we can the define the following quantity : that for reasons it will be clear soon , we call first order entropy . ] .it is clear , however , the this definition is purely local , i.e. , the mixing defined by the first order entropy is a local concept , as the it gives a sense of how much mixing there is at the second step in a markov process for each node . generalizing this entropy for longer times ,i.e. when the operator is applied several times , is not obvious .one obstruction might be given , in fact , by the ergodicity of the operator : in this case , the operator is trivial , in the sense that each row of is identical , due to the ergodic theorem , and thus eqn .( [ entropy ] ) is non - trivially generalizable if one wants to assign a ranking to each node . in the approximation oflong walks , if one used the operator , eqn .( [ entropy ] ) would become : as a result of the ergodic theorem , the entropy evaluated on asymptotic states is independent from the initial condition .however , here we argue that there is a definition of entropy which indeed depends on the initial condition , which is the macroscopic entropy evaluated on the space of trajectories .we thus introduced the following entropy on the paths a markov particle went through after -steps , or order entropy : where is a factor which depends only on , and is used to keep the entropy finite in the limit .we will first assume that does not depend on any other parameter ; this choice easily leads to the factor . following the discussion in ,it is easy to show , after having defined the operation on the entropy on paths of length , that and , and . + + this implies also that this definition of macroscopic entropy has good asymptotic properties , i.e. that , a unique such that in order to better interpret this non - local entropy , we introduce the following notation . 
we denote with , a path , a string of states of length k , and with an infinite string of states of the form .we then denote as the ordered product and with the infinite product , we also denote with ( ) , the sums over all possible paths of length ( infinite ) starting in and ending in , and the sum over all possible paths of length ( infinite ) starting at .we can then write , compactly ( setting temporarily ) : it is now easy to see that this can be written in terms of products : which gives an idea of how fast this product can grow as a function of . in order to set the stage for what follows ,let us consider the simpler case of a 2 dimensional markov chain with transition probabilities parametrized by two positive parameters , : and our aim is now to use a recursion relations for the infinite trees in order to calculate the exact values for and .this can be easily generalized , and in fact there is no obstruction to calculate this for generic markov matrices ; as we will see shortly the result is independent on the dimensionalilty of the matrix .let us thus consider the entropies for and .these can be written recursively , for , as now we can use the properties of logarithms , and the fact that following from the fact that the matrix is stochastic .we can at this point separate the various terms , obtaining : and thus we reach the following recursive equation : where we see that the first order entropy enters : in doing the calculation , we observe that now we have a generic formula , which depends only on the markov operator . due to the linearity of the recursion relation ,it is easy to observe that it is independent from the dimensionality .we observe that the case can be taken into account by defining . for generic , this equation can be written as : first of all we notice that the limit is well defined , and so is its cesro mean . to see this, we see that is a positive bounded operator , .thus , by the cesro mean rule , we have then that and thus we discover that also for this entropy , , with , and thus we failed yet to distinguish the entropy of the paths for each single node .it is easy to realize that this is due to the normalization factor , , which thanks to the cesro rule leads to a different result we were looking for to begin with . in order to do improve the counting , then , we can assume that now the normalization factor depends on an extra parameter , .in particular , we will be interested in the natural choice of contractions , i.e. , as this choice has nice asymptotic behavior and has a straightforward interpretation in terms of path lengths , as we will see after .we thus consider the following entropy functional : this formula is now defined in terms of an extra parameter ; we will now show that thanks to the recursion rule , one can find a closed formula at .having gained experience on how to write the recursion rule in the previous section , we promptly modify the recursion rule in order to account for the normalization .thus , following the same decomposition in order to find the recurrence rule , we find : which leads to the following closed formula for the recursion , in terms of , and : writing down all the terms , recursively , we find : and , realizing that we can now take the limit safely , : which is finite if , and is the main result of this paper . a compact way of rewriting equation in eqn .( [ entfin ] ) , is by multiplying and dividing by , and writing the entropy in terms of the matrix resolvent . 
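The closed formula above can be prototyped numerically. The sketch below is only a schematic rendering: it assumes the damped recursion takes the simple form s^{(k+1)} = s_1 + ε P s^{(k)}, a stand-in for the paper's exact normalization rather than a transcription of it, and checks that the iteration converges to the resolvent expression (I − εP)^{-1} s_1 applied to the vector of first-order entropies, while still distinguishing starting nodes.

import numpy as np

rng = np.random.default_rng(1)
n, eps = 20, 0.7

# hypothetical row-stochastic Markov operator (strictly positive entries)
P = rng.random((n, n))
P = P / P.sum(axis=1, keepdims=True)

# first-order (local) entropy of each node
s1 = -np.sum(P * np.log(P), axis=1)

# damped recursion s_{k+1} = s1 + eps * P s_k (schematic form, see text above)
s = np.zeros(n)
for _ in range(500):
    s = s1 + eps * P @ s

# fixed point via the resolvent (I - eps P)^{-1} s1
s_closed = np.linalg.solve(np.eye(n) - eps * P, s1)

print(np.allclose(s, s_closed))   # the iteration reaches the resolvent formula
print(s_closed.std() > 0.0)       # the limit still depends on the starting node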
we thus now see that we have traded the infinity for a `` forgetting '' parameter , which adds a further variable to the analysis , and which might seem puzzling at first .in particular , we do recognize that this operator has been widely used in several fields , which is reassuring .in fact , the entropy just introduced resembles several centrality measures on network , as for instance the katz centrality , although applied to a vector which is different than a column of ones , but has the entropy calculated at the first order for in each node .in particular , we realize that the resolvent is often used for measuring correlations ( for instance in ) . it is worthmake few comments regarding eqn .( [ entfin ] ) .it is striking , as often it happens , that it is indeed easier to calculate this path entropy , thanks to the fixed point , for an infinite number of steps , rather than a finite number .we can in fact easily convince ourselves that calculating all the paths of length on a graph can grow as in the worst case , which can be a rather big number for fairly small graphs after few steps . using the formula above, one can calculate this entropy by merely inverting a matrix which , apart from being the fortune of several search engines and big data analysts , and although being slow in some cases or can have convergence problems , can be done also for large matrices ( and thus graphs ) , which is rather convenient .if this was not enough , one can however also tune the parameter in order calculate the entropy ( on average ) after a finite number of steps , as we shall soon show . in general, one could generalize this formula by refining on the type of paths one is interested of summing on ( for instance , self - avoiding loops , closed random walks ) . although this approach is definitely feasible , it is hard , at the end of the computation , to find a closed formula at the fixed point .the reason is that , by summing on all possible indices , the equations can be written in terms of matrix multiplication of the markov transition kernel , thus simplifying the final equation . as a final remark, we note that , differently from the approach of , we define the macroscopic entropy not on markov trajectories defined by a source node and a destination node , but indeed are aimed at studying the path complexity attached to a node , given by all the possible paths which can be originated from it .the introduction of a renormalization parameter , able to keep the entropy finite in the asymptotic limit , and at the same time pertaining the information on the originating node , might seem puzzling at first .in general , as we shall show now , one can associate the parameter to the average length path to be considered . 
in fact , one can write the average path length , recursively , and obtain the formula : and under the assumption that , one can obtain the roots of this equation for as a function of , it is easy to see that now ] , to put in relation the two : \ ^1 \vec s \nonumber \\= -\frac{1}{\epsilon_2 \epsilon_1 } ( \frac{1}{\epsilon_2}-\frac{1}{\epsilon_1})[\epsilon_2 \ ^ * \vec s_{\epsilon_2}-\epsilon_1 \ ^ * \vec s_{\epsilon_1}]\end{aligned}\ ] ] and thus find the identity : \end{aligned}\ ] ] which , after rearrangement can be casted into the simpler form : showing that the -path complexity can be _ evolved _ from one particular to another using the resolvent .we are now interested in showing how the path entropy can indeed provide important informations of the properties of a graph .we thus study the random walk on a graph , with transition matrix , with being the degree matrix .in particular , having shown that differently from analogous graph entropies present in the literature one can still distinguish asymptotically different nodes according to their path complexity , we would like to rank nodes according to their complexity for .a first test of this statement is applied to random ( positive ) matrices of different sizes , as in fig .[ fig : randomn ] . as a first comment ,it is easy to see that the complexity depends on the size of the graph , , showing that different growth curves appear as a function of .although for different sizes , the entropies are clustered around similar values , zooming onto the curve shows that indeed these pertain the memory on their path complexity in the factor , asymptotically .one can then perform a similar analysis for other type of random graphs .we extend this analysis also to the case of erds - renyi graphs in fig .[ erdosrenyi ] .we have generated various instances of random graphs , according to different realizations of the probability parameter to have a link or not ; we have considered graph with the same number of nodes , .it is easy to see tha different growth curves can be distinguished according to the parameter , although these curves become more and more similar for larger values of .-path complexity for erds - renyi graphs , generated with value of the probability parameter .the lower set of curves is associated with the probability parameter , and the higher with .,title="fig : " ] [ fig : random ] as a case study , we evaluate the complexity of nodes on a self - similar graph , as for instance the sierpinsky fractal .the results are shown in fig .[ fig : sier ] .we have analyzed the growth curves for each node , for a graph with nodes , observing that few nodes exhibited lower growth curved as compared to the others . by plotting a heat map of the node complexity on the fractal, one observes that the nodes at the boundary of the sierpinsky fractal have lower path complexity .a histogram of the asymptotic complexity shows that most of the nodes exhibit a similar complexity , meanwhile fewer nodes can be clearly distinguished from the others .-path complexity for the nodes of a sierpinsky fractal as a function of .we observe that , although most of the nodes have similar entropies , there are few outliers . in orderto identify which nodes exhibit lower complexity , we plot a heatmap of a sierpinsky fractal in the figure on the right . 
_central:_ a heat map of the path complexity for the Sierpinski fractal evaluated at fixed ε and number of nodes; despite the self-similarity of the graph, the entropy identifies points of lower path complexity on the boundary. _bottom:_ the frequency distribution of the asymptotic complexity; a large fraction of nodes have an asymptotic path-complexity value very close to 1.38, while a few nodes, shown in the top right panel, take lower values.

In this paper we have introduced and studied the entropy associated with the number of paths originating at a node of a graph. Motivated by the goal of distinguishing the asymptotic behavior of non-local entropies defined on graphs, and inspired by earlier studies on macroscopic entropies, we have obtained a closed formula for the path complexity of a node. This entropy can be thought of as a centrality operator applied to the local entropy of each node, and it depends on an external constant that we introduced in order to keep the entropy finite asymptotically. Although the entropy introduced in the present paper is non-local, the extra parameter has a natural interpretation in terms of the average number of walks being considered. This allows us to study the average transient behavior of the entropy and, in particular, to introduce the asymptotic path complexity, given by the constant that characterizes the asymptotic behavior of the path complexity of a node. We have applied this entropy to (normalized) random matrices, random graphs and fractals, and have shown in particular that the overall complexity of a node depends on the size of the graph.
for random graphs , we have shown that one that the average asymptotic behavior of a node depends on the value of the probability parameter .in addition , we have shown that this entropy is able to distinguish points in the bulk of a fractal from those at some specific boundaries , showing that these have lower path complexity as compared to the others .in general , we have the feeling of having introduced a new measure of macroscopic complexity for graphs , based on the fact that the number of paths generating at a node can differ substantially depending where a node is located with respect to the whole graph . given this non - local definition, one would expect that the path complexity can give important insights on the relevance of topological properties of networks in several of their applications .in addition , we have compared this entropy to those introduced in the past , showing that this entropy contributes to the growing literature on graph entropies . as a closing remark, we believe that this entropy has better asymptotic properties ( long walks ) as compared to those introduced so far , and thus can be used to study the properties of large graphs .we would like to thank j. d. farmer , j. mcnerney and f. caccioli for comments on an earlier drafting of this entropy .we aknowledge funding from icif , epsrc : ep / k012347/1 .+ + s. lloyd , h. pagels , `` complexity as thermodynamic depth '' , ann . of phys .188 , ( 1988 ) k. lindgren , `` microscopic and macroscopic entropy '' , phys .rev . a , vol .38 no 9 , ( 1988 ) s. lloyd , `` valuable information '' , in _ complexity , entropy and the physics of information _ , sfi studies in the sciences of complexity .addison - wesley ma , ( 1990 ) l. ekroot , t. m. cover , `` the entropy of markov trajectories '' , ieee transactions on information theory , vol 39 , no 4 , ( 1993 ) a. barrat , m. barthelemy , a. vespignani , dynamical processes on complex networks , cambridge university press , cambridge , uk , ( 2009 ) d. gfeller , j .- c .chappelier and p. de los rios,``finding instabilities in the community structure of complex networks '' , phys .e 72 , 056135 , ( 2005 ) g. bianconi,``the entropy of randomized network ensembles '' , europhys .lett 81 , 28005 ( 2008 ) ; g. bianconi,``entropy of network ensembles '' , phys .e 79 , 036114 , ( 2009 ) k. anand , g. bianconi , `` entropy measures for complex networks : toward an information theory of complex topologies '' , phys .e 80 , 045102(r ) , ( 2009 ) l. bogacz , z. burda , b. waclaw,``homogeneous complex networks '' , physica a , 366 , 587 ( 2006 ) j. gomez - gardenes , v. latora , `` entropy rate of diffusion processes on complex networks '' , phys . rev .e 78 , 0655102 , ( 2008 ) s. l. braunstein , s. ghosh , s. severini , `` the laplacian of a graph as a density matrix : a basic combinatorial approach to separability of mixed states '' , annals of combinatorics , 10 , no 3 , ( 2006 ) ; f. passerini , s. severini , `` the von neumann entropy of networks '' , arxiv:0812.2597 , ( 2008 ) s. burioni , d. cassi , `` random walk on graphs : ideas , techniques and results'',topical review , jour .phys . a 38 , r45 , ( 2005 ) j. west , g. bianconi , s. severini , a. e. teschendorff , `` differential network entropy reveals cancer system hallmarks '' , scientific reports 2 : 802 , ( 2012 ) d. griffith , `` spatial autocorrelation : a primer '' .washington , dc : association of american geographers resource publication , ( 1987 )
Thermalization is one of the most important phenomena in statistical physics. Often, the transition probabilities between different states in phase space are constant, or can be approximated as such. In this case the system can be described by Markovian transition kernels and, when the phase space is discrete, by Markov chains. In this paper we introduce a macroscopic entropy on paths of given length originating at a state and, by studying its recursion relation, obtain a fixed-point entropy. This analysis leads to a centrality approach to the entropy of Markov chains.
The concept of equilibrium plays a central role in economics. The most influential notion is the Walrasian general equilibrium as represented by Arrow and Debreu. Though it is a grand concept, and well established in the profession, it could not be more different from the real economy. The Walrasian theory specifies the preferences and technologies of all consumers and firms, and defines an equilibrium in which the micro-behaviors of all economic agents are precisely determined. It is as if one analyzed an object such as a gas comprising many particles by determining the equations of motion of every particle. Physicists know that this approach, though it may look reasonable at first sight, is actually infeasible and on the wrong track. Instead, following the lead of Maxwell, Boltzmann and Gibbs, they developed statistical physics. Curiously, despite the fact that the macroeconomy consists of many heterogeneous consumers and firms, the basic method of statistical physics has had almost no impact on economics. In the Walrasian equilibrium, the marginal productivities of production factors such as labor and capital are equal across all firms, industries, and sectors; otherwise there is inefficiency in the economy, contradicting the notion of equilibrium. In the real economy, however, we actually observe significant productivity dispersion; that is, there is a distribution rather than a unique level of productivity. Search theory has attempted to explain such a distribution by considering frictions and search costs which exist in the real economy, but it is still based on representative-agent assumptions. To explain an equilibrium distribution, the most natural and promising approach is to eschew the pursuit of precise micro-behavior under representative-agent assumptions and to resort to the method of statistical physics. Foley is a seminal work which applies such a statistical method to the general equilibrium model. Yoshikawa argues that the study of productivity dispersion provides correct micro-foundations for Keynesian economics, and that to explain the distribution of productivity we should apply the method of statistical physics. In a series of papers, we have attempted to establish the empirical distribution using a large dataset covering more than a million firms in the Japanese manufacturing and non-manufacturing industries. To explain this empirically observed distribution of productivity, Iyetomi introduced the notion of _negative temperature_. Based on this notion, Yoshikawa made a similar attempt with the help of the grand canonical partition function. In this paper, we explore the problem from a different angle than the standard entropy maximization. Before doing so, we first update our empirical investigation of the distribution of labor productivity in Section 2. Most theoretical works exploring the distribution of productivity resort to straightforward entropy maximization. Instead, Scalas and Garibaldi suggest studying the same problem using the Ehrenfest–Brillouin model, a Markov chain which describes random creations and destructions in a system comprising many elements moving across a finite number of categories. Following their lead, we present such a model in Section 3.
by considering detailed balance , we derive the stationary distribution of the model which explains the empirically observed distribution .section 4 offers brief concluding remarks .the labor productivity denoted by , is simply defined by here , is the value added in units of yen , and the number of workers .iyetomi has studied the firm data in japan for the year 2006 . since then, we have obtained the data up to the year 2010 and will use the 2008 data in this paper , as it contains the largest number of firms .let us briefly review the method of calculating the value added .dataset is constructed by unifying two datasets , the nikkei economic electric database ( needs ) for large firms ( most of which are listed ) and and the credit risk database ( crd ) for small to medium firms .the value added is calculated by the so - called boj method , established by the statistics department of the bank of japan , and gives the value added as the sum of net profits , labor costs , financing costs , rental expenses , taxes , and depreciation costs .although the original datasets contain over a million firms together , by limiting the analysis to firms which have non - empty entries in all these items , we end up with 180,181 firms for 2008 .figure [ fig:3_pdf0_2008 ] shows the pdf of the firms and the worker s of the labor productivity in units of yen / person .the fact that the major peak of the latter is shifted to right compared to that of the former indicates that the average number of workers per firm increases in this region . in fact , fig .[ fig:3_nbar0_2008 ] shows the dependence of on the labor productivity of the firm ( ) .we observe that as the productivity rises , it first goes up to about and then decreases .iyetomi explained the upward - sloping distribution in the low productivity region by introducing the negative temperature theory .the downward - sloping part in the high productivity region is close to linear ( denoted by the dotted line ) in this double - log plot .this indicates that it obeys the power low : we have studied this phenomenon in the period of 2000 through 2008 , and not only for all the sectors but also for the manufacturing and the non - manufacturing sectors separately .it turns out that we always find the qualitatively same pattern as shown in fig .[ fig:3_nbar0_2008 ] ; the number of workers exponentially increases as increases up to a certain level of productivity whereas it decreases following power law ( eq.([ncgamma ] ) ) in the high productivity region .we thus conclude that this broad shape of distribution of productivity among firms is quite robust and universal .we note that this is somewhat counter - intuitive in the sense that firms that achieved higher productivity through innovation and high - quality management would grow larger , so that equilibrium distribution would simply have monotonically increasing with .therefore we need to find what is the main reason that causes this behavior , which we will do in the following section .one way to analyze the equilibrium distribution of labor productivity based on statistical physics is to maximize entropy .instead , scalas and garibaldi suggest that we can usefully apply the ehrenfest - brillouin model , a markov chain to analyze the problem . 
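A minimal sketch of the empirical analysis of Section 2, assuming a hypothetical firm-level table with a value-added column Y (yen) and a worker-count column n (the actual NEEDS/CRD data are not reproduced here): it computes the labor productivity of each firm and the average number of workers per firm in logarithmic productivity bins, the quantity whose high-productivity tail is reported to follow a power law.

import numpy as np

rng = np.random.default_rng(2)
m = 10_000  # hypothetical number of firms

# hypothetical firm-level data: number of workers n and value added Y (yen)
n_workers = rng.integers(1, 500, size=m)
Y = n_workers * rng.lognormal(mean=15.0, sigma=0.8, size=m)

c = Y / n_workers  # labor productivity of each firm (yen / person)

# average number of workers per firm in logarithmic bins of productivity
edges = np.logspace(np.log10(c.min()), np.log10(c.max()), 31)
centers = np.sqrt(edges[:-1] * edges[1:])
n_bar = np.full(len(centers), np.nan)
for k in range(len(centers)):
    in_bin = (c >= edges[k]) & (c < edges[k + 1])
    if in_bin.any():
        n_bar[k] = n_workers[in_bin].mean()

# a power-law tail n_bar ~ c^(-gamma) shows up as a straight line on log-log scale
print(np.column_stack([centers, n_bar]))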
in this section, we present such a model .the macroeconomy consists of many firms with different levels of productivity .differences in productivity arise from different capital stocks , levels of technology and/or demand conditions facing firms .we call a group of firms with the same level of productivity a _ cluster_. workers randomly move from a cluster to another for various reasons at various times . despite of these random changes ,the distribution of labor productivity as a whole remains stable because those incessant random movements balance with each other .this balancing must be achieved for each cluster , and is called detail - balance . in the following , we present a general treatment of this detail - balance using particle - correlation theory la . in doing so, we make an assumption that the number of workers who belong to clusters with high productivity is constrained ; see also ref . .we denote the number of workers who belong to a cluster with the level of labor productivity , by . the total output in the economy as a whole , is assumed to be equal to aggregate demand , and is given : for the productivity distribution of workers specified by to be in equilibrium , the number of workers who move _ out _ of cluster per unit time must be equal to that of workers who move _ into _ this cluster per unit time .we consider the minimal process that satisfies the condition that the total output in the economy as a whole is conserved with eq .( [ con0 ] ) . in this process, two workers move simultaneously .this is illustrated in figure [ fig : process](a ) : a worker in a cluster with productivity and a worker in a cluster with productivity move to clusters with productivities and , respectively . for the total output to remain constant , the following condition must be satisfied : such job - switchings occur for various unspecifiable reasons .the best we can do is to consider a markov chain defined by transition rates , .they have the following trivial symmetries : we also assume that the reverse process , illustrated in figure [ fig : process](b ) occurs with the same probability : equilibrium condition then requires the number of workers moving from to per unit time denoted by must be equal to the number of those from to denoted by : the flux is proportional to the numbers of workers in clusters and , and and the corresponding transition rate : the fundamental assumption we make is that a cluster with productivity can accommodate workers at most .it means that where is a function that limits the number of workers in a cluster with productivity : one can obtain a general solution in terms of for the detail - balance equation ( [ db ] ) in the following way .thanks to eq .( [ timeref ] ) , substituting eq .( [ np ] ) into eq .( [ db ] ) enables us to find where . because of eq .( [ con1 ] ) , we obtain or , by denoting , this proves that is linear in .it leads us to where and are real free parameters .once the function is given , the equilibrium distribution can be obtained by solving the above . in order to model the distribution of labor productivity , we need to allow to be any integer number . furthermore , we find it most natural to choose so that it is continuous at . herewe adopt a simple linear model , as depicted in fig .[ fig : lhype ] .we can reasonably assume that , because there would be no restrictions for hiring workers if there are none in the firm . by substituting eq .( [ eq : lhype ] ) into eq . 
( [ gsol ] ) and solving for , we finally obtain this is a simple extension of the fermi - dirac statistics . in passing ,we note that the partition function that yields eq .( [ ggsol ] ) is it is a reasonable extension of the fermi - dirac statistics in the sense that the partition function has the expansion and yet allows existence of levels .first , we note that when there is no limit to the number of the workers , _i.e. _ , , eq . ( [ ggsol ] )boils down to the boltzmann distribution , when we apply eq .( [ eq : bd ] ) to low - to - intermediate range of where is an exponentially increasing function of as observed in fig . [ fig:3_nbar0_2008 ] , we must have the negative temperature .we , therefore , assume that is negative in the following .the current model thus incorporates the boltzmann statistics model with negative temperature advanced in ref . .secondly , we recall the observation that the power law ( [ ncgamma ] ) holds for in the high productivity side . we can use this empirical fact to determine the functional form for .equation ( [ ggsol ] ) implies that when temperature is negative , approaches in the limit .these arguments persuade us to adopt the following anzatz for , given the present model , explaining the empirically observed distribution of productivity is equivalent to determining four parameters , , , , and in eqs .( [ ggsol ] ) and ( [ eq : gc ] ) .we estimate these four parameters by the fit to the empirical results as shown in fig .[ fig:3_nbar0_2008 ] .figure [ fig:5_pw_fitplot_2008 ] demonstrates the results of the best fit for three datasets of firms , namely , those in all the sectors , the manufacturing sector , and the non - manufacturing sector .the fitted parameters are listed in table [ tab : bf ] .the present model is quite successful in unifying the two opposing functional behaviors of the average number of workers in the low - to - medium and high productivity regimes .the crossover takes place at the productivity in the nonmanufacturing sector which is about 40% as high as that in the manufacturing sector .also we see that the inverse temperature of the non - manufacturing sector is just half of that of the manufacturing sector .this manifests there is a much wider demand gap in the non - manufacturing sector .the economic system is thus far away from equilibrium in demand exchange .in contrast , times gives almost the same values for the two sectors , indicating that the system seems to be in equilibrium as regards exchange of workers .these findings agree well with those obtained in the previous study ..best - fit parameters and the position of the peak . 
[ cols="^,>,>,>,>,>",options="header " , ]a theoretical model was proposed to account for empirical facts on distribution of workers across clusters with different labor productivity .its key idea is to assume that there are restrictions on capacity of workers for clusters with high productivity .this is a rational assumption because most of firms belonging to such superb clusters are expected to be in a cutting - edge stage to lead industry .we then derived a general formula for the equilibrium distribution of productivity , adopting the ehrenfest - brillouin model along with the detail - balance condition necessary for equilibrium .fitting of the model to the empirical results confirmed that the theoretical model could encompass both of a boltzmann distribution with negative temperature on low - to - medium productivity side and a decreasing part in a power - law form on high productivity side .the authors would like to thank yoshi fujiwara , yuichi ikeda and wataru souma for helpful discussions and comments during the course of this work . andtoshiyuki masuda , chairman of the kyoto shinkin bank for his comments on japanese small to medium firms .we would also thank the credit risk database for the data used in this paper .this work is supported in part by _ the program for promoting methodological innovation in humanities and social sciences by cross - disciplinary fusing _ of the japan society for the promotion of science .
We construct a theoretical model for the equilibrium distribution of workers across sectors with different labor productivity, assuming that a sector can accommodate a limited number of workers which depends only on its productivity. A general formula for the distribution of productivity is obtained using the detailed-balance condition necessary for equilibrium in the Ehrenfest–Brillouin model. We also carry out an empirical analysis of the average number of workers in sectors of given productivity, on the basis of an exhaustive dataset in Japan. The theoretical formula explains two distinctive observational facts in a unified way: a Boltzmann distribution with negative temperature on the low-to-medium productivity side, and a decreasing, power-law part on the high productivity side.
assume that we have a matrix representing points in . in this paper, we will be concerned with linear feasibility problems that ask if there exists a vector that makes positive dot - product with every , i.e. where boldfaced is a vector of zeros .the corresponding algorithmic question is `` if ( p ) is feasible , how quickly can we find a that demonstrates ( p ) s feasibility ? '' .such problems abound in optimization as well as machine learning . for example , consider _ binary linear classification _ - given points with labels , a classifier is said to separate the given points if has the same sign as or succinctly for all .representing shows that this problem is a specific instance of ( p ) .we call ( p ) the _ primal _ problem , and ( we will later see why ) we define the _ dual _ problem ( d ) as : and the corresponding algorithmic question is `` if ( d ) is feasible , how quickly can we find a certificate that demonstrates feasibility of ( d ) ? '' .+ our aim is to deepen the geometric , algebraic and algorithmic understanding of the problems ( p ) and ( d ) , tied together by a concept called _margin_. geometrically , we provide intuition about ways to interpret margin in the primal and dual settings relating to various balls , cones and hulls . analytically , we prove new margin - based versions of classical results in convex analysis like gordan s and hoffman s theorems .algorithmically , we give new insights into the classical perceptron algorithm .we begin with a gentle introduction to some of these concepts , before getting into the details .[ [ notation ] ] * notation * + + + + + + + + + + when we write for vectors , we mean for all their indices ( similarly ) . to distinguish surfaces and interiors of balls more obviously to the eye in mathematical equations , we choose to denote euclidean balls in by , and the probability simplex by .we denote the linear subspace spanned by as lin , and convex hull of by conv .lastly , define and is the ball of radius ( are similarly defined ) .the margin of the problem instance is classically defined as if there is a such that , then . if for all , there is a point at an obtuse angle to it , then . at the boundary be zero .the in the definition is important if it were , then would be non - negative , since would be allowed .this definition of margin was introduced by goffin who gave several geometric interpretations .it has since been extensively studied ( for example , and ) as a notion of complexity and conditioning of a problem instance .broadly , the larger its magnitude , the better conditioned the pair of feasibility problems ( p ) and ( d ) are , and the easier it is to find a witnesses of their feasibility .ever since , the margin - based algorithms have been extremely popular with a growing literature in machine learning which it is not relevant to presently summarize . in sec . [sec : affmargin ] , we define an important and `` corrected '' variant of the margin , which we call _ affine - margin _ , that turns out to be the actual quantity determining convergence rates of iterative algorithms . [[ gordans - theorem ] ] * gordan s theorem * + + + + + + + + + + + + + + + + + + + this is a classical _ theorem of the alternative _ , see .it implies that exactly one of ( p ) and ( d ) is feasible .specifically , it states that exactly one of the following statements is true . 1 .there exists a such that .2 . 
there exists a such that .this , and other separation theorems like farkas lemma ( see above references ) , are widely applied in algorithm design and analysis .we will later prove generalizations of gordan s theorem using affine - margins .[ [ hoffmans - theorem ] ] * hoffman s theorem * + + + + + + + + + + + + + + + + + + + the classical version of the theorem from characterizes how close a point is to the solution set of the feasibility problem in terms of the amount of violation in the inequalities and a problem dependent constant . in a nutshell , if then + \big\|\ ] ] where is the `` hoffman constant '' and it depends on but is _ independent of . this and similar theorems have found extensive use in convergence analysis of algorithms - examples include .gler , hoffman , and rothblum generalize this bound to any norms on the left and right hand sides of the above inequality. we will later prove theorems of a similar flavor for ( p ) and ( d ) , where will almost magically turn out to be the affine - margin .such theorems are used for proving rates of convergence of algorithms , and having the constant explicitly in terms of a familiar quantity is useful . ** geometric * : in sec.[sec : affmargin ] , we define the _ affine - margin _ , and argue why a subtle difference from eq.([eq : margin ] ) makes it the `` right '' quantity to consider , especially for problem ( d ) .we then establish geometrical characterizations of the affine - margin when ( p ) is feasible as well as when ( d ) is feasible and connect it to well - known _radius theorems_. this is the paper s appetizer . * * analytic * : using the preceding geometrical insights , in sec.[sec : gordan ] we prove two generalizations of gordan s theorem to deal with alternatives involving the affine - margin when either ( p ) or ( d ) is strictly feasible . building on this intuition further , in sec.[sec : hoffman ] ,we prove several interesting variants of hoffman s theorem , which explicitly involve the affine - margin when either ( p ) or ( d ) is strictly feasible .this is the paper s main course . * * algorithmic * : in sec.[sec : np ] , we prove new properties of the normalized perceptron , like its margin - maximizing and margin - approximating property for ( p ) and dual convergence for ( d ) .this is the paper s dessert .we end with a historical discussion relating von - neumann s and gilbert s algorithms , and their advantage over the perceptron .an important but subtle point about margins is that the quantity determining the difficulty of solving ( p ) and ( d ) is actually _ not _ the margin as defined classically in eq.([eq : margin ] ) , but the affine - margin which is the margin when is restricted to lin( ) , i.e. for some coefficient vector .the affine - margin is defined as where is a key quantity called the gram matrix , and is easily seen to be a self - dual semi - norm . intuitively , when the problem ( p ) is infeasible but is not full rank , i.e. lin( ) is not , then will never be negative ( it will always be zero ) , because one can always pick as a unit vector perpendicular to lin , leading to a zero dot - product with every . 
since no matter how easily inseparable is, the margin is always zero if is low rank , this definition does not capture the difficulty of verifying linear infeasibility .similarly , when the problem ( p ) is feasible , it is easy to see that searching for in directions perpendicular to is futile , and one can restrict attention to lin , again making this the right quantity in some sense . for clarity , we will refer to when the problem ( p ) is strictly feasible ( ) or strictly infeasible ( ) respectively .we remark that when , we have , so the distinction really matters when , but it is still useful to make it explicit .one may think that if is not full rank , performing pca would get rid of the unnecessary dimensions .however , we often wish to only perform elementary operations on ( possibly large matrices ) that are much simpler than eigenvector computations . unfortunately , the behaviour of is quite finicky unlike is not stable to small perturbations when conv( ) is not full - dimensional . to be more specific ,if ( p ) is strictly feasible and we perturb all the vectors by a small amount or add a vector that maintains feasibility , can only change by a small amount .however , if ( p ) is strictly _ _in__feasible and we perturb all the vectors by a small amount or add a vector that maintains infeasibility , can change by a large amount . for example , assume lin is not full - dimensional , and is large .if we add a new vector to to form where has a even a tiny component orthogonal to lin( ) , then suddenly becomes zero .this is because it is now possible to choose a vector which is in lin , and makes zero dot - product with , and positive dot - product with .similarly , instead of adding a vector , if we perturb a given set of vectors so that lin( ) increases dimension , the negative margin can suddenly jump from to zero . despite its instability and lack of `` continuity '' , it is indeed this negative affine margin that determines rate of convergence of algorithms for ( d ) . in particular , the convergence rate of the von neumann gilbert algorithm for ( d ) is determined by much the same way as the convergence rate of the perceptron algorithm for ( p ) is determined by .we discuss these issues in detail in section [ sec : np ] and section [ sec.vng ] .the positive margin has many known geometric interpretations it is the width of the feasibility cone , and also the largest ball centered on the unit sphere that can fit inside the dual cone ( is the dual cone of cone ) see , for example . here, we provide a few more interpretations .remember that when eq .is feasible .[ margindual ] the distance of the origin to conv is . when , and eq .holds because ( d ) is feasible making the right hand side also zero . when , note that the first two equalities holds when , the next by the minimax theorem , and the last by self - duality of . the quantity is also closely related to a particular instance of the minimum enclosing ball ( meb ) problem . while it is common knowledge that meb is connected to margins ( and support vector machines ) , it is possible to explicitly characterize this relationship , as we have done below .[ meb ] assume \in { \mathbb{r}}^{d\times n} ] where and . 
since and , we have and consequently hence , whose distance from is precisely .the interpretation of the preceding theorem is that the distance to feasibility for the problem ( p ) is governed by the magnitude of the largest mistake and the positive affine - margin of the problem instance .we also provide an alternative proof of the theorem above , since proving the same fact from completely different angles can often yield insights .we follow the techniques of , though we significantly simplify it .this is perhaps a more classical proof style , and possibly more amenable to other bounds not involving the margin , and hence it is instructive for those unfamiliar with proving these sorts of bounds . forany given , define and hence note that . we used the self - duality of in eq.([eq : l2dual ] ) , lp duality for eq.([eq : lpdual ] ) , by definition for eq.([eq:2 g ] ) , and holder s inequality in eq.([eq : cb ] ) .the last equality follows because , since by proposition [ margindual ] .the perceptron algorithm was introduced and analysed by to solve the primal problem ( p ) , with many variants in the machine learning literature .for ease of notation throughout this section assume \in { \mathbb{r}}^{d\times n} ] with , and .then the iterates generated by the vng algorithm satisfy in particular , the algorithm finds with in at most steps .figure [ fig : vng ] illustrates the idea of the proof .assume as otherwise there is nothing to show .by the definition of affine margin , there must exist a point such that or equivalently .vng sets to be the nearest point to the origin on the line joining with .consider as the nearest point to the origin on a ( dotted ) line parallel to through .note ( internal angles of parallel lines ) .then , .hence , vng can converge linearly with strict infeasibility of ( p ) , but np can not .nevertheless , np and vng can both be seen geometrically as trying to represent the center of circumscribing or inscribing balls ( in ( p ) or ( d ) ) of conv(a ) as a convex combination of input points . in this paper , we advance and unify our understanding of margins through a slew of new results and connections to old ones .first , we point out the correctness of using the affine margin , deriving its relation to the smallest ball enclosing conv(a ) , and the largest ball within conv(a ) .we proved generalizations of gordan s theorem , whose statements were conjectured using the preceding geometrical intuition . using these tools , we then derived interesting variants of hoffman s theorems that explicitly use affine margins .we ended by proving that the perceptron algorithm turns out to be primal - dual , its iterates are margin - maximizers , and the norm of its iterates are margin - approximators .right from his seminal introductory paper in the 1950s , hoffman - like theorems have been used to prove convergence rates and stability of algorithms .our theorems and also their proof strategies can be very useful in this regard , since such hoffman - like theorems can be very challenging to conjecture and prove ( see for example ) . 
similarly , gordan s theorem has been used in a wide array of settings in optimization , giving a precedent for the possible usefulness of our generalization .lastly , large margin classification is now such an integral machine learning topic , that it seems fundamental that we unify our understanding of the geometrical , analytical and algorithmic ideas behind margins .this research was partially supported by nsf grant cmmi-1534850 .10 francis bach .duality between subgradient and conditional gradient methods . , 2012 .hd block .the perceptron : a model for brain functioning .i. , 34(1):123 , 1962 .jonathan borwein and adrian lewis . ,volume 3 .springer , 2006 .dennis cheung and felipe cucker . a new condition number for linear programming ., 91(1):163174 , 2001 .vasek chvatal . .macmillan , 1983 .george dantzig .an -precise feasible solution to a linear program with a convexity constraint in iterations independent of problem size .technical report , stanford university , 1992 .carl eckart and gale young .the approximation of one matrix by another of lower rank ., 1(3):211218 , 1936 .marina epelman and robert m freund .condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system ., 88(3):451485 , 2000 . marina a epelman , robert m freund , et al . .citeseer , 1997 .robert m freund and jorge r vera . some characterizations and properties of the distance to ill - posedness and the condition measure of a conic linear system ., 86(2):225260 , 1999 . elmer g gilbert .an iterative procedure for computing the minimum of a quadratic form on a convex set ., 4(1):6180 , 1966 .andrew gilpin , javier pea , and tuomas sandholm .first - order algorithm with convergence for -equilibrium in two - person zero - sum games ., 133(1 - 2):279298 , 2012 .jl goffin .the relaxation method for solving systems of linear inequalities ., pages 388414 , 1980 .osman gler , alan j hoffman , and uriel g rothblum .approximations to solutions to systems of linear inequalities . , 16(2):688696 , 1995 .alan j hoffman .on approximate solutions of systems of linear inequalities . , 49(4):263265 , 1952 .mingyi hong and zhi - quan luo . on the linear convergence of the alternating direction method of multipliers . , 2012 .dan li and tams terlaky .the duality between the perceptron algorithm and the von neumann algorithm ., 62:113136 , 2013 . albert bj novikoff . on convergence proofs for perceptrons. technical report , 1962 .aaditya ramdas and javier pea .margins , kernels and non - linear smoothed perceptrons .in _ proceedings of the 31st international conference on machine learning ( icml ) _ , 2014 .james renegar . some perturbation theory for linear programming ., 65(1):7391 , 1994 .james renegar .incorporating condition measures into the complexity theory of linear programming ., 5(3):506524 , 1995 .frank rosenblatt .the perceptron : a probabilistic model for information storage and organization in the brain . , 65(6):386 , 1958 .negar soheili and javier pea . a primal dual smooth perceptron von neumann algorithm . in _ discrete geometry and optimization _ , pages 303320 .springer , 2013 .m. todd and y. ye .approximate farkas lemmas and stopping rules for iterative infeasible - point iterates for linear programming ., 81:121 , 1998 .vladimir n vapnik . statistical learning theory .
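As a concrete companion to the algorithmic part of the paper summarized above, here is a small Python sketch of a normalized-perceptron-style iteration on hypothetical, strictly feasible data: the iterate is kept inside the convex hull of the (normalized) columns of A, the most violated column is blended in with a decaying weight, and the attained margin is reported at the end. The step sizes and stopping rule are schematic choices for illustration, not a verbatim transcription of the algorithm analysed in the paper.

import numpy as np

rng = np.random.default_rng(3)
d, n = 5, 40

# hypothetical strictly feasible instance: flip columns so that w_true sees them all
w_true = rng.normal(size=d)
A = rng.normal(size=(d, n))
A = A * np.sign(w_true @ A)
A = A / np.linalg.norm(A, axis=0)   # normalize columns, as assumed in the analysis

# normalized-perceptron-style iteration: w stays in conv(columns of A)
w = A.mean(axis=1)
for k in range(1, 2001):
    j = int(np.argmin(w @ A))       # most violated (smallest dot-product) column
    theta = 1.0 / (k + 1)
    w = (1.0 - theta) * w + theta * A[:, j]

print("attained margin:", float((w @ A).min() / np.linalg.norm(w)))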
Given a matrix, a linear feasibility problem (of which linear classification is a special case) aims to find either a solution to a primal problem or a certificate for the dual problem, which is a probability distribution. Inspired by the continued importance of ``large-margin classifiers'' in machine learning, this paper studies a condition measure of the problem instance, called its _margin_, that determines the difficulty of both of the above problems. To aid geometrical intuition, we first establish new characterizations of the margin in terms of relevant balls, cones and hulls. Our second contribution is analytical: we present generalizations of Gordan's theorem and variants of Hoffman's theorem, both using margins. We end by proving new results on a classical iterative scheme, the perceptron, whose convergence rate famously depends on the margin. Our results are relevant for a deeper understanding of margin-based learning and for proving convergence rates of iterative schemes, apart from providing a unifying perspective on this vast topic.
Embedding diagrams have been used extensively to visualize and understand properties of hypersurfaces in curved space. They are surfaces in a fiducial flat space having the same _intrinsic_ curvature as the hypersurface being studied. In this paper we call the former a ``model surface'' and the latter a ``physical surface''. A familiar example is the ``wormhole'' construction as the embedding diagram of the time-symmetric hypersurface in the maximally extended Schwarzschild geometry. Another example often used is a sheet of paper curled into a cone in 3-dimensional flat space; with the intrinsic curvature of the conical surface being zero, the ``model surface'' in the embedding diagram is a flat surface. In this paper we investigate the construction of a different kind of embedding diagram. We examine the construction of a model surface (in a fiducial flat space) having the same _extrinsic_ curvature as the physical surface. Such an _extrinsic_ curvature embedding diagram describes not the geometry of the physical surface, but instead how it is _embedded_ in the higher-dimensional physical spacetime. (For convenience of description, we will discuss in this paper a 3-dimensional spacelike hypersurface in a 4-dimensional spacetime; the same idea applies to a surface of any dimension in a space of any higher dimension.) It is of interest to note that such an extrinsic curvature embedding diagram carries two senses of ``embedding'': (1) it is a surface ``embedded'' in a fiducial flat space to provide a representation of some properties of the physical surface (the meaning of embedding in the usual kind of embedding diagram based on intrinsic curvature), and (2) the diagram also represents how the physical surface is ``embedded'' in the physical spacetime. The extrinsic curvature embedding carries information complementary to the usual kind of embedding diagram showing the intrinsic curvature (which we call ``intrinsic curvature embedding'' in this paper). For example, in the case of the constant-Schwarzschild-time hypersurface in a Schwarzschild spacetime, the _extrinsic_ curvature embedding is a flat surface.
for the case of the curled paper ,the extrinsic curvature embedding is a conical surface .in addition to its pedagogical value ( like those of intrinsic curvature embedding in providing visual understanding ) , such extrinsic curvature embedding may help understand the behavior of different time slicings in numerical relativity , and properties of different foliations of spacetimes .some elementary examples are worked out in this paper as a first step in understanding extrinsic curvature embedding .in the usual kind of embedding diagram ( the intrinsic curvature embedding ) one constructs a `` model '' surface in a fiducial flat space which has the same intrinsic geometry as the physical surface , in the sense of having the same induced metric .it should immediately be noted that in general it is impossible to match all metric components of the two surfaces .for example , for a 3 dimensional ( 3d ) surface in a 4d curved space , the induced metric ( ) ( the first fundamental form ) has 6 components , each of which is function of 3 variables ( ) .the 3d model surface in the fiducial 4d flat space ( with flat metric in coordinates ( ) ) is represented by only one function of 3 variables .there are 3 more functions one can choose , which can be regarded either as making a coordinate change in the physical or model surface , or as choosing the mapping between a point ( ) on the physical surface to a point ( ) on the model surface .altogether , there are 4 arbitrary functions ( e.g. , ) at our disposal . in generalwe can not match all 6 components of the induced metric .only certain components can be matched , and the embedding can only provide a representation of these components . an alternative is to construct an embedding with the model surface in a higher dimensional space . in the case of a stationary spherical symmetric spacetime like the schwarzschild spacetime , and when one is examining the geometry of a constant - killing - time slice , one can choose a coordinate system ( e.g. , the schwarzschild coordinate ) in which there is only one non - trivial induced metric component ( e.g. , the radial metric component ) .this component can be visualized with an embedding diagram using the trivial mapping , , between the physical space and the fiducial space , with being the circumferential radius , and .this leads to the `` wormhole '' embedding diagram in textbooks and popular literature .next we turn to extrinsic curvature embedding diagrams .to illustrate the idea , we discussed in terms of a 3d spacelike hypersurface in a 4d spacetime .consider a constant time hypersurface in a 4d spacetime with the metric given in the usual form is the lapse function , is the shift vector , and is the spatial 3-metric of the constant hypersurface . the extrinsic curvature ( the second fundamental form ) expressed in terms of the lapse and shift function is here `` '' represents covariant derivative in the three - dimensional space .we seek a surface with the same extrinsic curvature embedded in a fiducial 4d flat spacetime it is easy to see that the extrinsic curvature of the surface is given by where , , and the covariant derivative in is with respect to a 3-metric defined by . is the matrix inverse of . for any given 3 hypersurface in a 4d spacetime, we have only 4 functions that we can freely specify ( and the 3 spatial coordinate degrees of freedom ) , but there are 6 components to be matched . 
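for reference, the two stripped expressions above are presumably the standard 3 + 1 forms; a hedged reconstruction is given below ( the overall sign of the extrinsic curvature is convention dependent, so the choice made here is an assumption ):

```latex
ds^2 \;=\; -N^2\,dt^2 \;+\; g_{ij}\,\bigl(dx^i+\beta^i\,dt\bigr)\bigl(dx^j+\beta^j\,dt\bigr),
\qquad
K_{ij} \;=\; \frac{1}{2N}\Bigl(D_i\beta_j + D_j\beta_i - \partial_t g_{ij}\Bigr),
```

where N is the lapse, \beta^i the shift, g_{ij} the spatial 3-metric and D_i its covariant derivative, matching the quantities named in the text.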
in general we can only have embedding representations of 4 of the components of the extrinsic curvature unless we go to a higher dimensional space , just like in the case of intrinsic curvature embedding .this brings a set of interesting questions : under what conditions will a surface be fully `` extrinsically - embeddable '' in a fiducial flat space one dimensional higher ?how many dimensions higher must a fiducial space be in order for a general surface to be extrinsically - embeddable ?we hope to return to these questions in future publications .the two kinds of embedding diagrams , intrinsic curvature embedding and extrinsic curvature embedding , are supplementary to one another and can be used together .the information contained in the usual kind of intrinsic embedding diagram is partial in the sense that different slicings of the same spacetime will give different intrinsic curvature embedding diagrams , and this information of which slicing is used ( the choice of the `` time '' coordinate ) is contained in the extrinsic curvature embedding .similarly , the information given in the extrinsic curvature embedding is partial , in the sense that the extrinsic curvature components depend on the choice of the spatial coordinates , an information that is contained in the intrinsic curvature embedding . with the two kinds of embedding diagram constructed together, one can read out both the induced metric components and the extrinsic curvature components . in principle , all geometric properties of the surface can then be reconstructed , including how the surface is embedded in the higher dimensional spacetime . in the following we give explicit examples of these constructions .we begin with the simple case of the schwarzschild metric in schwarzschild coordinate , since the metric is time independent and has zero shift , from ( [ eq : kij ] ) one sees immediately that the constant slicing has for all i and j. the `` extrinsic curvature embedding '' is obtained by identifying a point ( ) to a point ( ) in the fiducial flat space , and by requiring the extrinsic curvatures of the physical surface ( embedded in schwarzschild spacetime ) and the model surface ( embedded in flat spacetime ) be the same .this leads to a flat model surface in the fiducial flat space .we see that while the _ intrinsic _ curvature embedding of the schwarzschild slicing is non - trivial ( as given in text books and popular articles ) , the _ extrinsic _ curvature embedding is trivial .this high - lights that the constant schwarzschild time slicing is a `` natural '' foliation of the schwarzschild geometry , in the sense that these ( curved ) constant - schwarzschild- surfaces are embedded in the ( curved ) schwarzschild geometry in a trivial manner : same as a flat surface embedded in a flat spacetime .it is interesting to compare this to different time slicings in schwarzschild spacetime .define the schwarzschild metric ( [ eq : sch ] ) becomes the surfaces have flat _ intrinsic _ geometry , so the intrinsic curvature embedding is trivial ( the model surface is a flat surface in the fiducial flat space ) .but the _ extrinsic _ curvature embedding is non - trivial ; as we shall work out below .this is just the opposite situation of the constant - schwarzschild- slice ( non - trivial intrinsic embedding but trivial extrinsic embedding ) . for the extrinsic curvature embedding of the constant- `` flat slicing '' of metric ( 3.3 ) , with the spherical symmetry, it suffices to examine the slice . 
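before reading off the components quoted next, a symbolic sketch may help. it assumes that the `` spatially flat '' slicing of eq. ( 3.3 ) is the painleve - gullstrand form of the schwarzschild metric ( unit lapse, flat spatial metric, purely radial shift ) and uses the sign convention of the 3 + 1 formula quoted earlier, so overall signs may differ from the paper's eqs. ( 3.4 ), ( 3.5 ):

```python
import sympy as sp

# symbolic check of the extrinsic curvature of a constant-T slice, assuming the
# "spatially flat" slicing is the painleve-gullstrand form of schwarzschild:
# unit lapse, flat spatial 3-metric, radial shift beta^r = sqrt(2M/r).
r, theta, phi, M = sp.symbols('r theta phi M', positive=True)
x = [r, theta, phi]
n = 3

g = sp.diag(1, r**2, r**2 * sp.sin(theta)**2)    # flat 3-metric, spherical coords
ginv = g.inv()
N = sp.Integer(1)                                # lapse
beta_up = sp.Matrix([sp.sqrt(2*M/r), 0, 0])      # shift beta^i
beta_dn = g * beta_up                            # beta_i

# christoffel symbols of the 3-metric
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
               - sp.diff(g[i, j], x[l])) / 2 for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# covariant derivative D_i beta_j on the slice
D = sp.Matrix(n, n, lambda i, j: sp.diff(beta_dn[j], x[i])
              - sum(Gamma[k][i][j] * beta_dn[k] for k in range(n)))

# K_ij = (D_i beta_j + D_j beta_i - d/dt g_ij)/(2N); the slice is time
# independent, so the time-derivative term drops out.
K = ((D + D.T) / (2 * N)).applyfunc(sp.simplify)
K_mixed = (ginv * K).applyfunc(sp.simplify)      # K^i_j
print(K_mixed[0, 0], K_mixed[1, 1], K_mixed[2, 2])
# up to an overall, convention-dependent sign this gives
# K^r_r = -(1/2)*sqrt(2*M/r**3) and K^theta_theta = K^phi_phi = sqrt(2*M/r**3).
```

the qualitative features used in the discussion below — opposite signs for the radial and angular components and a common fall - off at large r — come out of this computation directly.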
a constant slicing in metric ( [ eq : sch2 ] )has extrinsic curvature the extrinsic curvature embedding is given by a surface embedded in a fiducial 3d minkowski space using ( [ eq : embedk ] ) , it is straightforward to find that the non - trivial extrinsic curvature components are where is a function to be determined by matching the extrinsic curvature to that of the physical surface given by ( 3.4 ) , ( 3.5 ) .it is immediately clear that with only one arbitrary function , it would not be possible to match both of the two non - trivial extrinsic curvature components . to enable the matching , we introduce a spatial coordinate transformation _ on _ the physical surface . as is a tensor on the surface ,the coordinate change will change the value of but _ not _ how the surface is embedded . due to the spherical symmetry, it suffices to rescale only the radial coordinate , keeping the angular coordinate unchanged . using ( 3.7 , 3.8 ) and ( 3.4 , 3.5 ) , and identifying the fiducial flat space coordinates with physical space coordinate , we obtain the conditions on the functions and where .the boundary conditions for the system are ( i ) tends zero at infinity , and ( ii ) tends to at infinity ; that is , the embedding is trivial asymptotically .the two equations lead to a quadratic equation for with the two roots while both the `` + '' and the `` - '' sign solutions satisfy the boundary condition ( i ) for , it is easy to see that only the `` - '' solution leads to a that satisfies the boundary condition ( ii ) for .integration of the 2nd order equation associated with the `` - '' solution gives the extrinsic curvature embedding diagram for the spatially flat constant time slicing of the schwarzschild spacetime as shown in fig .1 . the height of the surface is the value of , the horizontal plane is the plane ( recall ) .all quantities are in unit of ( i.e. , ) . in what sense does this figure provide a `` visualization '' of the extrinsic curvature of the physical surface ?the extrinsic curvature compares the normal of the surface at two neighboring points ( cf .21.5 of ) . in fig .1 , with the model surface embedded in a flat space , one can easily visualize ( i ) unit vectors normal to the surface , ( ii)the parallel transport of a unit normal vector to a neighboring point , and ( iii ) the subtraction of the transported vector from the unit normal vector at the neighboring point , all in the usual flat space way .for example , in fig . 1 , imagine unit normals at two neighboring points and . with the horn shape surface , the `` tips '' of the unit normal vectors are closer than their bases .when parallel transported , subtracted and projected into the direction ( all done in the flat space sense ) this gives the value of .on the other hand , if we compare the normals of the neighboring points and , the `` tips '' of the normal vectors are further away than their bases .this accounts for the difference in sign of and in ( 3.4 ) and ( 3.5 ) .also explicit visually is the fact that , at large , the unit normals at neighboring points ( both in the and directions ) become parallel , showing that the extrinsic curvature goes to zero .( notice that is not going to zero as is not a unit vector ; rather , the extrinsic curvature contracted with the _ unit _ vector in the direction is going to zero as in the same way as . 
)we note that does not tend to a constant but is proportional to at large , although does go to zero as implied by the boundary condition .we note that this prescription of visualizing the covariant components of the extrinsic curvature is preciously the flat space version of the prescription given in sec .21.5 of .while the directions of the normal vectors and the result of a parallel transport are not readily visualizable in the curved space construction given in , the use of an embedding diagram in a fiducial flat space enables the easy visualization of normal vectors and their parallel transport as all of them are constructed in the usual flat space sense .it is also for the easiness of visualization that we choose to work with the covariant component of the extrinsic curvature .while the contravariant components can be treated equivalently ( note that we are working with spacetimes endowed with metrics ) , its visualization involved one - form which is less familiar ( see however the visualization of forms in ) .returning to the example at hand , we show in figs .2a and 2b the `` scaling function '' v.s .we see that is linear in for large , satisfying the boundary condition ( ii ) . in fig .2a , we see that is nearly linear throughout . to see that is not exactly linear , we show in fig . 2b that is appreciably different from zero in the region of smaller .this small difference from exact linearity is precisely what is needed to construct a model surface that can match both and .we see that the embedding is perfectly regular at the horizon ( ) .it has a conical structure at , in the sense that is not going to zero but instead approaches 1 from below ( i.e. , for small .although the surface covers all values , we note that approaches a constant , implying that the embedding diagram does _ not _ cover the inner - most region ( from to ) of the the circumferential radius .comparing this to the constant - schwarzschild - time slicing ( constant slicing in metric ( 3.1 ) ) is again interesting : the _ intrinsic _ curvature embedding of the constant - schwarzschild - time slicing also does not cover the inner - region ( from to ) , while the _ extrinsic _ curvature embedding of the constant - schwarzschild - time slicing covers all values just like the _ intrinsic _ curvature embedding of the `` spatially flat '' slicing .we emphasize again that the extrinsic curvature embedding diagram fig . 1 does _ not _ carry any information about the intrinsic geometry of the surface .for example , the circumference of a circle at a fixed is not , and the distance on the model surface is not the physical distance between the corresponding points on the physical surface ( unlike the case of the intrinsic curvature embedding diagram ) .this extrinsic curvature embedding diagram fig .1 carries only the information of how the `` spatially flat '' slicing is embedded in the schwarzschild geometry , in the sense that the relations between the normal vectors of the slicing embedded in the curved schwarzschild spacetime are the same as given by the surface shown in fig . 1 embedded in a flat minkowski spacetime. one might want to obtain the physical distance between two neighboring points , say , at and , in fig .1 . this information is contained in figs.2a and 2b , as the scaling factor gives the relation between and .one can also give a visual representation of this information of the intrinsic geometry by plotting an _ intrinsic _ embedding diagram , as in fig . 
2c .for this spatially flat slicing , the _ intrinsic _ embedding diagram is a flat surface in a fiducial flat space . to enable this intrinsic embedding diagram fig.2c to be used conveniently with the extrinsic embedding diagram fig . 1 . ,we have plotted fig .2c in a way different from what is usually done in plotting embedding diagrams : the labeling of the spatial coordinate in this diagram is given in , the same coordinate ( note ) as used in the extrinsic embedding diagram ( or more precisely , it is , and ) . in this way ,the physical distance between any two coordinate points and in the _ extrinsic _ curvature embedding fig . 1 ( remember , the coordinate used in fig .1 ) can be obtained directly by measuring the distance on the model surface between the corresponding two points and in fig . 2c .hence , between this pair of intrinsic and extrinsic embedding diagrams , we can obtain all necessary information about the physical surface , with both the first ( metric ) and second ( extrinsic curvature ) fundamental forms explicitly represented .we note that in fig .2c , the coordinate labels are very close to equally spaced .this is a reflection of the fact that the scaling function given in fig .2a is very close to being linear ( but not exactly ) .this near - linearity of the scaling function , together with the fact the intrinsic embedding diagram is flat , tell us that in this special case , the physical distances ( the physical metric ) on the _ extrinsic _ curvature embedding surface in fig .1 between points are , to a good approximation , given simply by their coordinate separations in ( while the extrinsic curvature is contained in the shape of the surface ) .obviously this would not be true in general .next we turn to another simple example .the infalling eddington - finkelstein coordinate is defined by let the schwarzschild metric in the `` infalling slicing '' becomes both the intrinsic and extrinsic curvature embedding diagrams of the infalling slicing are non - trivial . in the following we work out the extrinsic curvature embedding .the extrinsic curvature of the `` infalling slicing '' is given by again with the spherical symmetry it suffice to study the slicing .to construct the extrinsic embedding , we ( i ) introduce a coordinate scaling , ( ii ) identify the coordinate with of ( 3.6 ) , and ( iii ) require .this leads to the following equations for and : eliminating leads to a quadratic equation for , the two roots of which give two second order equations for .we omit the rather long expressions here .again only one of the two equations admit a solution with the correct asymptotic behavior at large ( tends to zero and tends to ) .integrating this second order equation gives the embedding diagram shown in fig .3 . the height of the surface represents the value of , the horizontal plane is the ( ) plane .all quantities are in unit of .4a gives the scaling function v.s . , showing that it satisfies the boundary condition at infinity .asymptotically tends to , while , and .again we see that is very close to being linear . to show that it is not exactly linear , we plot in fig .4b the derivative of v.s . . for ,the derivative is considerably less than .as one may expect , the embedding is regular at the horizon , but has a conical structure at , same as the `` spatially flat slicing '' case above . 
for small , tends to ( from above ) , while tends to a constant .this implies that the inner most region of the circumferential radius ( from 0 to 1.2 m ) is not covered in the embedding diagram , again similar to the `` spatially flat slicing '' extrinsic curvature embedding studied above .we see that while the model surface in the `` spatially flat slicing '' embedding diagram fig .1 dips down for small , the model surface in the `` infalling slicing '' embedding diagram fig .3 spikes up .this is expected as the signs of the extrinsic curvature components ( ) are opposite of one another for the two slicings .we can easily see in figs . 1 and 3 , that in one case `` the tips of the normal are closer than their base '' or vise versa .such visual inspection is possible as the model surfaces are now embedded in flat spaces , enabling the use of flat space measure of distances , and normal vectors .again , one might want to visualize the physical distance between two neighboring points in fig .this can be done by plotting the corresponding _ intrinsic _ embedding diagram in the coordinate , as is given in fig .the physical distance between any two coordinate points and can be measured by their distance on this _ intrinsic _ embedding surface , in the flat space way . due to the near linearity of the scaling function , we see that the coordinate labels are again very close to equally spaced. however , in this case , unlike the spatially flat slicing above , the physical distance between the same coordinate distance is larger for smaller , as we can see from the curving of the intrinsic embedding surface . between this pair of intrinsic and extrinsic embedding diagrams, we can again visualize all information of the physical surface .in this paper we propose a new type of embedding diagram , i.e. , the `` extrinsic curvature embedding diagram '' based on the 2nd fundamental form of a surface .it shows how a surface is embedded in a higher dimensional curved space .it carries information complimentary to the usual kind of `` intrinsic curvature embedding diagram '' based on the 1st fundamental form of the surface .we illustrate the idea with 3 different slicings of the schwarzschild spacetime , namely the constant schwarzschild slicing ( eq . ( 3.1 ) ) , the `` spatially flat '' slicing ( eq . ( 3.3 ) ) and the `` infalling '' slicing ( eq . ( 3.13 ) ) .the intrinsic and extrinsic curvature embeddings of the different slicings are discussed , making interesting comparisons .the intrinsic curvature embedding diagram depends on the choice of the `` time '' slice ( in the 3 + 1 language of this paper ) , which is a piece of information carried in the extrinsic curvature . on the other hand ,the extrinsic curvature embedding diagram constructed out of the extrinsic curvature components depends on the choice of the `` spatial '' coordinates , which is a piece of information carried in the intrinsic curvature embedding diagram . with the two kinds of embedding diagram constructed together , all geometric properties of the surfacecan then be reconstructed , including how the surface is embedded in the higher dimensional spacetime .why do we study embedding diagrams ?one can ask this questions for both the intrinsic and extrinsic embedding constructions .it is clear that embedding construction has pedagogical value , e.g. 
, the wormhole diagram of the schwarzschild geometry appears in many textbooks introducing the ideas of curved spacetimes .the usual embedding diagrams shown are those based on the intrinsic curvature . herewe introduce a complimentary kind of embedding diagrams which is needed to give the full information of the surface in the curved spacetime . beyond their pedagogical value, we would like to point out that embedding diagram could be useful in numerical relativity .indeed the authors were led to the idea of extrinsic curvature embedding in trying to find a suitable foliation ( to choose the lapse function ) in the numerical construction of a black hole spacetime . in the standard 3 + 1 formulation of numerical relativity , the spatial metric and the extrinsic curvature used in parallel as the fundamental variables in describing a particular time slice .one chooses a lapse function to march forward in time .a suitable choose is crucial to make both the and regular , smooth and evolving in a stable manner throughout the spacetime covered by the numerical construction . whether a choice is suitable depends on the properties of the slicing and hence has to be dynamical in naturethis is a problem not fully resolved even in the construction of a simple schwarzschild spacetime .embedding diagrams let us see the pathology of the time slicing clearly and hence could help in the picking of a suitable lapse function .for example , in the constant schwarzschild time slicing ( eq.(3.1 ) ) , the intrinsic curvature embedding dips down to infinity at and can not cover the region inside ( the extrinsic curvature embedding is flat and nice for all ) . in the timeslicing of eq .( 3.3 ) , the intrinsic curvature embedding is flat and nice for all , but the extrinsic curvature embedding has a conical singularity near and can not cover the region inside , as shown in sec .3 of this paper . for the use of embedding diagrams in numerical relativity , and in particular in looking at the stability of numerical constructions with different choices of time slicing, one would need to investigate the two kinds of embedding diagrams in dynamical spactimes .we are working on simple cases of this presently .we thank malcolm tobias for help in preparing the figures . this work is supported in part by usnsf grant phy 9979985 . c. w. misner, k. s. thorne and j. a. wheeler , * gravitation * , ( w. h. freeman , san . francisco , 1973 ) .e. kasner , am . j. math . ,* 43 * , 126 , ( 1921 ) . c. fronsdal , phys ., * 116 * , 778 , ( 1959 ) . c. j. s. clarke , proc .london a , * 314 * , 417 , ( 1970 ) .e. kasner , am . j. math . , * 43 * , 130 , ( 1921 ) . fig .2a . scaling factor defined by ( 3.9 , 3.10 ) for the `` spatially flat slicing '' of the schwarzschild spacetime ( line element ( 3.3 ) ) . 
tends to at infinity and is basically linear throughout . it tends to a non - zero constant as approaches zero . fig . 2b . derivative of with respect to is plotted in the close zone , showing that it is not exactly linear . this slight deviation from exact linearity is needed to enable both and to be matched . fig . 2c . the _ intrinsic _ curvature embedding diagram ( corresponding to the _ extrinsic _ curvature embedding diagram in fig.1 ) is plotted in , the same coordinate as used in fig . 1 ( or more precisely , it is , and ) . the physical distance between any two coordinate points and in the _ extrinsic _ curvature embedding fig . 1 can be obtained directly by measuring the distance in the flat space sense between the corresponding two points and on the model surface in fig . 2c . between fig . 1 and 2c , we can obtain all necessary information about the physical surface , with both the first ( metric ) and second ( extrinsic curvature ) fundamental forms explicitly represented . fig . 3 . embedding diagram for the `` infalling slicing '' of the schwarzschild spacetime ( line element ( 3.13 ) ) . the function defined by ( 3.15 , 3.16 ) is plotted on the plane . tends to 1 at the origin ( tends to 0 ) , where the embedding has a conical singularity . all quantities are in unit of m. fig . 4a . scaling factor defined by ( 3.15 , 3.16 ) for the `` infalling slicing '' of the schwarzschild spacetime ( line element ( 3.13 ) ) . tends to at infinity and is nearly linear throughout . it tends to a non - zero constant as approaches zero . fig . 4c . the _ intrinsic _ curvature embedding diagram for the infalling slicing , corresponding to the _ extrinsic _ curvature embedding diagram in fig.3 , is plotted in , the same coordinate as used in fig . 3 . due to the near linearity of in fig . 4a , the coordinate labels are nearly equally spaced . the physical distance between any two coordinate points and in the _ extrinsic _ curvature embedding fig . 3 can be obtained directly by measuring the distance in the flat space sense on the model surface fig . 4c between the corresponding two coordinate points and . we see that the same coordinate separation corresponds to a larger physical distance in the near zone .
embedding diagrams have been used extensively to visualize the properties of curved space in relativity . we introduce a new kind of embedding diagram based on the _ extrinsic _ curvature ( instead of the intrinsic curvature ) . such an extrinsic curvature embedding diagram , when used together with the usual kind of intrinsic curvature embedding diagram , carries the information of how a surface is _ embedded _ in the higher dimensional curved space . simple examples are given to illustrate the idea .
in quantum information processing , entanglement is a particularly useful resource and has many applications such as secret key distribution , teleportation and dense coding .recently , these quantum communication protocols have been implemented .it is imaginable that in the future a large number of distant users would want to engage in communicating with each other through quantum protocols . to enable this to happen, such distant users will need to share particles in maximally entangled states , irrespective of noise in the entanglement distribution channels .various schemes have been put forward which could directly or indirectly help in such distribution of entanglement , and have been experimentally demonstrated . given a certain physical distribution of users intending to communicate quantum mechanically, one can connect them with quantum channels to construct networks for the distribution of entanglement .networks for such entanglement distribution can have different architectures depending on which users are linked directly by quantum channels and which users are indirectly linked through intermediate nodes .in classical networks , two of the major network topologies are the star network and the ring network . in a ring network , one continuous ring joins all parties who wish to communicate ( as shown in fig.[factors](a ) ) , whereas in a star network all parties are connected to a central hub where information is exchanged ( as shown in fig.[factors](b ) ) .if all the nodes of such classical networks are assumed to be free from attacks and failures , then wire length becomes the variable of interest for comparison of the two network types - i.e. , one network is said to be better than the other when it requires less wire to construct .this is because noise in the connecting channels is unimportant for classical communications .no matter how noisy the connecting channels are , classical information can be amplified arbitrarily and sent faultlessly through these channels .based on the wire - length criterion , for networks having the simple circular layouts with symmetrically placed users , as shown in fig.[factors ] , the ring network is better than a star network when the number of users is , while the reverse holds true for . for a quantum network , however , we use _ entanglement _ , rather than wire - length , as a figure of merit in comparing networks .this is due to the fact that when distributing entanglement via quantum channels , unavoidable noise always degrades perfect entanglement in the transmission process and consequently it is not as easy to reliably distribute entanglement .we therefore use the criterion that the better network is the one that permits the sharing of a greater amount of entanglement between pairs of users on average . to begin , we tackle the problem for a general channel , when the available resources are unlimited or so large that asymptotic entanglement distillation protocols can be used as a part of the entanglement distribution method .note that physically such a situation is permitted _ only when _ each user can store " qubits noiselessly for a long time .this gives them the chance to manipulate a large number of qubits together , as is required for asymptotic entanglement distillation . 
in this asymptotic case, we find the criterion for better entanglement distribution becomes equivalent to the wire - length criterion of classical networks .next we examine two cases of extremely limited resources ( only one initial entangled pair available to one user , and exactly one initial entangled pair available to each user ) with a specific quantum channel to illustrate the fact that the network for better entanglement distribution can differ sharply from that for better classical communications .after that , we give a heuristic explanation of this striking difference between the cases of limited and infinite resources as seen in our specific examples .we consider a very simple type of network to facilitate study of the problem .suppose there are parties , who are distributed in a circle at a constant distance r from the centre , and wish to share entanglement .they connect themselves using quantum channels using either a star or a ring layout ( see fig.[factors ] ) . as the number increases , the distance between a party and its neighbors decreases .we define a wirelength as the shortest connection available on either network - for the ring network this is the channel between neighbors and for the star network it is the channel between one party and the hub . when any two parties wish to share entanglement , they must use some distribution method to share entanglement between themselves using the most efficient route available to them for the network they are connected by .for a given channel , this method will involve sharing entangled pairs either directly between the two interested parties or between intermediate parties and then joining them using _entanglement swapping _ . there may be some form of distillation involved to concentrate the intermediate or final entanglement in the distribution . in general, one will have to adopt a specific entanglement distillation protocol .it could be asymptotic or non - asymptotic , depending on the availability of resources .it may be optimal or non - optimal depending on whether the knowledge of the optimal distillation protocol exists for the classes states generated during distribution through the channels provided , and whether technology exists for its implementation .a specific type of quantum channel and associated entanglement distillation protocol ( not necessarily optimal ) , together with entanglement swapping to link up adjacent nodes , will comprise a specific distribution method .the basic approach of this paper will be to first choose a specific distribution method and then compare its efficiency on the star and the ring lay - outs .the variables which describe the quantum channel available between two parties are : * - the number of wirelengths between them * - the length of the wirelengths between them to clarify this , consider fig .[ factors ] which shows 6 parties of whom two may wish to communicate . if parties 1 and 4 wish to share entanglement , then using the star network ( fig .1(a ) ) they must use wirelengths of length each . if they are connected using the ring network ( fig .1(b ) ) then they must use wirelengths of length each .let us define a function which gives the entanglement distributed between two parties separated by wirelengths each of length . 
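the next paragraphs average this function over all pairs for the two layouts; as a forward reference, a minimal sketch of that bookkeeping is given here. the function E itself must be supplied by whichever distribution method is being modelled, and the exponential toy model at the end is purely illustrative.

```python
import math
from itertools import combinations

def average_entanglement(N, R, E):
    """average distributed entanglement over all pairs of N users on a circle
    of radius R, for the star and ring layouts.  E(n, l) is the entanglement
    obtained across n wirelengths of length l (supplied by the caller)."""
    # star: every pair is joined through the hub by two radial wirelengths
    E_star = E(2, R)
    # ring: neighbours are separated by a chord 2 R sin(pi/N); a pair uses the
    # shorter way round, i.e. min(k, N-k) wirelengths
    l_ring = 2 * R * math.sin(math.pi / N)
    pairs = list(combinations(range(N), 2))
    E_ring = sum(E(min(j - i, N - (j - i)), l_ring) for i, j in pairs) / len(pairs)
    return E_star, E_ring

# toy model of E(n, l), for illustration only; the paper derives E from
# specific channels and distillation protocols
toy_E = lambda n, l: math.exp(-0.5 * n * l)
for N in range(3, 10):
    s, r = average_entanglement(N, R=1.0, E=toy_E)
    print(N, round(s, 3), round(r, 3))
```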
to be able to compare the two network layouts we calculate an which is the distributed entanglement averaged over all possible pairs of parties who wish to communicate : for a star network is always and is always so we have : for a ring network , is the distance between two neighboring parties , and is so we have : bearing in mind that in a ring network entanglement can be distributed either way round the network means this formula can be refined to always use the shortest distance : where if is odd or if n is even and is 0 if is odd , or 1 if is even .to find at what point one layout becomes better than the other , we are interested in finding , for a particular distribution method , the where : we are interested in comparing how this value of compares with the classical case , where , for parameters we specify , the ring network became better than the star network as the number of parties is increased . for a classical network , as noted before , this occurs at .unfortunately there is no analytic formula available for the function for an arbitrary quantum channel .this is because represents the amount of entanglement that can be distilled from a state after decoherence during transmission through a noisy channel .no general formula is known yet for the distillable entanglement of a given state . despite this fact , it is possible ( as we will show ) to provide a general statement about the case when an unlimited number of pairs are available across each wirelength .however , when only a limited amount of resources are available per user , we have to rely on the explicit form of the distillable entanglement . in case of limited resources , we will therefore calculate for specific circumstances ( specific channel types and specific distribution methods ) and use in eq.([generaleqn ] ) .the cases we consider are : * distributing an unlimited number of pairs along each wirelength and linking each wirelength up using entanglement swapping .we consider a _general _ noisy channel for this case .* distributing one pair traveling from source to destination along one or more wirelengths .we use a specific type of quantum channel and a specific distribution method . * distributing one pair between each party , and then using entanglement swapping to link them up .we use a specific type of channel for this case as well .we then try to investigate and explain the trends observed in the above specific cases .first we consider the case where the parties in the network share a very large number of maximally entangled pairs .an equal number is given to each party . in the case of a ring network each party then sends one half of the pair to their neighbour on the left whereas in a star network each party would send one half to a central hub .the consequence is that in either case , pairs are shared across each wirelength .since was very large , is very large ( assuming stays small , of course ) and so an asymptotic number of maximally entangled pairs can be distilled across each wirelength i.e. where is the distillable entanglement of a pair that has decohered on travel through the wirelength . in this asymptotic casewe assume that the maximally entangled states can be collected together in each wirelength and then _ matched _( i.e. 
, aligned end to end ) with maximally entangled states in adjacent wirelengths .the ability to do this would depend on being able to discriminate and store the distilled maximally entangled states .connecting up adjacent maximally entangled pairs in succession by entanglement swapping then produces maximally entangled pairs shared between any two users . in this asymptotic case ,the same entanglement arises independent of the number of wirelengths separating two users . in eq.([generaleqn ] ) then becomes a function of only the length of a single wirelength .therefore because depends only on , the situation becomes equivalent to the classical case and the crossover at which the ring network becomes better than the star network is also at .this is because for a radial wirelength has the same length as a circumferential wirelength and therefore will be the same for states travsersing the star or ring networks . for the radial wirelengths are shorter than circumferential ones and so will be greater for states passing through a star network . for circumferential wirelengths are shorter and so the ring network gives a greater overall entanglement .in this scenario we suppose that only one entangled pair is provided to any one of the users and he may have to communicate with any of the other users through a bit - flip channel .this means that the two parties must distribute the single pair between themselves . in the case of a star networkone party sends one half of the pair to a central point and then on again to the other party . for a ring network the half of the pair would travel around the ring through other parties until it reached the destination party .this channel acts on states in the following way : we relate to the length of the channel by : for such states the maximum distillable entanglement is known to be where denotes the von neumann entropy of the state .one particle was kept at the originator while the other was sent to the other party , passing through one or more wirelength to reach it .fig.[oneonly ] shows and plotted against .we see that right from the start ( ) the ring layout is better .this is true for all values of the radius of the network .so we see a difference from the classical case where it was only at that the ring network became better than the star network .and the channel type is bit - flip.,width=384 ]in this scenario we consider each party in the network having one maximally entangled pair .each party then sends one particle of the pair to their neighbour through a bitflip channel .entanglement swapping is then used to create a link between any two parties who wish to share entanglement .the procedure of entanglement swapping produces a pair linking across the two original pairs with a fidelity given by : and the channel type is bit - flip.,width=384 ] for a bit - flip channel , the order in which pairs are connected by entanglement swapping does not matter .fig.[oneeach ] shows and plotted against .we see again that straight away the ring layout is better .in fact , due to the nature of the bit - flip channel , the resultant fidelities for this situation and the previous one where one pair is shared across the entire distance between the two communicating parties , turn out to be identical . 
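a sketch of one possible E(n, l) for this bit - flip scenario, which can be plugged into the averaging sketch given earlier, is shown below. the paper's exact relation between the flip probability and the channel length is not reproduced here, so p(l) = (1 - e^{-l})/2 is only an illustrative assumption ( it vanishes for short channels and saturates at 1/2 for long ones ); the distillable entanglement of the resulting rank - two bell - diagonal state is one minus its von neumann entropy, i.e. one minus the binary entropy of the effective flip probability.

```python
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bitflip_E(n, l):
    """1 - S(rho) for a bell pair whose travelling half crosses n bit-flip
    wirelengths of length l; p(l) = (1 - exp(-l))/2 is an assumed calibration,
    not the paper's relation between flip probability and distance."""
    p = (1 - math.exp(-l)) / 2
    p_total = (1 - (1 - 2 * p) ** n) / 2   # probability of an odd number of flips
    return 1 - binary_entropy(p_total)
```

with a different calibration of p against length the crossover behaviour changes, so this sketch is not expected to reproduce the paper's figures exactly.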
thus in the case dealt with in sections iv and v , it becomes apparent that the results of comparing network topologies for entanglement distribution can be _ very different _ from the classical case .in this section , we attempt to explain heuristically the difference in the results for the unlimited number of pairs ( where the ring network becomes better than the star network only after there are more than parties in the network ) , and the result for one pair ( where the ring network is always better than the star network ) . we will first give a general description which encompasses both the case for an unlimited number of pairs and that for a small number of pairs between parties .when a distillation procedure operates on a finite ensemble , we generally have outcomes of various degrees of entanglement with various probabilities . in these outcomes ,the entanglement is either pumped up or pumped down from the original values . to try to express a general pattern ,we restrict ourselves to distribution methods obeying the following assumptions : \1 .assume a general distillation protocol operating on arbitrarily sized ensembles ( finite or infinite ) where there is a probability that distillation in a wirelength is successful and boosts the entanglement to , being the distillable entanglement of the state .there is a probability that it fails and the entanglement is reduced to . in most casesthere will be more than two possible outcomes of a general distillation protocol , but for simplicity we assume just two outcomes .assume is quite near to maximal so that the entanglement swapping to connect adjacent wirelengths is near perfect .when adjacent pairs are connected using entanglement swapping we assume that the resultant entanglement is equal to the lower value of the two pairs .therefore when connecting wirelengths , the entanglement will be the lowest value of the wirelengths . using these rules , we can formulate the following expression for the average entanglement obtained over wirelengths : as we move from distributing a finite number of entangled pairs to an infinite number the s will tend to zero and , meaning the average entanglement will asymptotically tend to approach the distillable entanglement . to make it more clear , in the asymptotic case we have a unit probability ( ) conversion of a homogenous ensemble to an inhomogenous ensemble of maximally entangled and unentangled pairs with the fraction of maximally entangled pairs being .this fraction of maximally entangled pairs in each wirelength can now be connected with unit efficiency using entanglement swapping .asymptotic distillation essentially conserves the distillable entanglement and tiny fluctuations in the final entanglement tend to zero .if we put and in eq.([eqn : ong ] ) we are left with with an average of over different lengths and the merit of the star and the ring networks simply depends on the total wirelength . in the other extreme ( non asymptotic ) , the fluctuation is large ( say of the order ) and we compare with .this leads to the behaviour of ring always being better than the star .thus we have successfully interpolated between the case of distributing a finite number of entangled pairs to an infinite number . 
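the expression referred to above did not survive extraction; from the stated rules ( independent success with probability p on each wirelength, and a single failure dragging the swapped chain down to the lower value ) a natural reconstruction is E_av(n) = p^n E_up + (1 - p^n) E_down. a minimal sketch under that assumption:

```python
def heuristic_average_entanglement(n, p, E_up, E_down):
    """heuristic average entanglement over n connected wirelengths: the chain
    keeps the boosted value E_up only if distillation succeeds on every
    wirelength (probability p each), otherwise it drops to E_down.
    this is a reconstruction of the stripped formula from its verbal description."""
    return p ** n * E_up + (1 - p ** n) * E_down
```

in the asymptotic limit p -> 1 with E_up approaching the distillable entanglement, the n - dependence disappears, while for a single pair the comparison is governed by p versus p^n, as discussed in the text.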
note that in the above presentation , asymptotic distillation has been viewed in a rather different angle than usual .usually is interpreted as the probability of a successful distillation and the entanglement produced as a result of a successful distillation is maximal .we invert the interpretation of this same process as a method succeeding with probability and creating an amount of entanglement per initial impure pair in the form of maximally entangled pairs . in this latter ( maximally entangled ) form, the pairs can be connected in series without any loss of entanglement through entanglement swapping . in order to illustrate the meaning of the above heuristic approach with specific values of and s we consider an example - a watchedamplitude damped channel where we distribute the state , with two pairs between connected users to obtain a case somewhat intermediate between the previous examples .{12 } { \left| 00 \right\rangle } _ { e_1e_2 } \\ & + & e^{-d}\sqrt{1-e^{-2d } } { \left| 01 \right\rangle } _ { 12 } { \left| 10 \right\rangle } _ { e_1e_2 } \ \\ & + & e^{-d}\sqrt{1-e^{-2d } } { \left| 10 \right\rangle } _ { 12 } { \left| 01 \right\rangle } _ { e_1e_2 } \\&+ & ( 1-e^{-2d } ) { \left| 00 \right\rangle } _ { 12 } { \left| 11 \right\rangle } _ { e_1e_2 } ) { \end{array}}\ ] ] then , if the environment is being monitored , there is a probability of that the state observed is ( corresponds to the state of the environment ) where the subscript represents the fact that this state is a result of conditional evolution . is not maximally entangled and must now be purified .one method of doing this is to use the procrustean method .this has probability of producing a maximally entangled stated ( mes ) of twice the modulus squared of the lower coefficient .i.e. we combine the probability of observation with that of purification to give an expression for i.e. the successful concentration to an mes . before the entanglement was , now it is 1 .so so the expression for the average entanglement above becomes : with this expression for one pair having been distributed , the ring network is immediately better than the star network as before .in this paper , we have compared entanglement distribution between several users connected by star and ring network configurations .we have shown that the cross over point at which the ring network becomes better than the star network varies with the amount of resources . when the amount of resources is limited and can not be stored , so that the number of entangled pairs available across a channel at a time is finite , then the results for entanglement distribution can differ sharply from that for classical communications .we have given a heuristic explanation of this fact that it stems from the comparison of different powers of probabilities arising in the comparison of star and ring networks .however , in the asymptotic case ( which can physically arise when we can store particles for long and can process a large number of shared entangled pairs parallelly ) the relative merits of the star and the ring configurations are the same as classical .we have arrived at our conclusions by considering extreme examples . at one extreme is just one pair per wirelength and just one pair available to travel all the way from user to user . atthe other extreme is a very large number of pairs shared in parallel between each adjacent user ( or user and node ) in which asymptotic manipulations of entanglement are used . 
as we gradually increase the number of pairs available to be processed parallelly between users from to , we would expect the cross over point in to increase from to .this is because when more and more pairs can be stored , we have a greater chance of selectively connecting the higher entangled pairs in adjacent wirelengths .when the number of pairs becomes really large , this selective connection becomes really successful and gives just the entanglement in a single wirelength as the effective criterion for comparison . in the future, we intend to investigate explicit examples when an intermediate number of pairs are shared in parallel per wirelength to explore the transition from the non - asymptotic to the asymptotic case .ah thanks the uk epsrc ( engineering and physical sciences research council ) for financial support .plenio and v. vedral , cont . phys . *39 * , 431 ( 1998 ) ; a. zeilinger , phys .world * 11 * , 35 ( 1998 ) .a. k. ekert , phys .lett . * 67 * , 661 ( 1991 ) . c. h. bennett , g. brassard , c. crepeau , r. jozsa , a. peres and w. k. wootters , phys .lett . * 70 * , 1895 ( 1993 ) . c. h. bennett and s. j. wiesner , phys .lett . * 69 * , 2881 ( 1992 ) .k. mattle , h. weinfurter , p. g. kwiat and a. zeilinger , phys .lett . * 76 * , 4656 ( 1996 ) ; d. bouwmeester , j - w .pan , k. mattle , m. eibl , h. weinfurter , and a. zeilinger , nature ( london ) * 390 * , 575 ( 1997 ) ; d. boschi , s. branca , f. de martini , l. hardy and s. popescu , phys .80 * , 1121 ( 1998 ) ; a. furasawa , j.l.srensen , s.l .braunstein , c.a.fuchs , h.j.kimble and e.s.polzik , science * 282 * , 706 ( 1998 ) ; w. tittel , j. brendel , h. zbinden and n. gisin , phys .lett . * 84 * , 4737 ( 2000 ) ; t. jennewein , c. simon , g. weihs , h. weinfurter and a. zeilinger , phys . rev. lett . * 84 * , 4729 ( 2000 ) . c. h. bennett , g. brassard , s. popescu , b. schumacher , j. a. smolin and w. k. wootters , phys .lett . * 76 * , 722 ( 1996 ) ; d. deutsch , a. ekert , r. jozsa , c. macchiavello , s. popescu and a. sanpera , phys .* 77 * , 2818 ( 1996 ) .s. bose , v. vedral and p. l. knight , phys .a * 60 * , 194 ( 1999 ) ; l. hardy and d. d. song , phys .a * 62 * , 052315 ( 2000 ) ; b .- s . shi , y .- k . jiang and g .- c .guo , phys .a * 62 * , 054301 ( 2000 ) ; m. cinchetti and j. twamley , phys .a * 63 * , 052310 ( 2001 ) .pan , d. bouwmeester , h. weinfurter , and a. zeilinger , phys .80 * , 3891 ( 1998 ) ; p. g. kwiat , s. barraza - lopez , a. stefanov and n. gisin , nature * 409 * , 1014 ( 2001 ) ; j .- w .pan , c. simon , c. brukner and a. zeilinger , nature * 410 * , 1067 ( 2001 ) .lo and s. popescu , phys .a * 63 * , 022301 ( 2001 ) ; m. a. nielsen , phys .. lett . * 83 * , 436 ( 1999 ) ; g. vidal , phys .lett . * 83 * , 1046 ( 1999 ) ; d. jonathan and m. b. plenio , phys .* 83 * , 1455 ( 1999 ) ; l. hardy , phys .a * 60 * , 1912 ( 1999 ) ; d. jonathan and m. b. plenio , phys .lett . * 83 * , 3566 ( 1999 ) .
we investigate the differences between distributing entanglement using star and ring type network topologies . assuming symmetrically distributed users , we assess the relative merits of the two network topologies as a function of the number of users when the amount of resources and the type of the quantum channel are kept fixed . for limited resources , we find that the topology better suited for entanglement distribution could differ from that which is more suitable for classical communications .
the reconstruction of projected cluster mass maps from the observable image distortion of faint background galaxies due to the tidal gravitational field is a new and powerful technique . pioneered by kaiser & squires ( 1993 ) , this method has since been modified and generalized to account for ( a ) strong tidal fields in cluster centers ( schneider & seitz 1995 ; seitz & schneider 1995 ; kaiser 1995 ) ; ( b ) finite and in some cases , e.g. wfpc2 images very small data fields ( schneider 1995 ; kaiser et al . 1995 ; bartelmann 1995 ; seitz & schneider 1996 , 1998 ; lombardi & bertin 1998 ) ; and ( c ) the broad redshift distribution of background galaxies ( seitz & schneider 1997 ) . all of these are direct methods in the sense that a local estimate of the tidal field is derived from observed galaxy ellipticities , which is then inserted into an inversion equation to obtain an estimate of the surface mass density of the cluster .whereas these direct methods are computationally fast , can be treated as black - box routines , need only the observed ellipticities and a smoothing length as input data , and yield fair estimates of the surface mass density , their application has several drawbacks : * the data must be smoothed , and the smoothing scale is typically a free input parameter specified prior to the mass reconstruction .there are no objective criteria on how to set the smoothing scale , although some ad - hoc prescriptions for adapting it to the strength of the lensing signal have been given ( seitz et al. 1996 ) . in general , smoothing leads to an underestimate of the surface mass density in cluster centers or sub - condensations . *the quality of the reconstruction is hard to quantify .* constraints on the mass distribution from additional observables ( such as multiple images or giant arcs ) can not simultaneously be included .in particular , magnification information contained in the number density of background sources ( broadhurst et al .1995 ; fort et al .1997 ) or in the image sizes at fixed surface brightness ( bartelmann & narayan 1995 ) , can not be incorporated locally but only globally to break the mass - sheet degeneracy ( gorenstein et al .1988 ; schneider & seitz 1995 ) . 
to overcome these drawbacks , a different class of methods should be used .bartelmann et al .( 1996 , hereafter bnss ) developed a maximum - likelihood ( ml ) technique in which the values of the deflection potential at grid points are considered as free parameters .after averaging image ellipticities and sizes over grid cells , local estimates of shear and magnification are obtained .the deflection potential at the grid points is then determined such as to optimally reproduce the observed shear and magnification estimates .magnification information can be included this way .the smoothing scale in this method is given by the size of the grid cells , and can be chosen such that the overall of the fit is of order unity per degree of freedom .squires & kaiser ( 1996 ; hereafter sk ) suggested several inverse methods .their _ maximum probability method _ parameterizes the mass distribution of the cluster by a set of fourier modes .if the number of degrees of freedom ( here the number of fourier modes ) is large , the mass model tends to over - fit the data .this has to be avoided by regularizing the model , for which purpose sk impose a condition on the power spectrum of the fourier modes .sk s _ maximum - likelihood method _ specifies the surface mass density on a grid and uses the tikhonov - miller regularization ( press et al .1992 , sect .the smoothness of the mass reconstructions can be changed by varying the regularization parameter , which is chosen such as to give an overall per degree of freedom .bridle et al .( 1998 ) have recently proposed an entropy - regularized ml method in which the cluster mass map is parameterized by the surface mass density at grid points .this method allows to restrict the possible mass maps to such with non - negative surface mass density .this paper describes another variant of the ml method ( seitz 1997 , ph.d .the major differences to the previously mentioned inverse methods are the following : * the observational data ( e.g. the image ellipticities ) are not smoothed , but each individual ellipticity of a background galaxy is used in the likelihood function .whereas this modification complicates the implementation of the method , it allows larger spatial resolution for a given number of grid points , which is useful since the latter determines the computing time . *the number of grid points can be much larger than in bnss , and the likelihood function is regularized .this produces mass reconstructions of variable smoothness : mass maps are smooth where the data do not demand structure , but show sharp peaks where required by the data .the resulting spatially varying smoothing scale is a very desirable feature .fourier methods , such as sk s maximum probability method , have a spatially constant smoothing scale which is determined by the highest - order fourier components . they always need to compromise between providing sufficient resolution near mass peaks and avoiding over - fitting of the data in the outer parts of a cluster .* following bnss , we use the deflection potential to describe a cluster .this is an essential difference to bridle et al .( 1998 ) who used the surface mass density at grid points .as we shall discuss below , working with the deflection potential has substantial fundamental and practical advantages .we describe our method in sect . 2 , with details given in the appendix .we then apply the method to synthetic data sets in sects . 
3 & 4 to demonstrate its accuracy .in particular , we compare the performance of our ml method to that of direct methods .the results are then discussed in sect . 4 , and conclusions are given in sect . 5 , where we also discuss further generalizations of the method for , e.g. , including constraints from strong lensing features .for simplicity , we assume throughout the paper that all background galaxies are located at the same redshift . a generalization of our technique to a redshift distributionis given by geiger & schneider ( 1998 ) .the dimensionless surface mass density is related to the deflection potential through the poisson equation , where indices preceded by a comma denote partial derivatives with respect to .the tidal gravitational field of the lens is described by the shear with the two components thus , the surface mass density and the shear , which determine the local properties of the lens mapping , can _ locally _ be obtained from the deflection potential . in contrast , the relation between shear and surface mass density is highly non - local , with . in particular , needs to be given on the entire two - dimensional plane .prescribing on a finite field does therefore not completely specify the shear inside the field , because the latter is also affected by the outside mass distribution .we return to this point further below .the local magnification is }^2-{\left|\gamma({\vec}\theta)\right|}^2 \right\}}^{-1}\;. \label{eq:2.4}\ ] ] the local lens equation relates the ellipticities of a source and its image .we use the complex ellipticity parameter ( see blandford et al .1991 ) to describe image shapes .it is generally defined in terms of the tensor of second brightness moments of an image by .we refer the reader to press et al .( 1992 , chap .18 ) for the basic ingredients of the ml method ; see also bridle et al .we do not repeat the basics here , but describe the application of the method to the cluster mass reconstruction .we start by considering image ellipticities only ; magnification effects will be discussed later .let , , denote the complex ellipticities of galaxy images in the data field , which we assume to be a rectangle of side lengths and .we cover the data field with an equidistant grid of points , with in the lower left corner of the data field .the cluster is described by the deflection potential at the grid points , , , . as discussed in bnss , the grid for is larger than the data field by one column or row of grid points in all four directions to allow simple finite differencing of on the whole field .having found and on all grid points from according to ( [ eq:2.1 ] ) and ( [ eq:2.2 ] ) , and are bilinearly interpolated to all galaxy positions . if the isotropic probability distribution of the intrinsic source ellipticities is given , the probability distribution of the image ellipticities can be predicted . the likelihood functionis then defined as , or , can be maximized with respect to the set of values of the deflection potential at the grid points . since the values of and are obtained from second derivatives of , a constant and a term linear in added to leave unchanged .in addition , the mass - sheet degeneracy renders invariant under the transformation where is an arbitrary parameter ( schneider & seitz 1995 ) .therefore , in maximizing with respect to , the potential can be held fixed at four grid points . 
noting that the corners of the grid are not used for the calculation of and on ( see appendix ), we see that the maximization of has dimension .provided is not much smaller than the number of galaxies ( which we assume in the following ) , the maximization results in a cluster model which tries to follow closely the noise pattern of the data . disregarding observational effects , the noise is due to the intrinsic ellipticity distribution of the sources .the reconstructed mass distribution will therefore have pronounced small - scale structure , fitting the observed image ellipticities as closely as possible , and having a per degree of freedom much smaller than unity . in order to prevent such over - fitting of the data ,we need to augment by a regularization term . instead of maximizing ,we minimize where is a function of the potential that disfavors strong small - scale fluctuations .the parameter determines how much weight should be attached to smoothness .one can vary such that the resulting reconstruction has approximately the expected deviation from the data , viz . per degree of freedom .larger values of yield mass distributions which are too smooth to fit the data , lower values of cause over - fitting .we experimented with quite a large number of regularization terms .for example , we chose as the sum of over all grid points .mass reconstructions from synthetic data ( see sect . 3 below ) then showed a strong tendency to decrease too slowly towards the outer parts of the cluster , for that regularization preferred to be as flat as possible .regularizations including higher - order derivatives of ( see press et al . 1992 ,18.5 ) led to similar artifacts .thus , such local linear regularizations were dismissed as unsatisfactory .motivated by the success of the maximum - entropy ( me ) image deconvolution ( e.g. narayan & nityananda 1986 ; lucy 1994 ) , we consider instead me regularizations of the form where }^{-1}\,\kappa_{ij } \label{eq:2.9}\ ] ] is the normalized surface mass density at the grid points , and is a similarly normalized prior distribution ( see press et al .1992 , sect .18.7 , for a detailed discussion of the me method ) .we experimented with different choices for the prior .on the whole , a uniform prior , , performed satisfactorily , but tended to smooth mass peaks more than desired .following lucy ( 1994 ) , we therefore use a prior which is determined by the data itself . deferring details to sect . 3 , we note here that one can use the mass distribution obtained from a direct ( finite field ) reconstruction method as an initial prior , then iteratively minimize , and after several iterations use a smoothed version of the current mass distribution as a new prior .lucy ( 1994 ) showed that such a moving prior yields more accurate reconstructions than a constant prior .the regularization parameter can be iteratively adjusted to provide the expected goodness - of - fit .note that the me regularization ensures that the reconstructed surface mass distribution is positive definite .given the intrinsic ellipticity distribution and the local values of and , the probability distribution for the image ellipticities can be calculated . however , the resulting analytic expressions are quite cumbersome and unsuitable for the high - dimensional minimization problem considered here . 
with the intrinsic distributionnot being accurately known anyway , a precise expression of is not needed .we therefore approximate the image ellipticity distribution by a gaussian , with mean and dispersion .both these values depend on the ( local ) distortion , where is the reduced shear .mean and dispersion can be approximated by ( schneider & seitz 1995 ) }\delta\;,\nonumber\\ \sigma_\chi & = & \sigma_0{\left(1-{\left|\delta\right|}^2\right)}^{\mu_2}\ ; , \label{eq:2.11}\end{aligned}\ ] ] where and is the -th moment of the intrinsic ellipticity distribution . using ( [ eq:2.5 ] ) and ( [ eq:2.7 ] ) , the function to be minimizedcan then be written } + \eta\,{{\cal r}}{\left({\left\{\psi\right\}}\right)}\ ; , \label{eq:2.13}\ ] ] where we have introduced and are the values of and at the position of the -th galaxy , which obviously depend on the deflection potential through the distortion .( see bbns ) rather than the ellipticity , then ( seitz & schneider 1997 ) for a non - critical cluster . in the general case including critical clusters , the variable is more convenient . ]the term in has the form of a -function , which implies that an acceptable mass model should have .this condition constrains the regularization parameter .we outline in the appendix an efficient method for calculating and its derivatives with respect to the values of on the grid points .we note that the terms in the sums of ( [ eq:2.13 ] ) and ( [ eq:2.14 ] ) corresponding to a galaxy at depend only on the values of at neighboring grid points . in that sense ,our method is local .had we parameterized the cluster by at grid points , each term in the sums depended on at all grid points , as can be seen from ( [ eq:2.3 ] ) .it becomes obvious that the description in terms of requires much less computer time .in addition , the use of rather than is strongly disfavored by the fact that the shear on is incompletely specified by on the same field .bridle et al.(1998 ) attack this problem by performing the reconstruction on a region much larger than .they find that the shear information within yields information about outside .although this is true , the information on the mass outside the data field is _ very _ limited : for a circular data field , the shear inside caused by mass outside can be fully described by conveniently defined multipole moments of the mass distribution outside ( schneider & bartelmann 1997 ) , and there are infinitely many mass distributions for which all of those multipole moments agree .for instance , a point mass located just outside the data field produces the same shear pattern in the field as a spherically symmetric mass distribution with the same total mass . 
finally , extending the region on which the reconstruction is performed increases the dimensionality of the minimization problem .the mass - sheet degeneracy ( [ eq:2.6 ] ) can be lifted if the lens magnification can be estimated .three different methods for measuring magnification were suggested in the literature .broadhurst et al.(1995 ) proposed to use the magnification bias , which changes the local number density of background galaxies due to the magnification , provided the slope of the number counts is sufficiently different from .noting that lensing magnifies objects but leaves their surface brightness unchanged , bartelmann & narayan ( 1995 ) suggested that the sizes of background galaxies at fixed surface brightness could be a convenient measure for the magnification after calibrating with field galaxies .both methods can be used locally or globally . in the first case ,the local magnification information is used for the mass reconstruction , whereas in the latter case , the transformation parameter in ( [ eq:2.6 ] ) is adjusted until the magnification optimally matches the observational estimate .kolatt & bartelmann ( 1998 ) suggested to calibrate globally by using type - ia supernovae as cosmological standard candles .if magnification information is taken into account , the potential can be kept fixed at three points only , yielding . as an example , consider the method suggested by bartelmann & narayan ( 1995 ) .let and be the ratios of the linear sizes of a galaxy and its image , respectively , relative to the mean size of galaxies with the same surface brightness .they are related by .the expectation values of and are and , respectively .hence , if is the probability distribution for , the local probability distribution for is including the size distribution into the likelihood maximization leads to an additional term in ( [ eq:2.13 ] ) , } + \eta\,{{\cal r}}{\left({\left\{\psi\right\}}\right)}\ ; , \label{eq:2.16}\end{aligned}\ ] ] where we assume to be a log - normal distribution ( cf .bartelmann & narayan 1995 ) , }\ ; , \label{eq:2.18}\ ] ] with , and ( [ eq:2.17 ] ) becomes }{\ln r(k)-{\left\langle\ln r\right\rangle}(k)}^2 \over \sigma_r^2}\ ; , \label{eq:2.19}\ ] ] with .hence , is a gaussian in , with dispersion . has the form of a function in , which motivates us to include the factor in the definition of .a satisfactory mass model should have .note that the galaxies whose size can reliably be measured need not be those used for measuring the shear .it is just for notational simplicity that we assume that the two galaxy populations agree .for minimizing ( [ eq:2.13 ] ) or ( [ eq:2.16 ] ) in the process of the ml mass reconstruction , the various quantities ( e.g. , , ) need to be calculated for each set of values of the deflection potential .we outline in the appendix how this can be done efficiently . in order to quickly approach the minimum of , we use derivative information in the minimization procedure .the derivative of with respect to is also given in the appendix .we employ the conjugate gradient method as encoded in the routine frprmn by press et al .( 1992 ) , with line minimization using derivative information .we need a good initial potential to start the minimization .we tested two different approaches .the first starts with a relatively small grid ( say , for a quadratic data field ) , and a potential which corresponds to a constant surface mass density of , say . 
can be set to zero initially because the large grid cells provide sufficient smoothing to avoid over - fitting the data , or otherwise set to a small value with a constant prior .after several ( 20 , say ) iterations , the current mass map is smoothed and used as the new prior , and the minimization is continued .the regularization parameter is slightly increased or decreased , depending on whether is smaller or larger than unity .once a stable minimum with is obtained , the solution can be interpolated to a finer grid ( , say ) and the minimization can continue , adapting the prior and the regularization parameter as described before .this procedure can be repeated if desired .the second approach starts on a fine grid right away , with initial conditions obtained from a direct finite - field reconstruction method like that described by seitz & schneider ( 1998 ) . from this mass distribution, an approximate deflection potential can be found by integration , although any contribution from the mass outside the data field will be missing in the resulting .this initial prior is a smoothed version of the mass map from the direct reconstruction , and is adapted as described above .if there is no mass concentration directly outside the data field , the second method converges faster , while the first approach should be used if the data field does not encompass most of the mass concentration .of course , both methods finally approach solutions with . if magnification information is available , should finally approximate unity , which provides a useful consistency check of the result .we carry out simulations in which background galaxies are lensed by the mass model shown in fig .[ fig:1 ] .it consists of two softened isothermal spheres , with parameters chosen such that the lens is sub - critical .we successfully performed simulations with critical clusters as well , but concentrate on the non - critical case to simplify comparisons with the noise - filter reconstructions of seitz & schneider ( 1996 ) .the modifications to the practical implementation necessary for critical clusters are given in the appendix .the data field is a square with side length .we choose a number density of 50 galaxies per square arc minute , approximately corresponding to the number density which can be achieved with several hours exposure time at a four - meter class telescope in good seeing .all reconstructions were performed on a grid with .note that the parameters are chosen such that the grid cells have about the same size as the mean separation between galaxies .the intrinsic ellipticity distribution was chosen to be approximately gaussian , }\ , { \rm exp}(-{\left|\chi^{{\rm s}}\right|}^2/ r^2)\ ; , \label{eq:3.1}\ ] ] with , yielding , , and in ( [ eq:2.11 ] ) and ( [ eq:2.12 ] ) .the distribution of relative source sizes is characterized by ( see bartelmann & narayan 1995 for a discussion of this choice ) .the `` observed '' ellipticities and relative source sizes are calculated from realizations of the source distributions , using the gravitational lens equations . 
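a simple way to realize the intrinsic ellipticity distribution of eq. ([eq:3.1]) numerically is rejection sampling, as sketched below. the width parameter is left free because its numerical value is not legible in the extracted text, and the random seed is arbitrary:

```python
import numpy as np

def sample_intrinsic_ellipticities(n, r_scale, seed=0):
    """draw n complex intrinsic ellipticities chi_s from a density
    proportional to exp(-|chi_s|**2 / r_scale**2), restricted to |chi_s| < 1,
    by rejection sampling from the uniform distribution on the unit square."""
    rng = np.random.default_rng(seed)
    out = np.empty(n, dtype=complex)
    filled = 0
    while filled < n:
        cand = rng.uniform(-1.0, 1.0, n) + 1j * rng.uniform(-1.0, 1.0, n)
        accept = (np.abs(cand) < 1.0) & (rng.uniform(size=n) < np.exp(-np.abs(cand) ** 2 / r_scale ** 2))
        take = cand[accept][: n - filled]
        out[filled:filled + take.size] = take
        filled += take.size
    return out
```

the accepted samples can then be mapped to "observed" image ellipticities with the adopted source-to-image transformation and the local reduced shear of the input model.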
in order to assess the expected deviation of and from unity , we plot in fig .[ fig:2 ] the quantities }{\ln r(k)-{\left\langle\ln r\right\rangle}(k)}^2 \over \sigma_r^2\right\rangle}_{{\left|{\vec}\theta_k\right|}<\beta } \;,\nonumber\\ g_r(\beta ) & : = & { \left\langle{{\left[}\right]}{\ln r(k)-{\left\langle\ln r\right\rangle}(k)}^2 \over \sigma_r^2\right\rangle}_{\beta-\delta\beta \le{\left|{\vec}\theta_k\right|}<\beta } \;.\label{eq:3.2}\end{aligned}\ ] ] they are the contributions to and from galaxies closer than to the center of the data field , or within rings of width around the center of the data field . fig .[ fig:2 ] shows that the quantities ( [ eq:3.2 ] ) vary considerably between realizations , owing to the broad distribution of intrinsic source properties . when averaged over 50 realizations ( figs .[ fig:2]d , e ) , their mean values are very close to unity , and their 1- variations are a good indicator for the expected values in true reconstructions .even when the exact deflection potential is used , the resulting mass distribution will deviate from the true surface mass density because is calculated from with finite differencing . for the mass distribution shown in fig .[ fig:1 ] , from finite differencing deviates from the true by everywhere , with the largest deviations occurring at the two mass peaks .the grid is too coarse for a more accurate calculation of the second - order derivatives .since the deviations are sufficiently small ( i.e. much smaller than the expected accuracy that we can hope to achieve from our reconstruction ) , we have chosen not to further refine the grid .it should be noted that the method by bridle et al .( 1998 ) suffers from the same , or worse , inaccuracies , because there the shear is calculated from the surface mass density by integrating over a coarse grid .finally , bridle et al .( 1998 ) calculated the covariance matrix for the resulting mass distribution , which we do not repeat here .one must take into account , though , that the error estimates of the resulting mass reconstruction are strongly correlated , because the shear depends non - locally on .we first neglect magnification information , i.e. we minimize ( [ eq:2.13 ] ) . for the mass model in fig .[ fig:1 ] , mass reconstructions for 50 realizations of the galaxy population were performed . for each realization ,1000 iterations were taken , in each case using the noise - filter reconstruction of seitz & schneider ( 1996 ) as initial potential .the number of iterations was chosen to produce a stable result for all realizations .the actually required number can be substantially smaller in individual cases .typically , 1000 iterations take approximately 30 minutes on an ibm 590 workstation . after every 20 iterations ,the prior was changed to the current mass distribution , smoothed by a gaussian of width for the first 600 iteration steps , and afterwards .a somewhat larger smoothing length at the beginning leads to faster convergence , but produces artifacts due the finite region over which the integral in ( [ eq:2.3 ] ) is performed .note that is of the same size as the grid cells . 
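the maximum-entropy regularization of sect. 2 and the moving-prior iteration just described can be sketched together as follows. here `minimize_steps` stands for a few conjugate-gradient iterations of the regularized objective and `surface_mass_density` for the finite-difference evaluation of kappa from psi (as in the earlier sketch); both callables, the epsilon guard and the smoothing lengths (given in grid units) are conveniences of this illustration, not part of the published implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropy_term(kappa, prior, eps=1e-12):
    """maximum-entropy regularization R = sum_ij k_tilde * ln(k_tilde / p_tilde),
    where k_tilde and p_tilde are the mass map and the prior, each normalized
    to unit sum over the grid points; eps only guards against log(0)."""
    k_tilde = kappa / (kappa.sum() + eps)
    p_tilde = prior / (prior.sum() + eps)
    return float(np.sum(k_tilde * np.log((k_tilde + eps) / (p_tilde + eps))))

def reconstruct(psi0, prior0, eta, minimize_steps, surface_mass_density,
                n_iter=1000, refresh=20, sigma_early=2.0, sigma_late=1.0):
    """outer loop of the reconstruction: run `refresh` minimization steps of
    L = -ln(likelihood) + eta * entropy_term(kappa, prior), then replace the
    prior by a gaussian-smoothed version of the current mass map, using a
    larger smoothing length for the first 600 steps and a smaller one
    afterwards.  eta can additionally be nudged up or down depending on
    whether the chi-square per galaxy comes out below or above unity."""
    psi, prior = psi0.copy(), prior0.copy()
    for it in range(0, n_iter, refresh):
        psi = minimize_steps(psi, prior, eta, refresh)
        kappa = surface_mass_density(psi)
        sigma = sigma_early if it < 600 else sigma_late
        prior = gaussian_filter(kappa, sigma=sigma)
    return psi
```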
in order to avoid excessive fine - tuning for these simulations ,the regularization parameter was fixed to for all 50 realizations .of course could be changed to finally achieve for each individual realization .the current choice of was made to achieve for all realizations ( see fig .[ fig:3 ] below ) .the final 20 iteration steps were performed with a value of large enough to make the mass distribution follow the prior very closely , and which yields a smoothing of the mass reconstruction on a scale of .since no magnification information was used in these simulations , the resulting deflection potential ( and thus the mass distribution ) is determined only up to the mass - sheet transformation ( [ eq:2.6 ] ) . in order to compare the reconstructions with the input model , each mass mapwas transformed such that the total mass inside agreed with the true total mass . in fig .[ fig:3 ] , we show the quantities ( [ eq:3.2 ] ) for three different realizations of the galaxy population , together with the ratios of the reconstructed mass inside rings and circles , relative to the true mass distribution .as can be seen , these mass ratios are always very close to unity , which means that the mass maps are reconstructed with high accuracy ( up to the mass - sheet degeneracy ) .the dispersion of about unity is less than 5% , and the mean of over 50 realizations is astonishingly flat .there is no indication that the mass at the center or in the outer parts is systematically over- or underestimated .the choice of the regularization parameter results in being slightly smaller than unity on average , though with substantial variation from case to case .evidently , is significantly smaller in the inner part than in the outer part , an effect also seen in fig .[ fig:2 ] where the true mass distribution was considered .this is due to the fact that the ellipticity distribution after lensing is not really a gaussian , and the deviation from this assumed functional form becomes larger for larger values of the reduced shear , i.e. closer to the center of . whereas there is no fundamental difficulty in replacing the gaussian with a more accurate probability distribution ,the simple form for is computationally convenient and seems to be sufficiently accurate for the mass reconstructions , as seen from the dash - dotted curves in fig .[ fig:3 ] .the deviation of from unity can be substantial in individual reconstructions , but the mean over all realizations is very close to unity . at the edge of the data field , two systematic effects become visible in the mean quantities plotted in figs .[ fig:3]d f ( and also by looking at the 2-d distribution over ) : the value of shows a small but significant decrease , and is slightly too large near the boundary .the first of these effects can be understood by considering the number of galaxies for which the shear estimate is affected if the value of is changed at a grid point . 
if that grid point is located in the inner part of , the estimates of for galaxies within the neighboring 16 grid cells are affected .this number decreases for points near the boundary of , so that there is less constrained by the measured image ellipticities .this implies that at the boundary , it is easier to `` fit the noise '' caused by the intrinsic ellipticity distribution .the slightly too large near the boundary is due to the prior .the prior is obtained from local averaging of the current mass distribution .if the mass distribution decreases outwards , the local mean value which can only be taken from the grid points within will be slightly too large at the boundary , which explains why the method presented here is slightly biasing the mass map at the boundary . in actual applications ,a strip of width can be ignored in the analysis of the mass distribution if this bias is a worry . however , its amplitude is very small , and it can probably be safely ignored in most situations compared to the stochastic errors .alternatively , one can use a mild extrapolation of the smoothed mass distribution to obtain an estimate of the smoothed values on the boundary less affected by this bias . in the case of our mass model , a simple fix ( instead of a more elaborated extrapolation )can be obtained by decreasing by on all boundary points , which practically eliminates the bias .we use this simple fix to the simulations discussed in the next subsection .we further point out that the amplitude of this mass bias can also be checked with real data , by generating artificial data sets from the reconstructed mass distribution and by performing reconstructions for those in the same way in which the original mass reconstruction was obtained .the mass - sheet transformation ( [ eq:2.6 ] ) allows to determine the quantity ] , and by , where and are two small quantities .these replacements leave finite if a galaxy image is placed on a critical curve .the minimization then proceeds by setting and to about 0.1 at the beginning of the minimization , and then slowly decreasing them in later iteration steps .this leads to convergence without additional problems .in addition , as was true for the direct inversions , by considering a broad redshift distribution of the background galaxies ( seitz & schneider 1997 ) , the singularities connected with critical curves are avoided ( geiger & schneider 1998 ) .abramowitz , m. , stegun , i. 1984 , `` handbook of mathematical functions '' , harri deutsch verlag bartelmann , m. 1995 , a&a , 303 , 643 bartelmann , m. , narayan , r. 1995 , apj , 451 , 60 bartelmann , m. , narayan , r. , seitz , s. , schneider , p. 1996, apj , 464 , l115 ( bnss ) blandford , r.d ., saust , a.b . , brainerd , t.g . ,villumsen , j.v .1991 , mnras 251 , 600 bridle , s.l . , hobson , m.p ., lasenby , a.n . , saunders , r. , 1998 , astro - ph/9802159 broadhurst , t.j . ,taylor , a.n . peacock , j.a . 1995 ,apj , 438 , 49 colley , w.n . , tyson , j.a . ,turner , e.l .1996 , apj 461 , l83 fort , b. , mellier , y. , dantel - fort , m. 1997 , a&a , 321 , 353 geiger , b. , schneider , p. 1998 , in preparation gorenstein , m.v ., falco , e.e . , shapiro , i.i . 1988 , apj , 327 , 693 kaiser , n. , squires , g. 1993 , apj , 404 , 441 kaiser , n. , squires , g. , fahlmann , g.g ., woods , d. , broadhurst , t. 1994 , astro - ph/9411029 kaiser , n. 1995 , apj , 493 , l1 kaiser , n. , squires , g. , broadhurst , t. 1995 , apj,449 , 460 kassiola , a. , kovner , i. , fort , b. 
1992 , apj , 400 , 41 kneib , j .- p . , ellis , r.s . , smail , i. , couch , w.j ., sharples , r.m .1996 , apj 471 , 643 kolatt , t.s . , bartelmann , m. 1998 , mnras , in press lombardi , m. , bertin , g. 1998 , astro - ph/9801244 lucy , l. 1994 , a&a 289 , 983 narayan , r. , nityananda , r. 1986 , ara&a 24 , 127 natarayan , p. , kneib , j .-p . , smail , i. , ellis , r.s .1997 , astro - ph/9706129 press , w.h . ,teukolsky , s.a . , vetterling , w.t . , flannery , b.p .1992 , numerical recipes .cambridge ( cambridge university press ) schneider , p. 1995, a&a , 302 , 639 schneider , p. , bartelmann , m. 1997 , mnras 286 , 673 schneider , p. , seitz , c. 1995 , a&a , 294 , 411 seitz , c. , schneider , p. 1995, a&a , 297 , 287 seitz , c. , kneib , j.p . , schneider , p. , seitz , s. 1996 a&a , 314 , 707 seitz , c. , schneider , p. 1997, a&a , 318 , 617 seitz , s. , schneider , p. 1996 , a&a , 305 , 383 seitz , s. 1997 , `` untersuchungen zum schwachen linseneffekt auf quasare und galaxien '' .dissertation ( in german ) , ludwig - maximilians - universitt mnchen .seitz , s. , saglia , r.p ., bender , r. , hopp , u. , belloni , p. , ziegler , b. 1998 , mnras , in press seitz , s. , schneider , p. 1998, a&a submitted , astro - ph/9802051 squires , g. , kaiser , n. 1996 , apj , 473 , 65 ( sk )
we present a new method for reconstructing two-dimensional mass maps of galaxy clusters from the image distortions of background galaxies. in contrast to most previous approaches, which directly convert locally averaged image ellipticities into mass maps (direct methods), our entropy-regularized maximum-likelihood method is an inverse approach. although somewhat more expensive computationally, it allows high spatial resolution in those parts of the cluster where the lensing signal is strong enough, and it lets additional constraints, such as magnification information or strong-lensing features, be incorporated straightforwardly. using synthetic data, we compare the new approach to direct methods and indeed find a substantial improvement, especially in the reconstruction of mass peaks. the main differences from previously published inverse methods are discussed.
we , at both the individual and societal levels , have to constantly make decisions on how we should distribute our limited resources and time .we need to make choices as to who to hire , elect , buy from , get information from , award grants to , or make friends with . in this competitive landscape ,each candidate touts a resume highlighting _ experience _ a more easily quantifiable metric that summarizes past achievements , e.g. , the total number of clients a service provider has served , or the years a prospective employee has spent at similar jobs, and _ talent _ or inherent _ fitness _ a more subjective metric that indicates how well the candidates might perform in the future , e.g. , especial pedigree or degree from a prestigious college , or knowledge of a brand new technology , or an articulation of an ideal that captures the imagination .how we strike a balance between entitlement / experience and fitness / potential is a key determining factor in how wealth and power get distributed in a society , and how nimble it is in adapting to changes .too much emphasis on experience alone could lead to an ossified social structure that lacks innovation and can collapse dramatically when confronted with change ; the world history is littered with numerous instances of failed societies who had chosen such a path .the opposite extreme of letting only promising upstarts rule , can equally easily lead to a state of anarchy with no dominant institutions to hold the society together ; the frequent failures of well - intentioned revolutions that supplant existing institutions en masse and make fresh starts , provide eloquent testimonies to the perils of such a path .a society - wide quantitative study of how the experience vs. talent question is resolved , however , has been difficult to perform because of the obvious lack of concrete data .the world wide web ( www ) provides a unique opportunity in this regard .it has emerged as a symbiotic socioeconomic entity , enabling new forms of commerce and social intercourse , while being constantly updated and modified by the activities that it itself enables . given the web s organic nature , its evolution , structure , andinformation dynamics should reflect many of the same dynamics that underlie its real - world counterparts , i.e. , our social and economic institutions .thus , we ask how does this thriving cyber - society deal with the experience vs. talent issue , and how this interplay influences its own structure .the unprecedented scale and transparency of the activities on the web can provide data that hitherto has been unavailable .the web is typically modeled as an evolving network whose nodes are web pages and whose edges are url links or hyperlinks . a web page s in - degree ( i.e. , the number of other pages that provide links to it ) is a good approximation to its ability to compete , since heavily linked web documents are entitled to numerous benefits , such as being easier to find via random browsing , being possibly ranked higher in search engine results , attracting higher traffic and , thus , higher revenue through online advertisements .thus _ the degree of a node _ can be considered _ as a proxy of its experience _ , and it is a reflection of its entitlement , status and accomplishments to date . in fact ,motivated by a power - law ( pl ) distribution of the degree of nodes in the web graph ( i.e. 
, , where denotes node degree and is the power law ( pl ) exponent ) , the principle of preferential attachment ( pa ) , known to sociologists and economists for decades ( e.g. , as the `` cumulative advantage '' or the `` rich gets richer '' principle ) , was proposed as a dominant dynamic in the web .note that modeled web growth in terms of growth in the sizes of web sites / domains , which is identical to the model used by willis and yule in 1922 to explain the pl in the sizes of the genus .however , as shown in , the yule s model and the simon s model are equivalent to each other , and both rely on the cumulative advantage principle .hence , we refer to both the models introduced in and in as the pa model .alternate local dynamical models of the web , e.g. , via copying of links ( again , inspired by analogous social dynamics , such as referral services ) , account for additional characteristics of the web graph , such as high clustering coefficients and bipartite clique communities , while still retaining the global pa mechanism .the pa or equivalent models , however , imply that the scale is heavily tilted towards experience : the more experienced or older a page is , the more resources it will get and the more dominant it will become . for example , pa predicts that almost all nodes with high in - degree are old nodes ( disallowing newcomers to catch up ) , and that the degree distribution of pages introduced at the same time will be an exponential one , with very low variance .this extreme bias of the model was quickly realized and presented empirical data showing that the degree distribution of nodes of the same age has a very high variance ; they also introduced a fitness or talent parameter allowing different domains to grow at different rates to theoretically account for the high variance .this also prompted a number of researchers to propose and explore the `` preferential attachment with fitness '' dynamical model in which a node acquires a new link with probability proportional to , the product of its current number of links and its intrinsic _ fitness or talent _ .in such a linear fitness model , the degree distribution and the structure of the resulting network _ depends on the distribution of the talent parameter _, , and thus , without any knowledge of the exact distribution , one can not quite say how exactly the talent vs. experience issue gets played out in the system .for example , a uniform distribution of talent has a very different implication than say an exponential distribution .moreover , a significant potential dynamic that has not been studied in the context of the web is the death or deletion dynamic , which is dominant in most societal settings , where institutions and individuals cease to operate .the deletion dynamic , however , has been studied in the context of other networks , and a surprisingfinding is that the heavy - tailed degree distribution disappears in the straight pa model under significant deletion. this prompted us to ask _ data - driven questions _, such as : how dominant is the churn or deletion dynamic in the web ? 
can a pa model with fitness preserve the heavy tail even in the presence of high deletion rates ?can one empirically verify that the proposed models are _ truly at work _ in the web ?can one empirically estimate the relative fitness of a significant number of pages on the web and quantify the distribution of talent on the web ?most interesting of all , how often can talents overtake the more experienced individuals and emerge as the _ winner _ ? such issues , while have been partially theorized about , have not been empirically studied and validated .+ + * brief summary of findings . * using web crawls that span the period of one year ( i.e. , 13 separate crawls , at one month interval ) , we tracked both the death and the growth processes of the web pages .in particular , we tracked 17.5 thousand web hosts , via monthly crawls , with each crawl containing in excess of 22 million pages ( see _ materials and methods _ ) .first , we discovered that there is a high turnover rate , and for every page created on the web , our conservative numerical estimates show that at least around pages are deleted ( see _ results _ and supporting information ) .this is a significant enough deletion rate that it prompted us to analyze a theoretical model that integrates the deletion process with the fitness - based preferential attachment dynamics ( see _ materials and methods _ ) .previous models of the web had neglected the death dynamic ; recent results , however , show that even a relatively low - grade deletion dynamic could alter network characteristics considerably .given the distribution of fitness , our model can predict the overall degree distribution and the degree distribution of nodes with similar age . the empirical crawl data is then used to estimate the parameters of the model .this allows us to validate for the first time whether detailed time - domain data is consistent with the predictions of the theoretical model .one of the most important assumptions of the model is that each page can be assigned a constant fitness ( which can vary from page to page ) that determines the rate at which it will accumulate hyperlinks .we perform an estimation of the fitness factor for each month , and show that for the period of the crawls , the data do not reject the hypothesis that each page has a constant fitness ( see supporting information ) .a further verification of the model is obtained by validating one of its most direct implications . in particular, the dynamical model predicts that the accumulated in - degree ( i.e. , counting all hyperlinks , including those made by pages that get deleted during our study period ) of a page grows as a power - law .we find that for a vast majority of pages that show any growth , the degree - vs - time plots in the - scale have linear fits with correlation coefficients in excess of ( see the _ results _ section ) .the slope of the linear fit is an affine function of the fitness of the page .the robust estimation of the fitness factors of individual pages allows us to determine the overall distribution .we find the fitness on the web to be exponentially distributed ( i.e. , see figure [ fig : web_fit_log](a ) ) , with a truncation . when inserted into our analytical model , this _ exponential fitness distribution correctly predicts _ the power - law degree distribution empirically observed in the overall web as well as for the set of nodes with similar age . 
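the fitting procedure used to obtain these growth exponents, described in more detail in the results section below, is essentially an ordinary least-squares fit in log-log space, keeping only pages whose fit has a sufficiently high correlation coefficient (0.8 in the text). a minimal sketch, assuming the accumulated in-degrees are stored as a pages-by-months array with positive entries and months counted from 1; the function and array names are choices of this sketch:

```python
import numpy as np
from scipy.stats import pearsonr

def growth_exponents(acc_degree, min_corr=0.8):
    """estimate the growth exponent of every page by an ordinary
    least-squares fit of log(accumulated in-degree) versus log(month index),
    and keep only pages whose fit has a pearson correlation coefficient of
    at least min_corr.  acc_degree is an (n_pages, n_months) array of
    accumulated in-degrees, assumed positive for the pages passed in."""
    n_pages, n_months = acc_degree.shape
    log_t = np.log(np.arange(1, n_months + 1))
    exponents = []
    for row in acc_degree:
        log_k = np.log(row)
        slope, _ = np.polyfit(log_t, log_k, 1)
        corr, _ = pearsonr(log_t, log_k)
        if corr >= min_corr:
            exponents.append(slope)
    return np.array(exponents)
```

a histogram of the returned exponents on log-linear axes then reveals the truncated exponential form reported above.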
for pages with similar age ,the initial exponential distribution of fitness gets amplified by the pa mechanism , and as a result , the degree distribution of pages of the same age is a pl distribution with exponent , i.e. , with high variance .moreover , the truncated exponential distribution of fitness is one of the few distributions that would generate a constant pl exponent in the overall degree distribution , even as the turnover rate approaches unity ( i.e. , as many pages are deleted as created on the average ) .the empirical data agrees with this prediction and the pl degree distribution retains a constant low - magnitude exponent throughout the period of our study ( see the _ results _ section ) even though the deletion rate of pages remains high .thus the fitness distribution of the pages helps in preserving the heavy - tailed scale - free overall degree distribution of the web .the sequential time - sampled data helps us in better understanding the interplay between experience and talent ( fitness ) .for example , the _ initial in - degree of a page _( i.e. , in june 2006 ) is a measure of its _ experience _ , and _ the accumulated final in - degree _( i.e. , in june 2007 ) is a measure of _ how it fared _ based on its fitness and its experience .we _ define a page to be a winner _ if its final degree exceeds a specific desired target , while starting with an initial degree less than the target value .figure [ fig : sep ] ( a ) shows the _ initial in - degree distribution _ of all pages such that the initial degree was less than 1000 and the accumulated final degree greater than 1000 .the case of different target final degree values is discussed in supporting information .if the growth of the number of hyperlinks acquired by a page was based purely on pa ( i.e. , all pages have the same talent / fitness ) , then only pages with initial degree greater than a certain threshold would end up with final degree greater than a thousand .clearly , the empirical data shows that it is not the case : there are _ talented winners _ who have very low initial in - degree and yet end up as winners ; similarly , there are _ experienced losers _ who start with a large in - degree ( i.e. , greater than the cut - off ) but yet end up with cumulative in - degree less that 1k .figure [ fig : sep ] ( b ) shows the number of talented winners and experienced losers as a function of the cut - off , and for the sake of fair comparison , we pick a value for the cutoff such that the number of talented winners equals the number of experienced losers .thus , we find that for this sample set , _ the web collectively picked talented winners _ , and displaced an equal number of more experienced pages , thus striking a balance between talent and experience . as analyzed in supporting information , the percentage of talented winners seems to remain relatively constant as the target degree is varied . 
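a minimal sketch of the winner/loser bookkeeping behind fig. [fig:sep](b), assuming arrays of initial and accumulated final in-degrees; the brute-force search over candidate cutoffs and all function and variable names are choices of this sketch:

```python
import numpy as np

def talented_winner_fraction(k_init, k_final, target=1000):
    """count the winners (initial degree below target, final accumulated
    degree above target), find the cutoff k_c for which the number of
    talented winners (winners with k_init < k_c) equals the number of
    experienced losers (non-winners with k_c <= k_init < target), and
    return the cutoff together with the fraction of winners that are
    talented winners."""
    winners = (k_init < target) & (k_final > target)
    losers = (k_init < target) & ~winners
    cutoffs = np.unique(k_init[k_init < target])
    best_kc, best_gap = cutoffs[0], np.inf
    for kc in cutoffs:
        tw = np.count_nonzero(winners & (k_init < kc))
        el = np.count_nonzero(losers & (k_init >= kc))
        if abs(tw - el) < best_gap:
            best_kc, best_gap = kc, abs(tw - el)
    tw = np.count_nonzero(winners & (k_init < best_kc))
    return best_kc, tw / max(np.count_nonzero(winners), 1)
```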
what does the fitness distribution look like for pages with similar experience ?figure [ fig : web_fit_log ] ( b ) shows the fitness distributions of pages with similar initial in - degree , and hence , similar experience .they all are exponentially distributed , except that the average fitness is a function of the initial degrees of the nodes .figure [ fig : avgfitvsdeg ] shows the average fitness as a function of initial in - degree .it shows that the average fitness is largest for nodes with least experience , and decreases as a pl until about an in - degree value of 100 ; it levels off after that .thus the web , seems to give a preferential treatment ( even though , only exponentially rarely ) to pages with low record / experience , and then treats them the same statistically once they are beyond a threshold .thus , the web encourages pages with low or little experience just a bit more than the mature pages ; but for any group , it judges talent quite conservatively keeping the distribution exponential . ;thus always allowing a few new comers to break in .the macroscopic structure of the web is , thus , being guided by a tension between experience / entitlement and talent / potential .the concept of fitness has implications on how we rank the importance and attractiveness of web pages . in the _ discussion _ section, we propose that one can use the fitness estimates of the pages to boost their rankings ; this way , pages with low overall degree but that are growing fast will get higher ranking .* estimating the fitness of webpages : talents are exponentially rare . *if the fitness with deletions model is indeed applicable to the web , the accumulated degree of each node should follow eq .[ k_accum ] as discussed in _ materials and methods_. in particular , from eq .[ k_accum ] , taking the logarithm of both sides of the accumulated degree of a page , we get : where is some time - invariant offset . hence , the slope of the linear fit of the logarithm of the accumulated degree and time gives node s growth exponent .note that the fitness value is related to the growth exponent of a node by a linear transformation with constant coefficients ( see eq .[ beta ] ) .thus , the distribution characteristics of fitness can be obtained by measuring the growth exponent of each node . the methodology for measuring the distribution of the growth exponentsis described as follows : first , we identify about 10 million webpages that persist through all 13 months from june 2006 to june 2007 .for each of these webpages , the set of in - neighbors are identified for all months .the accumulated in - degree of a node at any month is the sum of the in - neighbors up to that particular month . in accordance with eq .[ eq : k_evol_accum ] , after taking the logarithm of the accumulated in - degree and time ( measured in months ) , the slope of the linear ordinary least - square fit ( i.e. 
the empirical growth exponent ) along with the pearson correlation coefficient are obtained for each webpage .we will refer to this methodology as the growth method ; in the supporting information , we present an alternative methodology , the direct kernel method , to estimate the fitness of webpages ; the results from the alternative method is consistent with the results from the growth method .we found that a large fraction of webpages do not gain any in - connection at all during the entire 1-year period .we consider a webpage to have a _zero _ growth exponent if its in - degree values increases two times or less during the 13 months .we found that only 6.5% of the webpages have _ nonzero _ growth exponents .we will focus our study on the set of nodes with nonzero growth exponents .note that the set of webpages with zero growth exponents essentially introduces a delta function at the origin in a fitness distribution plot .it is simple to check that the delta function does not impact the derivation of results and hence omitted from discussion for simplicity .an overwhelming fraction of the linear fit produces a correlation coefficient of 0.8 or more , with an average correlation value of 0.89 ( see supporting information ) .thus , our empirical measurement is consistent with the model that the evolution of node in - degree as a function of time follows a power - law as described in eq .[ eq : k_evol_accum ] for majority of the webpages .we plot the distribution of the growth exponents for the set of nodes with correlation coefficient of 0.8 or more in fig .[ fig : web_fit_log ] .the distribution of the growth exponents has a mean of 0.30 and clearly follows an exponential curve with a truncation around and a slope of in the log - linear plot ( i.e. a characteristic parameter of ) . since node fitness and the growth exponent are related by a linear transformation involving the constants and as , the fitness distribution is also well modeled by the same form of a truncated exponential .+ + * examples of high - talent webpages . 
*we now conduct checks to see if the webpages identified to have a large growth exponent indeed contain interesting or important content that warrants the title of being highly fit or `` talented '' .we manually inspected the several highest - fitness pages in our dataset .one example is a webpage from the john muir trust website that calls on people to explore nature ( http://www.jmt.org/journey ) .many in - links to this page is from other sites on nature and outdoor activities .another example is the webpage that reports the crime rate of the us from 1960 to 2006 ( http://www.disastercenter.com/crime/uscrime.htm ) .this url has many in - links from other sites that discuss different crimes such as murder .+ + * power law degree distribution of the webpages with the same age .* for scientific citation networks , it is known that the in - degree distribution of the papers published in the same year follows a power law ( see the isi dataset in fig .1(a ) in ) .however , no parallel study has been performed for the web .using our temporal web dataset , we studied the in - degree distribution of the set of webpages with the same age .the in - degree distribution is found to follow a power law with an exponent of for over three decades ( see fig .[ fig : same_age ] ) .this result is consistent with the empirical finding by adamic and huberman that the degree distribution of web hosts with the same age has a large variance .furthermore , the power law nature of the in - degree distribution is consistent with our theoretical prediction from eq .[ eq : degdist_sameage ] given the fitness distribution is found to be a truncated exponential ( see _ materials and methods _ ) .in contrast , a network dynamic model that does _ not _ account for fitness has a small variance for the nodes with the same age , which leads to the effect that the `` rich '' node must be the old node .in fact , this is the basis of the issue raised by adamic and huberman .thus , _ the fitness - based model naturally generates the power law degree distribution for the set of nodes with the same age _ , which is not explained by other existing models that do not account for fitness such as .+ + * ad hoc characteristics of the web and the resilience of the power law exponent . *we now discuss the webpage removal process as observed in our dataset . in our analytical model , a nodeis removed uniformly randomly ( i.e. independent of node degree ) .we found empirical evidence to support the uniform random removal assumption : we observed that the degree distribution of the set of removed nodes that disappear in a given month is similar to the degree distribution of all nodes ( see supporting information ) .recall that the turnover rate is defined as the average number of nodes removed per node added . from our dataset , the turnover rate is measured to be ( i.e. for every new webpage inserted , 0.91 webpage is removed per unit time ) .however , this figure is an overestimate of the true turnover rate on the web , since we are examining a fixed set of web hosts .therefore , we also need to account for the growth in the number of web hosts .nevertheless , even after accounting for the source of growth from the insertion of web hosts , web still has a minimum turnover rate of 77% ( see supporting information ) . despite the high rate of node turnovers, the power law degree distribution is found to be very stable ( see fig . 
[fig : gamma_month ] ) .this finding is consistent with our ad hoc fitness model prediction that the power law exponent of the degree distribution stays constant for varying rates of node deletion for a truncated exponential fitness distribution ( see eq .[ gammais2 ] in _ materials and methods _ ) .the resilience of the power law exponent is in stark contrast to the result obtained for the pa - with - deletion model ( without any fitness variance ) , where the power law exponent is found to diverge rapidly as .thus , _ the natural variation of node fitness provides a self - stabilization force for the power law exponent of the degree distribution under high rate of node turnovers_. + + * talented winners versus experienced losers . * in the _ introduction _ , we proposed the idea of talented winners and experienced losers and how they are identified in our empirical web dataset for a given target degree .for the particular case of , we find that 48% of the winners are talented winners ( see fig .[ fig : sep ] ) , who successfully displaced the experienced losers ( i.e. the nodes with higher initial in - degrees but fail to become a winner ) .this observation is seemingly paradoxical : how can talents emerge to win close to half of the times when talents are exponentially rare ?we seek to understand the interplay between experience and talent through analytical modeling .consider a node with the initial degree in month ( i.e. june , 2006 , the start of our observation period ) . in order for the node to achieve in month ( i.e. june , 2007 , the end of our observation period ) ,the growth exponent of the node must exceed the critical value : .the fraction of nodes that are winners is simply given as : where is the complementary cumulative distribution function ( ccdf ) of the growth exponent distribution and is the initial degree distribution in month .thus , one can find the fraction of winners for a given by performing numerical integration of eq .[ eq : frac_win ] .we now introduce the cutoff : the set of winners with an initial degree are denoted as the _ talented winners _ , since they start with a low initial degree but nevertheless reach the target degree in month ; the set of losers with an initial degree are denoted as the _ experienced losers _ , since they start with a high initial degree but still fail to reach the target degree in month .we can solve for the critical cutoff such that the number of talented winners is equal to the number of experienced losers ( i.e. the talented winners displace the experienced losers ) : the above equation can be solved numerically to obtain .now , the fraction of talented winners or experienced losers is simply given by : . from our empirical web data , the growth exponent of the nodes is distributed according to a truncated exponential function with the parameter and the truncation .the initial degree distribution is a power law with exponent . substituting the empirically obtained and functions into eq .[ eq : frac_win ] and [ eq : kcut ] , we use numerical integration to find the fraction of talented winners for the target degree and obtain the theoretical prediction of 48.8% , which matches well with the empirical measurements obtained as described in fig .[ fig : sep ] ( see supporting information for theoretical and measurement results for different target degrees ) . 
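the numerical integration of eq. [eq:frac_win] can be sketched as follows. treating the initial degree distribution as a continuous power law on [k_min, k_max], the parameter names, and leaving the critical exponent as a user-supplied callable are conveniences of this illustration, since the exact expressions are not fully legible in the extracted equations:

```python
import numpy as np
from scipy.integrate import quad

def ccdf_trunc_exp(beta, lam, beta_max):
    """complementary cdf of an exponential distribution with rate lam,
    truncated to the interval [0, beta_max]."""
    beta = np.clip(beta, 0.0, beta_max)
    return (np.exp(-lam * beta) - np.exp(-lam * beta_max)) / (1.0 - np.exp(-lam * beta_max))

def winner_fraction(beta_crit, lam, beta_max, gamma0, k_min, k_max):
    """numerically evaluate the fraction of winners,
    the integral over k0 of P(k0) * Pr(beta > beta_crit(k0)),
    for a power-law initial degree distribution P(k0) proportional to
    k0**(-gamma0) on [k_min, k_max] and a truncated-exponential
    growth-exponent distribution.  beta_crit is a callable giving the
    critical growth exponent needed to reach the target degree from k0."""
    norm, _ = quad(lambda k: k ** (-gamma0), k_min, k_max)
    integrand = lambda k: (k ** (-gamma0) / norm) * ccdf_trunc_exp(beta_crit(k), lam, beta_max)
    frac, _ = quad(integrand, k_min, k_max, limit=200)
    return frac
```

plugging in the empirically estimated parameters should reproduce a fraction close to the 48.8% quoted above for a target degree of 1000, which provides a check on the implementation.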
for a given system with known talent and initial degree distribution, one can now estimate the fraction of talented winners using our analytical model .the competition between experience and talent arises in all aspects of society on a frequent basis : from choosing an applicant to fill a highly coveted job to deciding which political candidate to vote for .although the study of the interplay between experience vs. talent has long interested scientists and investigators , and much on the topic has been theorized about , large - scale empirical study on this topic from a quantitative perspective has been lacking .in this paper , taking advantage of the large , open and dynamic nature of the world wide web , we find an intricate interplay between talent and experience .talents are empirically found to be exponentially rare .however , through empirical measurements and theoretical modeling , we show that the exponentially distributed talent accounts for the following observed phenomena : the heavy - tailed power law in - degree distribution of the web pages born at the same time , the preservation of the low power law exponent even in the face of high rates of node turnovers , and most intriguing of all , talented winners emerge and displace the experienced losers in just slightly less than half of all winning cases ! beyond the interesting findings , we discuss several issues associated with this work .while our data is statistically consistent with the model assumption of a constant fitness for each page , our observation period is over a relatively short period of one year . for longer periods, one would expect the fitness of a page to change .for example , occasionally , a page that has been lying dormant for a while might find its content become topical and , hence , its fitness suddenly increases , allowing it to start accumulating links and becoming popular .such pages can be referred to as sleeping beauties . developing a model that accounts for time - varying fitness can be a subject for future work .in addition , the sample size on the order of tens of millions of nodes used in this study is arguably large , especially in comparison to studies from the social sciences .however , the size of the web is currently on the order of billions of pages .nevertheless , the source of our data , the stanford webbase project , to the best of our knowledge is the largest publicly accessible web archive available for research studies . finally , the statistics on node in - degrees as reported in this work is measured from the crawled web graph ; potential in - links from webpages not included in the crawl are not accounted for .future work focusing on examining larger web samples can mitigate these limitations . on the world wide web ,the problem of search engine bias or the `` entrenchment effect '' ( i.e. 
the `` rich - get - richer '' mechanism ) has received considerable attention from a broad audience from the popular press to researchers .however , researchers have shown evidence that the `` rich - get - richer '' mechanism might be less dominant than previously thought ; nevertheless , search engine bias and the `` entrenchment effect '' remains a concern .the findings in this paper present an alternative perspective on this problem and show that talents , while being exponentially rare , are frequently afforded the opportunity to overtake more `` entrenched '' web pages and emerge as the winner .currently , for any given query , pages are ranked based on a number of metrics , including the relevancy score of the query keywords in a document , and the document s pagerank , which is computed based on the in - degree ( or experience ) of the page and the hyperlink structure of the web at the time of the crawl . in order to avoid the entitlement bias potentially introduced due to pagerank, a number of researchers have advocated that one should also boost low pagerank pages , for example , by randomly introducing them among the top pages .the fitness of a page could be added as another metric that could influence the ranking .the determination of the exact functional form of how the fitness , , of a page would influence its rank would require considerable experimentation and editorial evaluations , but a promising start would be to multiply the currently computed ranks by , where the exponent is tuned based on quality assessment and testing .this would allow users to find pages that do not have high page rank yet , but are catching up fast .we expect such fitness - based ranking algorithms to have widespread applications beyond the web in other domains that employ ranking algorithms .we will note in passing that as with any other ranking algorithm based on link structure , the proposed ranking scheme must be used in conjunction with link farm detection algorithms to minimize the effect of link spamming that might try to influence the estimation of the fitness factors . besides the web ,the methodologies developed in this work is applicable for studying other complex networks and systems such as the citation network of scientific papers and the actor collaboration social network , where the interplay between `` experience '' and `` talent '' is also interesting .the fitness distribution is arguably an important parameter for dynamically evolving networks .the empirical study and theoretical models presented in this paper pave the road for studying the fitness characteristics of other systems , which will allow us to better understand , characterize and model a broad range of networks and systems .* dataset . * our dataset of the world wide web was obtained from the stanford webbase project .webbase archives monthly web crawls from 2006 to 2007 .we downloaded a total of 13 crawls for a one year period from june 2006 to june 2007 .these crawls track the evolution of 17.5 thousand web hosts with each crawl containing in excess of 22 million webpages .the set of hosts consists of a diverse sample of the web : it contains 5.4 thousand .com hosts , 4.7 thousand .org hosts and 2.6 thousand .edu hosts .this set also includes many foreign hosts , such as hosts from china , india and europe .+ + * a fitness - based model for ad hoc networks . 
*the existing `` preferential attachment with fitness model '' is specified as follows : at each time step , a new node with fitness joins the network , where is chosen randomly from a fixed fitness distribution ; node joins the network and makes links to nodes .a link is directed to node with probability : where is the in - degree of the node .we extend the fitness model to account for node deletion .the new model , which may be called `` fitness with deletion model '' , has the following extra dynamic added to the original fitness model : at each time step , with probability , a randomly selected node is deleted , along with all of its edges .we present the analysis of the model using the continuous mean - field rate equation approach as introduced in .other approaches would include the generating function method as discussed in and the rigorous mathematical analytical method presented in .however , we prefer the mean - field approach for its simplicity . in addition , the analytical results are verified by simulations . on another note ,since the web is a directed graph , we note that the model can be easily generalized into a dynamic directed network model ( details are discussed in the supporting information ) . in the fitness with deletion model, we show that the evolution of follows a power - law ( see supporting information ) : where the _ growth exponent _ is a function of the fitness : the parameter is given by : where is the maximum fitness in the system .we now examine the case where the fitness distribution is a truncated exponential , which is shown to empirically characterize the fitness distribution of webpages .when is distributed exponentially in the interval $ ] , we have : .the constant can be determined from eq .[ integral ] .for large compared with 1/ , we have , where is negligibly small .thus , according to eq .[ beta ] , the growth exponent is given by for maximum fitness , we have , where is negligibly small . since the power law exponent is dominated by the highest , we invoke the _ scaling relation _ ; we obtain : where is negligibly small .thus , the power law exponent stays at 2 regardless of the deletion rate ( see supporting information for the detailed derivations and justifications on assumptions made ) .this is a rather surprising result . as was shown in , for plain preferential attachment dynamics ( where all nodes have the same fitness ) , the power - law exponent depends on as , and diverges as goes to 1 .the introduction of fitness with a truncated exponential distribution _ stabilizes _ the power - law exponent , in the sense that the exponent remains close to 2.0 and does not diverge , regardless of the value of . to verify the result that the power - law exponent does not depend on the turnover rate , we performed large - scale simulations and confirmed that the power - law exponent stays constant at 2.0 even under high rates of node turnovers ( see supporting information ) .+ + * degree distribution of nodes with the same age . 
* given the fitness distribution and the degree , , that grows exponentially with fitness for a fixed time interval , we have , where is some constant . the degree distribution of nodes with the same age is given as : . for the case that the fitness distribution is a truncated exponential , the degree distribution follows a power law : where the power law exponent is . effectively , the light - tailed distribution in fitness is _ amplified _ into the heavy - tailed degree distribution for nodes born at the same time through the pa mechanism . the phenomenon of heavy - tailed degree distribution of nodes with the same age has also been observed and analyzed in other contexts . + + * the evolution of the accumulated node degree . * in our model , a node would gain neighbors as well as lose neighbors when the neighboring nodes are deleted . as a result , when we track the evolution of a node s degree over time , the time series shows a number of upward and downward jumps , making it difficult to estimate the growth exponent from eq . [ power ] accurately . in order to reduce noise in the data , we can instead track the evolution of a node s _ accumulated _ degree over time . we define the set of accumulated neighbors of a node to include previous neighbors that have been deleted in addition to the current set of neighbors . thus , the accumulated node degree is the size of the set of accumulated neighbors . it is simple to derive that the evolution of the accumulated degree of node is ( see supporting information ) : where the growth exponent is found to be . note that the growth exponent for the evolution of the accumulated node degree is identical to the growth exponent of node degree as given in eq . [ beta ] ( i.e. ) . the authors would like to thank gary wesley of the stanford webbase project for providing patient help and instructions in downloading the web crawl data .
[ figure caption fragment : until about , and then levels off to a constant value . thus , the web on the average gives a slight fitness boost to the pages with low experience , but then treats them statistically the same once they have experience above a certain value . ]
[ figure caption ( two panels ) : is displayed here . note that the growth exponent is an affine function of the underlying fitness parameter ( see eq . [ beta ] ) ; hence , the fitness distributions are also truncated exponentials . ]
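the `` fitness with deletion '' dynamics described above is straightforward to prototype . the following python sketch is not the authors' code : the truncated - exponential parameter lam , the attachment weight fitness * ( in - degree + 1 ) ( the + 1 lets freshly created nodes receive links at all ) , the deletion probability c and the crude hill estimate of the tail exponent at the end are all illustrative assumptions , and the run is far smaller than the large - scale simulations mentioned in the text .

import numpy as np

rng = np.random.default_rng(0)

def simulate(T=20000, m=3, c=0.2, lam=5.0):
    # preferential attachment with fitness and uniform node deletion (toy version)
    fitness = np.empty(T)                 # fitness of the node created at step t
    indeg = np.zeros(T)                   # current in-degree of each node
    alive = np.zeros(T, dtype=bool)       # False once a node has been deleted
    out_edges = [[] for _ in range(T)]    # targets of each node's out-links
    for t in range(T):
        # fitness drawn from a truncated exponential on [0, 1] (illustrative choice)
        fitness[t] = -np.log(1 - rng.random() * (1 - np.exp(-lam))) / lam
        live = np.flatnonzero(alive)
        if live.size:
            w = fitness[live] * (indeg[live] + 1.0)   # attachment weights
            targets = rng.choice(live, size=min(m, live.size),
                                 replace=False, p=w / w.sum())
            out_edges[t] = list(targets)
            indeg[targets] += 1
        alive[t] = True
        # with probability c, delete a uniformly chosen node together with its edges
        if rng.random() < c and alive.sum() > m + 1:
            victim = rng.choice(np.flatnonzero(alive))
            indeg[out_edges[victim]] -= 1             # its out-links disappear
            out_edges[victim] = []
            alive[victim] = False                     # in-links to it die with it
    return indeg[alive]

k = simulate()
k = k[k > 0]
tail = np.sort(k)[-300:]
gamma = 1.0 + tail.size / np.log(tail / tail[0]).sum()   # crude Hill estimate
print("estimated in-degree tail exponent:", round(float(gamma), 2))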
we use sequential large - scale crawl data to empirically investigate and validate the dynamics that underlie the evolution of the structure of the web . we find that the overall structure of the web is defined by an intricate interplay between experience or entitlement of the pages ( as measured by the number of inbound hyperlinks a page already has ) , inherent talent or fitness of the pages ( as measured by the likelihood that someone visiting the page would give a hyperlink to it ) , and the continual high rates of birth and death of pages on the web . we find that the web is conservative in judging talent , and the overall fitness distribution is exponential , showing low variability . the small variance in talent , however , is enough to lead to experience distributions with high variance : the preferential attachment mechanism amplifies these small biases and leads to heavy - tailed power - law ( pl ) inbound degree distributions over all pages , as well as , over pages that are of the same age . the exponential distribution of fitness is also key in countering the potentially destabilizing effect of removal of pages : it stabilizes the exponent of the pl to a low value , and preserves the heavy tail and the resulting hierarchy , even in the face of very high rates of uniform deletion of web pages . the balancing act between experience and talent on the web allows newly introduced pages with novel and interesting content to grow fast and catch up or even surpass older pages who have already built their web presence . in this regard , it is much like what we observe in high - mobility and meritocratic societies : people with entitlement continue to have access to the best resources , but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders . finally , the estimates of the fitness of webpages and their distribution have potential practical applications in ranking search engine query results , which can allow users easier access to promising web pages that have not yet become popular .
the notion of mostly contracting center refers to partially hyperbolic diffeomorphisms and means , roughly , that all lyapunov exponents along the invariant center bundle are negative .it was introduced by bonatti , viana as a more or less technical condition that ensured existence and finiteness of physical measures . since then, it became clear that maps with mostly contracting center have several distinctive features , that justify their study as a separate class of systems .for instance , andersson proved that they form an open set in the space of diffeomorphisms , and that the physical measures vary continuously on an open and dense subset .castro and dolgopyat studied the mixing properties of such systems .moreover , dolgopyat obtained several limit theorems in a similar context .in addition , melbourne , matthew proved an almost sure invariance principle ( a strong version of the central limit theorem ) for a class of maps that includes some partially hyperbolic diffeomorphisms with mostly contracting center .burns , dolgopyat , pesin studied maps with mostly contracting center in the volume preserving setting , obtaining several interesting results about ergodic components , stable ergodicity , and other aspects of the dynamics .moreover , burns , dolgopyat , pesin , pollicott studied stable ergodicity of gibbs -states , in the general ( non - volume preserving ) setting . before all that , kan exhibited a whole open set of maps on the cylinder with two physical measures whose basins are both dense in the ambient space .his construction was extended by ilyashenko , kleptsyn , saltykov .see also .as it turns out , these maps have mostly contracting center .this construction can also be carried out in manifolds without boundary , but then it is not clear whether coexistence of physical measures can still be a robust phenomenon .this is among the questions we aim to answer in this paper : we find negative answers in some situations .systems with mostly contracting center have been found by several other authors .let us mention , among others : ma s examples of robustly transitive diffeomorphisms that are not hyperbolic ( see also and ) ; dolgopyat s volume preserving perturbations of time one maps of anosov flows ; volume preserving diffeomorphisms with negative center lyapunov exponents and minimal unstable foliations , see and also ; accessible skew - products over anosov diffeomorphisms which are not rotation extensions , see .new examples will be given in section [ s.geo-explode ] . in what followswe give the precise statements of our results . in this paper , a diffeomorphism called _ partially hyperbolic _ if there is a continuous invariant splitting of the tangent bundle and there are constants and such that * for every and every ( we say that is uniformly expanding ) .* is dominated by : for every nonzero , , and every . the _ unstable bundle _ is automatically uniquely integrable : there exists a unique foliation of with leaves tangent to at every pointunstable foliation _ is invariant , meaning that for every and the leaves are , actually , as smooth as the diffeomorphism itself .we call _ -disk _ any embedded disk contained in a leaf of the unstable foliation . a partially hyperbolic map has _ mostly contracting center _ ( bonatti , viana ) if , given any -disk , one has for every in some positive lebesgue measure subset . 
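the displayed formulas in the two definitions above did not survive extraction ; the latex below restates them in the standard form used for this class of maps , with conventional names for the bundles and constants . this is a reconstruction , not a quote of the paper's exact conditions .

% partial hyperbolicity : continuous Df - invariant splitting TM = E^{uu} \oplus E^{cs} with
\[
  \| Df(x)\,v \| \;\ge\; \sigma \,\| v \|
  \qquad \text{for every } v \in E^{uu}_x \text{ and every } x \in M , \quad \sigma > 1 ,
\]
\[
  \frac{\| Df(x)\,v^{cs} \|}{\| v^{cs} \|} \;\le\; \tau \,
  \frac{\| Df(x)\,v^{u} \|}{\| v^{u} \|}
  \qquad \text{for all nonzero } v^{cs} \in E^{cs}_x ,\ v^{u} \in E^{uu}_x , \quad \tau < 1 .
\]
% mostly contracting center : for every disk D inside an unstable leaf ,
\[
  \limsup_{n \to \infty} \frac{1}{n} \, \log \big\| Df^{\,n} |_{E^{cs}_x} \big\| \;<\; 0
  \qquad \text{for every } x \text{ in some positive Lebesgue measure subset of } D .
\]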
a _ physical measure _ for an invariant probability whose _ basin _ has positive volume .bonatti , viana proved that every diffeomorphism with mostly contracting center has a finite number of physical measures , and the union of their basins contains almost every point in the ambient space .see for several related results .the set of lebesgue density points of will be called _ essential basin _ of and will be denoted .let be a partially hyperbolic diffeomorphism with mostly contracting center .we say that a hyperbolic saddle point has _ maximum index _ if the dimension of its stable manifold coincides with the dimension of the center - stable bundle .a _ skeleton _ of is a collection of hyperbolic saddle points with maximum index satisfying * for any there is such that the stable manifold has some point of transversal intersection with the unstable leaf through . * for every , that is , the points in have no heteroclinic intersections .observe that a skeleton may not exist ( for instance if has no periodic points ) .also , the skeleton needs not be unique , when it exists . on the other hand , existence of a skeleton is a -robust property , as we will see in a while .[ t.maina ] let be a diffeomorphism with mostly contracting center .then admits some skeleton .moreover , if is a skeleton then for each there exists a distinct physical measure such that 1 .the closure of and the homoclinic class of the orbit both coincide with , which is the finite union of disjoint -minimal component , i.e. , each unstable leaf in every component is dense in this setting .the closure of coincides with the closure of the essential basin of the measure . in particular, the number of physical measures is precisely .moreover , for . in the proof ( section [ s.srbgeostructure ] ) we just pick , for each physical measure a hyperbolic periodic point with maximum index : such points constitute a skeleton .when their stable manifolds are everywhere dense , we get from part ( b ) of the theorem that there exist several physical measures , whose basins are intermingled .such examples , that generalize the main observation of kan , are exhibited in section [ s.geo-explode ] .theorem [ t.maina ] provides us with a tool to mirror physical measures into hyperbolic periodic points , and this can be used to describe the way physical measures vary when the dynamics is modified . 
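for reference , the two skeleton conditions can be written out as follows ; the orbit notation and the transversality symbol are our choices for the elided symbols and may differ from the paper's .

% skeleton S = { p_1 , ... , p_k } : hyperbolic saddles with maximum index such that
\[
  \text{(i)}\quad \text{for every } x \in M \ \text{there is } i \ \text{with}\quad
  W^{s}\big(\mathcal{O}(p_i)\big) \pitchfork \mathcal{F}^{u}(x) \neq \emptyset ,
\]
\[
  \text{(ii)}\quad W^{u}\big(\mathcal{O}(p_i)\big) \cap W^{s}\big(\mathcal{O}(p_j)\big) = \emptyset
  \quad \text{for every } i \neq j ,
\]
% i.e. the saddles in the skeleton have no heteroclinic intersections with one another .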
starting from a skeleton for , we may consider its continuation for any nearby .then any maximal subset of satisfying condition ( ii ) is a skeleton for .that is the main content of the following theorem : [ t.mainb ] there exists a neighborhood of such that , for any , any maximal subset of the continuation which has no heteroclinic intersections is a skeleton .consequently , the number of physical measures of is not larger than the number of physical measures of .in fact , these two numbers coincide if and only if there are no heteroclinic intersections between the continuations .moreover , in that case , each physical measure of is close to some physical measure of , in the weak topology .in addition , restricted to any subset of where the number of physical measures is constant , the supports of the physical measures and the closures of their essential basins vary in a lower semi - continuous fashion with the dynamics , both in the sense of the hausdorff topology .of course , this implies that the number of physical measures is an upper semi - continuous function of the dynamics .consequently , this number is locally constant on an open and dense subset of diffeomorphisms with mostly contracting center .these facts had been proved before by andersson .one important point in our approach is that we give a definite explanation for possible `` collapse '' of physical measures : one physical measure is lost for each heteroclinic intersection that is created between the continuations of elements of the skeleton .the precise statements are in propositions [ p.weakcollapse ] and [ p.strongcollapse ] .we also want to explain how the basins of the physical measures vary with the dynamics in the following measure theoretical sense .define the pseudo - distance in the space of measurable subsets of .[ t.mainc ] let be any subset of diffeomorphisms with mostly contracting center such that all the diffeomorphisms in have the same number of physical measures .then their basins vary continuously with , relative to the pseudo - distance . in subsection [ ss.examples ]we will show how this theory can be applied to various examples , including those of kan .in particular , theorem [ t.mainc ] shows that the basins are quite stable from a measure - theoretical point of view .let be a partially hyperbolic diffeomorphism with mostly contracting center .as before , denotes the corresponding invariant splitting and .we call _ gibbs -state _ of any invariant probability absolutely continuous along strong unstable leaves .it follows that the support is -saturated , that is , it consists of entire unstable leaves . 
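two objects introduced above are written out below in the form they are usually given : the pseudo - distance on measurable subsets ( presumably the volume of the symmetric difference ) and the absolute - continuity property defining a gibbs u - state . both displays are reconstructions of elided formulas and should be checked against the source .

% pseudo - distance on measurable subsets of M used in theorem [ t.mainc ] :
\[
  d(A_1 , A_2) \;=\; \operatorname{vol}\big( A_1 \,\triangle\, A_2 \big) ,
  \qquad A_1 , A_2 \subset M \ \text{measurable} .
\]
% gibbs u - state : an f - invariant probability whose conditional measures along
% strong - unstable plaques are absolutely continuous with respect to Lebesgue measure :
\[
  \mu_{\xi} \,\ll\, \operatorname{Leb}_{\xi}
  \qquad \text{for almost every unstable plaque } \xi .
\]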
the notion of gibbs -state goes back to pesin , sinai and was used by bonatti , viana to construct the physical measures of diffeomorphisms with mostly contracting center .indeed , they showed that such diffeomorphisms have finitely many ergodic gibbs -states , and these are , precisely , the physical measures .gibbs -states also provide an alternative definition of mostly contracting center : has mostly contracting center if and only if all lyapunov exponents along the bundle are negative for every ergodic gibbs -state .this is related to the fact that , given any disk inside an unstable leaf , any cesaro accumulation point of the iterates of ( normalized ) lebesgue measure on is a gibbs -state .in fact , more is true : every accumulation point of is a gibbs -state , for almost every .another useful property is that the space of all gibbs -states is convex and weak compact .the extremal elements are the ergodic gibbs -states .moreover , is an upper semi - continuous function of , in the sense that the set is closed .proofs of these facts can be found in chapter 11 of .the following fact will be used several times in what follows : [ p.diffminimalfoliationinsupport ] if is a diffeomorphism with mostly contracting center then the supports of its physical measures , are pairwise disjoint . moreover, the support of every has finitely many connected components and each connected component is minimal for the unstable foliation ( every unstable leaf is dense ) . the first step is to construct a skeleton : [ p.existenceofgraph ] every partially hyperbolic diffeomorphism with mostly contracting center admits some skeleton . since the center lyapunov exponents are all negative , every physical measure , is a hyperbolic measure ( meaning that all the lyapunov exponents are different from zero ) .so , by katok , there exist periodic points with maximum index and whose stable manifold intersects transversely the unstable leaf of some point in the support of .since the support is -saturated , invariant , and closed , it follows that . for each choose one such periodic point ; we are going to show that is a skeleton for .consider any and let be a disk around inside the corresponding unstable leaf .let be any cesaro accumulation point of the iterates of the volume measure on . as observed before, is a gibbs -state and , hence , may be written as .choose such that is non - zero .let be a neighborhood of small enough that the unstable leaf through any point in intersects the stable manifold transversely .then , because , and so .consequently , there is arbitrarily large such that .this implies that the unstable manifold of intersects transversely . by invariance, it follows that intersects transversely the stable manifold of some iterate of .this proves condition ( i ) in the definition of skeleton . condition ( ii ) is easy to prove .indeed , on the one hand , is contained in .on the other hand , this support can not intersect for any : otherwise , would be in , which would contradict the fact that the supports are pairwise disjoint .thus , there can indeed be no heteroclinic connections .now , we use the skeleton to analyse the physical measures : [ p.numberofmeasures ] let be a diffeomorphism with mostly contracting center .suppose that is a skeleton of . then * coincides with the number of physical measure of ; * the closure of coincides with ; * the closure of coincides with the closure of . 
to prove claim ( a ) it suffices to show that all skeletons have the same number of elements ( because the claim holds for the skeleton constructed in proposition [ p.existenceofgraph ] )let be any other skeleton .by condition ( i ) in the definition , for each there is some such that intersects transversely .choose any such ( we will see in a while that the choice is unique ) .for the same reason , for this there exists some such that intersects transversely .it follows that accumulates on which , by condition ( ii ) in the definition , can only happen if .thus , and are heteroclinically related to one another . since different elements of either skeleton do not have heteroclinic intersections , this implies that is unique and the map is injective . reversing the roles of the two skeletons , we also get an injective map which , by construction , is the inverse of the previous one .thus , these maps are bijections and , in particular , .now take to be the skeleton obtained in proposition [ p.existenceofgraph ] .up to renumbering , we may assume that the in the previous construction . also by construction , each is contained in the closure of , which coincides with the support of .since the unstable foliation is minimal in each connected component of the support , this implies that the closure of coincides with . to finish the proof of claim ( b ) it remains to show that this coincides with the homoclinic class of .we only have to prove that contains the closure of , since the converse is an immediate consequence of the definition of homoclinic class . to this end, let be any disk contained in the unstable manifold of .let be any cesaro accumulation point of the iterates .this is a gibbs -state and it gives full measure to ( because and the latter is a compact invariant set ) .given that there are finitely many ergodic gibbs -states , and their supports are disjoint , this implies that .then , by the same argument that we used in the previous proposition , there exists some large large such that intersects transversely .since is arbitrary , this means that homoclinic points are dense in the unstable manifold of , which implies the claim .it remains to prove the claim ( c ) .let be any disk contained in . by theorem 11.16 in , for lebesgue almost every sequence converges to some gibbs -state .this gibbs -state must be , because by proposition [ p.diffminimalfoliationinsupport ] and part ( b ) of proposition [ p.numberofmeasures ] , this is the unique ergodic gibbs -state that gives weight to the closure of .this proves that the basin of intersects on a full lebesgue measure subset .we claim that there exists a positive lebesgue measure subset inside that intersection such that the stable set of any point contain an -dimensional disk with uniform size ; moreover , these local stable disks constitute an absolutely continuous lamination ( that is , the holonomy maps of this lamination preserve zero measure sets ) .indeed , let be any compact ( non - invariant ) set with such that every point in has a pesin stable manifold with uniform size , and these stable manifolds constitute an absolutely continuous lamination ( existence of such sets is a classical fact in pesin theory ) .it follows from the previous paragraph that the forward trajectory of almost every accumulates on .thus one can find a neighborhood of inside such that some large iterate intersects on a positive lebesgue measure subset .just take .see also ( * ? ? 
?* lemma 6.6 ) for a similar statement .let be a lebesgue density point for inside .since the basin contains the stable sets of all points in , and these are transverse to , it follows that every point in the local stable disk of is also a lebesgue density for the basin in ambient space .in particular , is contained in .since accumulates on and the essential basin is -invariant , it follows that is contained in the closure of .now we prove the converse inequality .let be any lebesgue density point of the basin of in ambient space .using the fact that the unstable foliation is absolutely continuous ( see ) , we can find a small disk around inside the corresponding unstable leaf such that .let be a neighborhood of small enough that intersects transversely , for every .take .while proving part ( b ) we have shown that for such a point there exists arbitrarily large values of such that .then intersects transversely and , hence , intersects .since is arbitrary , it follows that is in the closure of . combining propositions [ p.existenceofgraph ] and [ p.numberofmeasures ] yields theorem [ t.maina ]. it will be convenient to separate the two conditions in the definition of skeleton .let us call _ pre - skeleton _ any finite collection of saddles with maximum index satisfying condition ( i ) , that is , such that every unstable leaf has some point of transverse intersection with for some .thus a pre - skeleton is a skeleton if and only if there are no heteroclinic intersections between any of its points .one reason why this notion is useful is that the continuation of a pre - skeleton is always a pre - skeleton : [ l.robustgraph ] let be a partially hyperbolic diffeomorphism which has a pre - skeleton .let , be the continuation of the saddles for nearby diffeomorphism . then is a pre - skeleton for every in a neighborhood of .this is a really a simple consequence of the fact that the unstable foliation depends continuously on the point and the dynamics .let us detail the argument .given any , take such that the unstable leaf has some transverse intersection with the stable manifold of some point in the orbit of .fix large enough so that is in the interior of the -neighborhood of inside and in the interior of the -neighborhood of the orbit of inside its stable manifold . then , since unstable leaves vary continuously with the point , for any in a small neighborhood of , there exists close to such that and intersect transversely at .let be a finite covering of and let .thus , has some transverse intersection with for every .since unstable leaves also vary continuously with the dynamics , it follows that there is a neighborhood of such that has some transverse intersection with for every and every .another reason why the notion of pre - skeleton is useful to us is that every pre - skeleton contains some skeleton . to prove thisit is convenient to introduce the following partial order relation , which will also be useful later on .for any two elements of a pre - skeleton define : if and only if it follows from the inclination lemma of palis ( ) that is transitive and thus a partial order relation .we say that is a _ maximal element _ if for every such that .two maximal elements and are _ equivalent _ if and .we call _ slice _ of any subset that contains exactly one element in each equivalence class of maximal elements .[ l.existenceskeleton ] let be a partially hyperbolic diffeomorphism which has a pre - skeleton .any slice of is a skeleton .let be a subset as in the statement . 
begin by noting that is also a pre - skeleton . indeed , since is assumed to be a pre - skeleton , for any there exists such that has some transverse intersection with .moreover , there exists some maximal element of such that . using the -lemma, it follows that has some transverse intersection with .moreover , up to replacing by some other maximal element equivalent to it , we may suppose that .this proves our claim . finally , by definition , there is no heteroclinic intersection between the elements of .so , is indeed a skeleton .now we are ready to give the proof of theorem [ t.mainb ] .the set is a pre - skeleton of , of course .so , by lemma [ l.robustgraph ] , there is a neighborhood of such that is a pre - skeleton for every . since diffeomorphisms with mostly contracting center form a open set ( by andersson ), we may find a neighborhood such that every has mostly contracting center . by lemma [ l.existenceskeleton ] ,every slice of is a skeleton for .since , it follows from theorem [ t.maina ] that the number of physical measures of is not larger than the number of physical measures of .indeed , these two numbers coincide if and only if is a skeleton for , that is , if there are no heteroclinic intersections between the continuations .this proves the first part of the theorem .now let be a sequence of diffeomorphisms converging to in the topology and suppose that is a skeleton of for any large .let be the physical measures ( ergodic gibbs -states ) . by theorem [ t.maina] , we may number these measures in such a way that each is supported on the closure of . up to restricting to a subsequence, we may assume that converges , in the weak topology , to some -invariant measure .by semicontinuity of the space of gibbs -states , every is a gibbs -state for .write as a convex combination of the physical measures of .we claim that . indeed , suppose that there is such that .then by theorem [ t.maina ] , we have that closure of .for large , this implies that has some transverse intersection with , because the unstable manifolds of hyperbolic periodic points vary continuously with the dynamics . using the corresponding fact for stable manifolds ,we conclude that has some transverse intersection with .this contradicts the assumption that is a skeleton of .this proves our claim , which yields the second part of the theorem . by the stable manifold theorem ( see ( * ? ? ?* theorem 6.2 ) and ) , for each , the local invariant manifolds and vary continuously with this implies that their closures vary in a lower semi - continuous fashion with , relative to the hausdorff topology . 
by parts ( b ) and ( c ) of proposition [ p.numberofmeasures ] , this means that both the supports and the closures of the essential basins of the physical measures vary lower semi - continuously with the dynamics , as claimed in the third part of the theorem .the proof of theorem [ t.mainb ] is complete .our next goal will be to analyse how physical measures and their basins vary with the dynamics .here we find a couple of conditions that ensure continuous dependence .this is a prelude to the next section , where we will analyse how physical measures may collapse as their basins explode .take to be a diffeomorphism with mostly contracting center with a skeleton .let be its continuation for nearby diffeomorphisms .[ c.localsinglestable ] let and be a sequence converging to in such that for every the point is a maximal element of and no other element of is equivalent to .then each has a physical measure on the closure of such that these physical measures converge to in the weak topology as . by lemma [ l.existenceskeleton] , each admits a physical measure supported on the closure of the unstable manifold of .suppose that does not converge to .we may assume that the sequence converges to some measure .then is a gibbs -state of and so we may write it as since , there exists such that . by the same argument as in the proof of theorem [ t.mainb ], we have that intersects transversely at some point , for every large . consequently, if is large enough then has some transverse intersection with .this implies that , which contradicts the assumption that is maximal and its equivalence class is formed by a single point .given and two saddle points and of diffeomorphism , we say that is not _ attainable _ from if there is a neighborhood of such that for any , where and are the analytic continuations of and , respectively .[ c.localrobuststable ] assume that is not attainable from any with .then the physical measure is stable , in the sense that for every in a neighborhood of there exists a physical measure which is close to in the weak topology .let be a neighborhood of as in the definition of non - attainability .let be any slice of .by lemma [ l.existenceskeleton ] , is a skeleton for .the assumption implies that is a maximal element of and its equivalence class consists of a single point .so , the conclusion follows from corollary [ c.localsinglestable ] .we start by giving a geometric and measure - theoretical criterion for a partially hyperbolic diffeomorphism to have mostly contracting center , using the notion of skeleton and a local version of the mostly contracting center property .then we use this criterion to give new examples of diffeomorphisms with any finite number of physical measures , whose basins are all dense in the ambient space .such examples are not stable : the number of physical measures may decrease under perturbation . indeed , for any proper subset of physical measures one can find a small perturbation of the original diffeomorphism for which those physical measures disappear ( their basins are engulfed by the basins of the physical measures that do remain ) .using different perturbations , one can approximate the original diffeomorphism by other diffeomorphisms having a unique physical measure , in such a way that converges to any given gibbs -state of . in particular , such examples are _ statistically unstable _ : the simplex generated by all the physical measures does not vary continuously . take to be a partially hyperbolic diffeomorphism with invariant splitting . 
as before ,denote .we start with a semi - local version of the notion of mostly contracting center .let be a compact -saturated -invariant subset of .we say that _ has mostly contracting center at _ if the center lyapunov exponents are negative for every ergodic gibbs -state supported on .then , we say that is _ an elementary set _ if there exists exactly one ergodic gibbs -state supported in and it satisfies .the same arguments as in theorem [ t.maina ] also yield a corresponding semi - local statement : if is an elementary set and is the corresponding gibbs -state , then * is a physical measure ; * has finitely many connected components and the unstable foliation is minimal in each connected component ; * if is any hyperbolic saddle with maximum index , then the closure of coincides with the closure of the essential basin of . contains some hyperbolic saddle with maximum index , by arguments in the proof of proposition [ p.existenceofgraph ] .[ p.newcretetion ] let , , be pairwise disjoint elementary sets , , , be the corresponding gibbs -states , and , be hyperbolic saddles with maximum index .if is a pre - skeleton , then it is a skeleton , and has mostly contracting center .moreover , are the physical measures of , and their basins cover a full lebesgue measure subset .if some unstable manifold intersects some stable manifold then , by the inclination lemma , the closure of intersects the closure of . by the definition of elementary sets ,this implies that intersects and , in view of our assumptions , that can only happen if .this proves that is a skeleton .now let us check that has mostly contracting center .it is part of the definition of elementary set that the center lyapunov exponents of are all negative , for every .so , to prove that has mostly contracting center it suffices to show that has no any other ergodic gibbs- states .suppose there exists some ergodic gibbs- state .it follows from the definition that there exists a -disk contained in some unstable leaf that intersects the basin of on a full lebesgue measure set .we claim that there exist and such that intersects the basin of .of course , this contradicts the fact that .thus , we are left to justify our claim .since is a pre - skeleton , there exist and such that intersects transversely at some point ( otherwise , the hausdorff limit of would contain some unstable leaf disjoint from , which would contradict the definition of pre - skeleton ) .again by the definition of gibbs -state , there exists a -disk and a full lebesgue measure subset formed by regular points of .since the center lyapunov exponents are negative , it follows from pesin theory that there exists a lamination whose laminae are local stable manifolds of almost every point .moreover , this _stable lamination _ is absolutely continuous .theorem 11.16 in gives that the time average of lebesgue almost every is a gibss -state .by the definition of elementary set , this gibbs -state must be .moreover , the orbit of any such must accumulate on the whole .in particular , is dense in . 
assuming that is large enough, is close to and , in particular , it cuts .the intersection is contained in the basin of , since for every .moreover , by absolute continuity of the lamination , the intersection has positive lebesgue measure .this implies that intersects the basin of .[ kan s example ] in this subsection , we use proposition [ p.newcretetion ] to construct new examples of diffeomorphisms with mostly contracting center and several physical measures , such that every basin intersects every open set on a positive measure subset .[ p.kanseveralmeasures ] for any , there is a diffeomorphism such that has mostly contracting center and physical measures such that for some and the basin is dense in , for every .moreover , the same remains true for any diffeomorphism in a -neighborhood which preserves the set for all .let be fixed and be a anosov diffeomorphism with fixed points , denoted as .our example will be a partially hyperbolic skew product map whose center foliation is the vertical foliation by spheres , .it is easy to see that , for any , for and in the same stable manifold of , let be the _stable holonomy _ , defined as the projection along strong stable leaves of .unstable holonomy _ be defined analogously , for and in the same unstable leaf of .assuming that is uniformly close to the identity in the topology , the partially hyperbolic map is _ center bunched _ ( see or ) , so that these holonomy maps are all diffeomorphisms ; moreover , they are close to the identity in the topology .in what follows we consider : the cases are easier .for the time being , take to be even ; the odd case will be treated at the end of the proof .let and be two smooth closed curves in intersecting transversely on exactly points , .take these points to be listed in cyclic order . thenconsider points , such that each lies in the circle segment between and ( with ) .for each , let be a morse - smale vector field on the sphere such that : 1 . ; 2 . is a sink , are saddles and are sources ; 3 . the basin of the attractor is the complement of segment connecting all the saddles and sources .figure [ f.flow ] illustrates the case and : then is just the segment of from to that does contain .morse - smale vector field on the sphere , width=192 ] analogously , consider points , such that each lies in the segment of between and .then let , , be a morse - smale vector field on the sphere satisfying ( i ) , ( ii ) and ( iii ) , with replaced by and replaced by a segment .let us consider a partially hyperbolic skew - product satisfying * are fixed points of for any . 
* time- map of and time- of , for some small .* is close to .condition ( 1 ) means that each , is an -invariant torus ; clearly , the restriction is an anosov map .it is also clear that the three conditions are compatible , as long as we choose in ( 2 ) sufficiently small .for example , we may take to be the identity map on for every outside small neighborhoods of , , and , , .then , we may modify these maps to make them contracting at each ( preserving the previous three conditions ) , so that * for , where denotes the ( unique ) gibbs -state of .this last condition implies that the center lyapunov exponents of every are negative , and so is an elementary set .[ l.skeleton ] the set is a skeleton .as a first step , we prove that every strong unstable leaf has a point of transverse intersection with the stable manifold of some .observe that .also , intersects the -unstable manifold of any point in transversely ( recall that is anosov ) .it follows that , for any , intersects transversely at some point .there are three possibilities : * ; * for some ; * for some . in case ( a )we are done .as for case ( b ) , we claim that it implies that has some transverse intersection .indeed , the hypothesis implies that the iterates accumulate on the unstable leaf of the fixed point .the latter is contained in the anosov torus , which also contains and its strong stable leaf .in fact , and are transverse inside .thus , it follows that the iterates accumulate on .since has stable index , we get that has some transverse intersection with for every large .taking pre - images , we get our claim . thus , in case ( b ) we are done as well .now , we consider case ( c ) . for large , is close to .let be a point of transverse intersection between and .then the consider the map as observed above , under our assumptions the map is close to the identity map in the second coordinate .so , in view of our conditions on and ( more specifically , the assumption that they meet at only , and they do so transversely ) , we have that .consequently , .this means that the strong unstable leaf has some transverse intersection with .then the same is true for if is large enough .now observe that and are homoclinically related , meaning that the unstable manifold of any point has some transverse intersection with the stable manifold of the other .so , the previous conclusion implies that has some transverse intersection with .this reduces the present situation to case ( a ) .thus , we have shown that is a pre - skeleton .next , notice that is contained in for every .since these tori are pairwise disjoint , and each one of them is fixed under , we have that is in the complement of for every .so , the points can have no heteroclinic intersections .this finishes the proof that is a skeleton .let us proceed with the proof of proposition [ p.kanseveralmeasures ] . applying proposition [ p.newcretetion ] to the elementary sets and the skeleton provided by lemma [ l.skeleton ] , we find that has mostly contracting center with physical measures such that for every .[ l.dense ] is dense in for every . 
by construction ,the stable manifold of for the flow is dense in the sphere ; recall figure [ f.flow ] .it follows that the stable manifold is dense in .moreover , the latter is dense in because it coincides with and the stable manifold is dense in .this proves the lemma .then , by theorem [ t.maina ] , the basin of each physical measure is dense in .this completes the proof of proposition [ p.kanseveralmeasures ] in what concerns the map .we are left to show that the conclusions extend to any diffeomorphism in a neighborhood which leaves every fixed .begin by observing that is close to and , in particular , it is anosov .it follows that admits a unique gibbs -state supported on ( the physical measure of that anosov diffeomorphism ) and that gibbs -state is close to .the latter ensures that the center lyapunov exponents remain negative , and so remains an elementary set for .each fixed point admits a continuation for . by lemma [ l.robustgraph ], these points form a pre - skeleton for .so , we are still in a position to use proposition [ p.newcretetion ] to conclude that has mostly contracting center and exactly physical measures , , with supported on for every .the proposition also states that is actually a skeleton for .we are left to prove that the basin of every is dense . by theorem [ t.maina ] , it suffices to show that the stable manifold of every is dense .the center foliation of coincides with the trivial fibration , which is normally hyperbolic and smooth .thus , by the stability theorem of hirsch , pugh , shub , the perturbation admits an invariant center foliation of whose leaves are spheres uniformly close to the trivial fibers .in particular , the center leaf through each point is close to .that implies that the restriction of to that center leaf is morse - smale and the stable manifold of is dense in it .so , the stable manifold of is dense in the stable manifold of .the stability theorem also says that there exists a homeomorphism of that maps the center leaves of to the center leaves of and which is a leaf conjugacy : then the stable manifold of is just the image under of the stable manifold of .that guarantees that the stable manifold of is dense in . in this way we have recovered all the ingredients we used for and so at this point our arguments extend to , as claimed .finally , to construct examples with an odd number of physical measures , it suffices to show that one can modify the diffeomorphism above , in such a way that the physical measures of the resulting diffeomorphism are precisely , , .let be a point of transverse intersection between and .let , be a smooth flow on such that 1 . is supported on a small neighborhood of ; 2 . preserves the center foliation ; 3 . for any , the map sends to some with .pick for any .condition ( 3 ) implies that for the unstable manifold of intersects the stable manifold of .so is a pre - skeleton for .conditions ( 1 ) and ( 2 ) ensure that the unstable manifold of each remains unperturbed and thus is still contained in , for .this ensures that the set in is actually a skeleton , and so has exactly physical measures .all the other stated properties are obtained just as in the previous case . 
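the sphere - fibered construction above does not reduce to a few lines of code , but the phenomenon it produces , several physical measures whose basins are all dense , can be illustrated with a much simpler kan - type skew product on the cylinder . the python sketch below is a toy in the spirit of kan's original example , not the construction of proposition [ p.kanseveralmeasures ] ; the tripling base , the coupling strength a = 0.5 and the coarse - cell test are illustrative choices .

import numpy as np

rng = np.random.default_rng(1)
a = 0.5                              # illustrative coupling strength, 0 < a < 1

def step(theta, y):
    # Kan-type skew product on the cylinder S^1 x [0,1]: expanding (tripling) base,
    # fiber diffeomorphisms of [0,1] fixing both boundary circles y = 0 and y = 1
    y = y + a * y * (1.0 - y) * np.cos(2.0 * np.pi * theta)
    return (3.0 * theta) % 1.0, np.clip(y, 0.0, 1.0)

n = 100_000
theta0 = rng.random(n)
y0 = rng.uniform(0.05, 0.95, n)
theta, y = theta0.copy(), y0.copy()
for _ in range(1000):
    theta, y = step(theta, y)
to_top = y > 0.5                      # which boundary circle the orbit settled near

# check whether orbits attracted to either boundary appear in every coarse cell
g = 10
cell = (np.floor(theta0 * g).astype(int) * g
        + np.minimum(np.floor((y0 - 0.05) / 0.9 * g).astype(int), g - 1))
mixed = sum(to_top[cell == c].any() and (~to_top)[cell == c].any()
            for c in range(g * g))
print(f"{to_top.mean():.2f} of orbits go to y = 1; "
      f"{mixed}/{g * g} coarse cells contain orbits of both kinds")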
in this subsection, we prove that the examples we have just constructed are statistically _ unstable _ : the simplex generated by all the physical measures does not vary continuously with the dynamics , as physical measures may collapse , with their basins of attraction exploding , after small perturbations of the diffeomorphism .in fact , we obtain two different instability results : * for any proper subset of physical measures , one can find a small perturbation of the original diffeomorphism for which those physical measures vanish : their basins are engulfed by the ones of the remaining physical measures .* for any gibbs- state of the original diffeomorphism ( not necessarily ergodic ) , one can find diffeomorphisms converging to , such that every each has a unique physical measure and the sequence converges to in the weak- * topology . in all that follows is a partially hyperbolic diffeomorphism with physical measures , as constructed in the previous section ( the constructions extend to arbitrary in a straightforward way ) .let us first describe our perturbation technique .it is designed to create new heteroclinic intersections , thus reducing the number of saddle points in the skeleton . for distinct ,let be a point of transverse intersection of and .consider a smooth flow on such that : 1 . is supported on a small neighborhood of ; 2 . preserves the center foliation of ; 3 . for any , the map sends to some with .we will always consider perturbations of the original of the form observe that , since , are away from the regions of perturbation . by lemma [ l.robustgraph ], is a pre - skeleton of .denote and and .[ l.perturbation ] the strong unstable leaf has some transverse intersection with , for every . let . by construction, the strong unstable leaf of for contains the point , which is the strong stable leaf of some point in .the latter is in the stable manifold of . clearly , the two manifolds intersect transversely at this point .we are ready to state and prove our first instability result : [ p.weakcollapse ] given any proper subset of the set of physical measures of , one can find arbitrarily close to such that the set of physical measures of is . first , suppose that , say , .consider with .the measures and are still ergodic gibbs- states and physical measures for , since coincides with on the neighborhood of their supports , and .moreover , the unstable manifolds of and are still contained in and , respectively , and so these points have no heteroclinic intersections . on the other hand , by lemma [ l.perturbation ] , .thus , is a skeleton of , by lemma [ l.existenceskeleton ] .so , by theorem [ t.maina ] , the diffeomorphism has exactly two physical measures , and .now suppose that , say , .consider with and .then , just as before , and ( for . then , by lemma [ l.existenceskeleton ] , is a skeleton of and , by theorem [ t.maina ] , the map has a unique physical measure , .the same arguments show that if are all positive then has a unique physical measure ( the points are all heteroclinically related ) , which need not be close to any of the physical measures of the original map .[ p.strongcollapse ] for each gibbs -state of there exists a sequence such that every has a unique physical measure and the sequence converges to as .notice that is an element of the simplex since every gibbs -state is a linear combination of the ergodic gibbs -states and , for , these are precisely the physical measures . 
clearly , it is no restriction to suppose that belongs to the interior of .let be any continuous affine map from the banach space of finite signed measures on to the affine plane generated by such that .existence of such a map follows from the hahn - banach theorem . for and ,consider the hexagon every triple has at least two positive coordinates .hence , by the same arguments as in the proof of proposition [ p.weakcollapse ] , the corresponding map has exactly one gibbs -state , which is also the unique physical measure .this defines a map with values in the space of probability measures on . by upper semi - continuity of the space of gibbs -states, is continuous on and the image is contained in a neighborhood of the simplex .let be the distance from to the boundary of .we claim that for each there exists such that the image of under is a topological simplex -close to in the space , in the following sense : * the two simplices have the same vertices and * every edge of is contained in the -neighborhood of the corresponding edge of .it follows that for every large the image is a topological simplex -close to in the plane . by a topological degree argument, it follows that contains : otherwise , it would be retractable to the boundary of , which is nonsense .this means that there exists such that .let the definition of implies that when for every .thus , converges to . by upper semi - continuity of the space of gibbs -states , every accumulation point of the sequence contained in . also , by construction , for every .since is continuous and its restriction to is injective , this implies that converges to .we are left to prove the claim above .let and be the boundary segments of , with contained in and contained in ( denote and ) . if then is the unique vanishing parameter and so .this means that for , which gives part ( i ) of the claim .it also follows that is a continuous curve from to . using upper semi - continuity once more, this curve must be contained in the -neighborhood of the space of gibbs -states of with , provided is small enough .to conclude , it suffices to observe that the latter is precisely the edge $ ] of .in this section , we prove theorem [ t.mainc ] . indeed , we prove the following somewhat more explicit fact : [ p.continuousbasin ] let and be a subset of the space of diffeomorphisms of with mostly contracting center such that every has exactly physical measures , , , .let be any sequence in converging to some .then , up to suitable numbering , let be a skeleton of with for each . as we have seen before, the continuations , of the saddle points constitute a skeleton for every in a small neighborhood relative ( because the number of physical measures remains the same ) .we begin by claiming that . for proving this claim, it suffices to consider the case when is ergodic for .notice that has mostly contracting center and is a pre - skeleton for .thus , by theorem [ t.maina ] , the support of contains some .the measure is still -invariant and -ergodic .then , since preserves absolute continuity along unstable manifolds , is still a gibbs -state for .since its support also contains , it follows from theorem [ t.maina ] that and coincide , as claimed .then the measure in is -invariant and , using once more the fact that preserves absolute continuity along unstable manifolds , it is a gibbs -state for , as we wanted to prove .[ l.ref2 ] there exists and for every large there exists a relative neighborhood of such that for every and any gibbs -state of . 
since has mostly contracting center , the largest center exponent is negative for every . since every gibbs -state is a convex combination of , , , it follows that there exist and such that for every gibbs -state of .now let be any gibbs -state for .it follows from and lemma [ l.ref1 ] that for any and , we have hence , denoting , combining this inequality with , we obtain as long as we take . by upper semi - continuity of the set of gibbs -states , it follows that for any gibbs -state of and any in a neighborhood of .fix to be a multiple of large enough that lemma [ l.ref2 ] is satisfied . for each , choose a small neighborhood of .fix small , such that the -neighborhood of inside its unstable manifold is contained in for every and every . for each and , define to be the subset of points such that let and be the lebesgue measure on the -disk .define for .then let be the set ( compact interval ) of values of over all gibbs -states of .by lemma [ l.ref2 ] , is contained in .then , for each fixed , the claim is contained in the conclusion of ( * ? ? ?* proposition 1 ) ( or ( * ? ? ? * theorem 1 ) , in the special case when there exists a unique gibbs -state ) for ) .moreover , the constants may be taken uniform over all in some neighborhood of : see ( * ? ? ?* section 7 ) for the case when there is a unique gibbs -state , and ( * ? ?* exercise 7 ) for the general case. for each large there exists such that for any the pesin stable manifold of every has uniform size ( meaning that it contains a -disk of radius around ) . indeed ,the uniform bound on the size of the stable manifold follows from the same arguments as ( * ? ? ?* lemma 3.7 ) , applied to the inverse of .moreover , these local pesin stable manifolds define a lamination which is absolutely continuous ( see ) : the corresponding holonomy maps between disks and transverse to the lamination are absolutely continuous , with jacobian given by for .in particular , for any there exists such that the jacobian is bounded above by , for any and any disks and in the -neighborhood of in the topology .let be an upper bound for the distortion of backward iterates of any along unstable disks : for any and any -disk of with radius .fix such that for every . by* theorem 11.16 ) , lebesgue almost every point in is in the basin of some gibbs -state .since is in the support of , by the definition of skeleton , and the supports are disjoint , we get that almost every point in is in .then the same is true for ( every point in the pesin stable manifold through ) almost every point in . by lemma [ l.ref3 ] ,we may fix such that the lebesgue measure of the complement of in is less than . in view of the previous observations , and the fact that the jacobian is bounded by , it follows that the lebesgue measure of the complement of in is less than for any in the -neighborhood of , as claimed .now we apply to the diffeomorphism the local markov construction in ( * ? ? ?* section 4.2 ) : for any small we may find a family of embedded -disks such that and , for any and , either given , fix as in lemma [ l.fixm ] from now on .then , take such that is in the -neighborhood of for every .denote by the union of the disks , . by (* theorem 11.16 ) , lebesgue almost every point in the unstable manifold of is in the basin of some gibbs -state . 
recalling that is in the support of , by the definition of skeleton , and the supports are disjoint, we get that almost every point in is in the basin .since the basin is saturated by stable sets , and the lamination is absolutely continuous , it follows that since the basin is an invariant set , this implies the inclusion in the statement .the converse is a corollary of ( * ? ? ?* proposition 6.9 ) .indeed , this proposition implies that contains a full lebesgue measure subset of every strong - unstable disk . by the absolute continuity of the strong unstable foliation, this implies that contains a full volume subset of the ambient manifold .since we already know that each is contained in the corresponding basin , and the basins are pairwise disjoint , it follows that up to measure zero .the proof is complete . by lemma [ l.fillin_forf ], we may fix such that in view of our choice of , we may find a neighborhood of such that is contained in some disk in the -neighborhood of for every and . reducing if necessary , and recalling, we may suppose that for any , any and any disk in the -neighborhood of .it is clear that converges to when .thus , recalling our choice of and further reducing if necessary , we may suppose that for any and any .let be a disk in the -neighborhood of and containing .by lemma [ l.fixm ] , and so , then , since the basin is a -invariant set , as claimed .define the _ return time _ of each to be the smallest such that intersects ( and thus is contained in ) some , .observe that is the pairwise disjoint union of the pre - images with and .for each one of these pre - images , lemma [ l.bounded_distortion_basin ] gives that so , by the cavalieri principle , this proves the claim . combining and, we find that since , for both and , the basins are pairwise disjoint and their union has total measure , up to measure zero , for every .thus , it also follows from that the relations and mean that for every , and so the argument is complete .k. burns , d. dolgopyat , and ya . pesin .partial hyperbolicity , lyapunov exponents and stable ergodicity . , 108:927942 , 2002 .dedicated to david ruelle and yasha sinai on the occasion of their 65th birthdays .
we show that every diffeomorphism with mostly contracting center direction exhibits a geometric - combinatorial structure , which we call _ skeleton _ , that determines the number , basins and supports of the physical measures . furthermore , the skeleton allows us to describe how the physical measures bifurcate as the diffeomorphism changes . in particular , we use this to construct examples with any given number of physical measures , with basins densely intermingled , and to analyse how these measures collapse into each other , through explosions of their basins , as the dynamics varies . this theory also allows us to prove that , in the absence of collapses , the basins are continuous functions of the diffeomorphism .
in recent years several attempts have been made to investigate the magnetic field vector distribution in the solar internetwork .initially , these works studied the magnetic field strength is in these regions .some favored magnetic fields of about a few hundred gauss or less ( asensio ramos et al .2007 ; lpez ariste et al . 2007; orozco surez et al .2007a , orozco surez & bellot rubio 2012 ) while others found magnetic fields in the kilo - gauss range ( domnguez cerdea et al .2003 , 2006 ; snchez almeida 2005 ) .these studies were carried out mostly with low spatial resolution data ( 1 `` ) .whenever the spatial resolution increased to better than 1 arcsec , this decreased the signal - to - noise ratio . with the hinode satellite ( kosugi et al .2007 ) it is now possible to obtain spectropolarimetric data ( full stokes vector ) with high spatial resolution ( 0.3 '' ) and low noise ( in units of the continuum intensity ) .thanks to these new data , it is now also possible to investigate not only the module but the three components of the magnetic field vector .this has led to a new controversy about the angular distribution of the magnetic field vector in the quiet sun . while some authors ( orozco surez et al .2007a , 2007b ; lites et al . 2007 , 2008 ) found that the magnetic field is mostly horizontal ( ; with being the inclination of the magnetic field vector with respect to the observer s line - of - sight ) , others favor a quasi - isotropic distribution of magnetic fields ( martnez gonzlez et al .2008 ; asensio ramos 2009 ; stenflo 2010 ) . with a few exceptions ( harvey et al .2007 , lites et al . 2008 andmartnez gonzlez et al . 2008 ) , all previous studies were carried out employing data recorded at disk center only .therefore , to better constrain the angular distribution of the magnetic field vector in the internetwork , we considered spectropolarimetric data recorded at different positions on the solar disk ( section [ section : observations ] ) .+ in addition , asensio ramos ( 2009 ) , stenflo ( 2010 ) , and borrero & kobel ( 2011 ; hereafter referred to as paper i ) warned that the highly inclined magnetic fields obtained by some studies could be caused by the noise in the linear polarization profiles .this yields a distribution of ( component of the magnetic field vector that is perpendicular to the observer s line - of - sight ) with a peak at around 50 - 90 gauss . to avoid this problem, these authors proposed to include only those profiles in the analysis that have a signal - to - noise ratio in the linear polarization ( stokes and ) .although this selection criterion allows one to retrieve reliable distributions for the magnetic field vector , it has the disadvantage of excluding most of the stokes profiles within the field - of - view from the analysis ( see borrero & kobel 2012 ; hereafter referred to as paper ii ; cf .bellot rubio & orozco surez 2012 ) . in this paperwe adopt an alternative approach based on inverting the histograms of the observed stokes vector ( section [ section : pdftheory ] ) over the entire field - of - view instead of inverting the stokes vector at each pixel over the observed region . 
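the selection criterion mentioned above , keeping only profiles whose linear polarization clearly exceeds the noise , amounts to a simple mask on the stokes q and u maps . the python sketch below is not the exact criterion of the cited papers : the threshold factor , the use of the maximum over wavelength and the illustrative noise level are assumptions on our part .

import numpy as np

def linear_pol_mask(stokes_q, stokes_u, sigma, k=4.5):
    # keep pixels whose Stokes Q or U amplitude exceeds k*sigma somewhere along
    # the spectral axis; arrays have shape (ny, nx, nlambda), sigma is the noise
    # level in units of the continuum intensity, and k is an illustrative choice
    amp = np.maximum(np.abs(stokes_q), np.abs(stokes_u)).max(axis=-1)
    return amp > k * sigma

# toy usage with pure-noise profiles: only a small fraction survives the cut
rng = np.random.default_rng(2)
q = rng.normal(0.0, 1.2e-3, size=(64, 64, 112))   # placeholder noise level
u = rng.normal(0.0, 1.2e-3, size=(64, 64, 112))
mask = linear_pol_mask(q, u, sigma=1.2e-3)
print("fraction of pixels retained:", float(mask.mean()))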
under a number of simplifying assumptions ,whose limitations are described in section [ section : limitations ] , we were able to reach some important , albeit preliminary , conclusions about the angular distribution of the magnetic field vector in the solar internetwork and its variation across the solar disk ( section [ section : conclu ] ) .the data employed in this work were recorded with the spectropolarimeter ( sp ; ichimoto et al .2008 ) attached to the solar optical telescope ( sot ; tsuneta et al .2008 , suematsu et al .2008 , shimuzu et al .2008 ) onboard the japanese spacecraft hinode ( kosugi et al . 2007 ) .the spectropolarimetric data comprise the full stokes vector around the pair of magnetically sensitive spectral lines 6301.5 ( ) and 6302.5 ( ) . refers to the effective land factor calculated under ls coupling .the spectral resolution of these observations is about 21.5 m per pixel , with 112 pixels in the spectral direction .the spatial resolution of hinode / sp observations is 0.32 " . for this paper we selected three maps at three different heliocentric positions . in all three maps the spectrograph s slitwas kept at the same location on the solar surface for the whole duration of the scan .this means that , while the vertical direction ( -axis or direction along the slit ) contains information about different spatial structures on the solar surface , the horizontal direction ( -axis or direction perpendicular to the spectrograph s slit ) samples the same position at different times .each spectrum was recorded with a 9.6 seconds exposure , yielding a noise of about in units of the quiet - sun continuum intensity .each map records data for a period of time ( hr ) that includes several turnovers of the granulation , thus breaking down the temporal coherence and providing spatial information ( in a statistical sense ) along the -axis .+ in paper i we have demonstrated that photon noise plays an important role in determining the magnetic field vector from spectropolarimetric observations . to further decrease the level of noise in our observations we averaged every seven slit positions ( temporal average of about 67.1 seconds ) , which yields a new noise level of about ( in units of the quiet - sun continuum intensity ) .however , averaging means that the original map is shortened by a factor of seven in the direction that is perpendicular to the slit ( -axis ) .this decreases the number of points available for statistics .fortunately , hinode / sp data have a sufficient number of pixels to ensure good statistics even after averaging ( see section [ section : conclu ] ) . 
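The gain from the temporal averaging described above is the usual reduction of uncorrelated photon noise by the square root of the number of averaged exposures (about 2.6 for groups of seven). A minimal numpy sketch of the bookkeeping; the array shape and the noise value are illustrative assumptions, not the actual map dimensions or noise figures:

```python
import numpy as np

def temporal_average(stokes, n_avg=7):
    """Average every n_avg consecutive slit positions of a Stokes map.

    stokes : array of shape (n_slits, n_y, n_lambda); trailing slit
    positions that do not fill a complete group of n_avg are discarded.
    """
    n_keep = (stokes.shape[0] // n_avg) * n_avg
    grouped = stokes[:n_keep].reshape(-1, n_avg, *stokes.shape[1:])
    return grouped.mean(axis=1)

# illustrative check: uncorrelated noise drops by ~sqrt(7) ~ 2.6
rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.2e-3, size=(70, 64, 112))  # sigma in units of I_c (assumed value)
print(noisy.std(), temporal_average(noisy).std())
```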
in the followingwe briefly describe each map individually .this map was recorded on february 27 , 2007 between 00:20 ut and 02:20 ut .it originally consists of 727 slits positions , of which 103 remain after temporal averaging .the center of slit was located at approximately the following coordinates on the solar surface : and .this corresponds to a heliocentric position of ( is the heliocentric angle ) and to a latitude of .the noise level is .this map ( original and temporally averaged ) corresponds to maps b and c in paper i , and it was also employed ( with and without temporal averaging ) by lites et al .( 2008 ) and orozco surez et al .( 2007a ) .this map was recorded on february 6 , 2007 between 11:33 ut and 15:51 ut .it originally consists of 1545 slits positions , of which 222 remain after temporal averaging .the center of slit was located at approximately the following coordinates on the solar surface : and .this corresponds to a heliocentric position of and to a latitude of .the noise in this map is very similar to that in map b : .+ this map was recorded on january 17 , 2007 between 07:05 ut and 09:58 ut .it originally consists of 1048 slits positions , of which 149 remain after temporally averaging .the center of slit was located at approximately the following coordinates on the solar surface : and .this corresponds to a heliocentric position of and to a latitude of . here , the noise level is slightly higher than in map a : .we note that some consecutive slit positions in this map show very high noise in the stokes profiles .although we could not relate this effect to the south atlantic anomaly ( increased flux of cosmic rays at certain orbits of the satellite ) we have removed these slit positions from our analysis , which reduced the effective number of slit positions to 120 . + from the inversion of map a ( sect . [ subsection : mapa ] ) .white areas correspond to regions where all three polarization profiles ( stokes , , and ) are below the -level.,width=340 ] but for map b ( sect . [subsection : mapb]).,width=340 ] but for map c ( sect .[ subsection : mapc]).,width=340 ] figures [ figure : invmapa ] , [ figure : invmapb ] , and [ figure : invmapc ] display the magnetic flux density of maps a , b , and c as obtained through the inversion of the full stokes vector with the vfisv ( very fast inversion of the stokes vector ) inversion code ( borrero et al .2010 ) . for better visualizationthe maps in these figures are obtained from the inversion of the original data ( i.e. not temporally averaged ) .this avoids pixelization in the -axis of these plots .however , for the remainder of the paper , our discussions and figures are based only on the temporally averaged ( 67.1 seconds ) data .+ although these previous figures only show the total magnetic flux density , it is worth mentioning that the vfisv code also retrieves the three components of magnetic field vector : is the module of , is the inclination of with respect to the observer s line - of - sight , and is the azimuth of in the plane that is perpendicular to the observer s line - of - sight . in addition , vfisv retrieves the magnetic filling factor as well as the line - of - sight component of the velocity vector and a set of thermodynamic parameters .we note that the magnetic flux density is defined as . 
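For readers who want to reproduce the bookkeeping, converting the (module, inclination, azimuth) output of a Milne-Eddington inversion into Cartesian components in the observer's frame is a plain spherical-to-Cartesian transformation. The sketch below is generic and is not tied to the sign or azimuth conventions used by VFISV:

```python
import numpy as np

def field_vector_los_frame(b_mod, gamma, phi):
    """Cartesian components of the magnetic field in the observer's frame.

    b_mod : field module (gauss)
    gamma : inclination with respect to the line of sight (radians)
    phi   : azimuth in the plane perpendicular to the line of sight (radians)
    Returns (b_los, b_x, b_y), with b_x and b_y the transverse components.
    """
    b_los = b_mod * np.cos(gamma)
    b_perp = b_mod * np.sin(gamma)
    return b_los, b_perp * np.cos(phi), b_perp * np.sin(phi)

# e.g. a 100 G field inclined 60 degrees to the line of sight
print(field_vector_los_frame(100.0, np.deg2rad(60.0), np.deg2rad(30.0)))
```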
for a more detailed overview on milne - eddington inversion codes , which include not only the magnetic field vector but also the thermodynamic and kinematic parameters relevant to the line formation, we refer the reader to del toro iniesta ( 2003 ) , borrero et al .( 2010 ) and references therein .the inversions carried out in the previous section could be employed to obtain histograms of the magnetic flux density , module of the magnetic field vector , and the inclination of the magnetic field vector with respect to the observer s line - of - sight ( ) at different positions on the solar disk .however , in this paper we aimed to infer properties about the distribution of the magnetic field vector by directly studying the histograms of the stokes profiles .figure [ figure : stokhistogram]a presents distribution histograms of the maximum signals of the stokes ( dashed lines ) and stokes and ( solid lines ) normalized to the average quiet - sun intensity over the entire map : .the colors indicate each of the different maps studied : red for map a ( sect .[ subsection : mapa ] ) , green for map b ( sect . [ subsection : mapb ] ) , and blue for map c ( sect . [ subsection : mapc ] ) . figure[ figure : stokhistogram]b displays the cumulative histogram of the pixels in each map that have a ( signal - to - noise ratio ) equal to or higher than a given value .the colors and the line - styles are as in figure [ figure : stokhistogram]a . for instance : 31.6 % of the pixels in map a posses signals in or ( solid - red line ) that are above 4.5 times the noise level. to limit our analysis to the internetwork regions , we excluded from these figures the pixels in maps a , b , and c with a magnetic flux density mx . + [ cols="^,^ " , ]in the previous subsections we have focused on the effect that the probability distribution function of the magnetic field vector has on the observed histograms of the stokes profiles at different positions on the solar disk . in our theoretical analysis a number of simplifying assumptions were made to keep the problem tractable .although they have already been pointed out in section [ section : pdftheory ] , we summarize them here to briefly discuss their implications .+ * we have assumed that the thermodynamic and magnetic parameters are statistically independent of each other .this allowed us to write the total probability distribution function in eq .[ equation : pdftot ] as the product of two distinct probability distribution functions .however , as dictated by the lorentz - force term in the momentum equation in magnetohydrodynamics , the magnetic field affects the thermodynamic structure of the solar atmosphere .it is therefore clear that this assumption does not fully hold in the solar atmosphere .for instance , if we take the commonly accepted picture of intergranular lanes harboring more vertical and stronger magnetic fields than the granular cells , and we consider that intergranular cells have a smoother variation of the temperature with optical depth ( see i.e. fig . 3 in borrero & bellot rubio 2002 ) , we could then have postulated a correlation between the magnetic field vector and the gradient of the source function with optical depth , which is contained in ( eq . [ equation : x ] ) .indeed , the higher is , the stronger will be the polarization profiles , , and ( see eq . 
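The quantities behind these histograms are straightforward to compute from a calibrated Stokes map. A hedged sketch, with the array layout and noise value assumed for illustration and the internetwork flux-density cut described above omitted:

```python
import numpy as np

def cumulative_snr_fraction(stokes_q, stokes_u, i_cont_qs, sigma, thresholds):
    """Fraction of pixels whose peak linear-polarization signal reaches a
    given multiple of the noise level.

    stokes_q, stokes_u : arrays of shape (n_pix, n_lambda), same units as i_cont_qs
    i_cont_qs          : average quiet-sun continuum intensity (scalar)
    sigma              : noise level in units of i_cont_qs
    thresholds         : iterable of signal-to-noise values
    """
    peak_lin = np.maximum(np.abs(stokes_q), np.abs(stokes_u)).max(axis=1) / i_cont_qs
    return {t: float(np.mean(peak_lin >= t * sigma)) for t in thresholds}

# with synthetic pure-noise data standing in for an observed map; note that
# noise alone often exceeds 2-3 sigma somewhere along the wavelength axis,
# which motivates conservative peak-signal cuts such as 4.5 sigma
rng = np.random.default_rng(0)
q = rng.normal(0.0, 4.5e-4, size=(1000, 112))
u = rng.normal(0.0, 4.5e-4, size=(1000, 112))
print(cumulative_snr_fraction(q, u, 1.0, 4.5e-4, (2.0, 3.0, 4.5)))
```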
9.45 in del toro iniesta 2003 ) .these correlations could potentially cause the observed histograms of the stokes profiles ( figure [ figure : stokhistogram ] ) to vary with the heliocentric angle , even if the underlying distribution of the magnetic field vector does not depend on .therefore it is important to investigate what an effect they have before conclusively proving that the distribution of the magnetic field vector is not isotropic ( sect .[ subsection : iso ] ) , or that the differences in the observed histograms of the stokes profiles are not due to the viewing angle ( sect . [subsection : triple ] ) .unfortunately , the aforementioned correlations are not known for the solar internetwork simply because it is not clear how magnetic fields are distributed here . in the futurewe will explore this question by employing 3d numerical simulations of the solar atmosphere , because they provide correlations between and that are compatible with the mhd equations .+ * we have also assumed that the probability distribution function of the thermodynamic and kinematic parameters , , does not depend on the position on the solar disk .kinematic parameters ( i.e. line - of - sight velocity ) do not influence our study , since they have no effect on the amplitude of the stokes profiles in fig .[ figure : stokhistogram ] .the same can be argued about other thermodynamic parameters in , such as the source function at the observer s ( affects only stokes ) , and damping parameter ( affects mostly the line width but not its amplitude ) . by far , the most important thermodynamic parameters affecting the amplitude of the stokes profiles under the milne - eddington approximation are the gradient of the source function with optical depth and the continuum - to - line - center absorption coefficient . in a 1d atmosphere boththese parameters are known to decrease as increases , because the line - of - sight samples a thinner vertical - portion of the atmosphere . however , since the dependence of the polarization profiles with and are identical ( see eqs .8.14 , 8.15 and 9.44 in del toro iniesta 2003 ) , one would expect the same drop with increasing heliocentric angle angle in the amplitude of the circular polarization profiles ( stokes ) and linear polarization profiles ( stokes and ) .however , figure [ figure : stokhistogram ] shows that the linear and circular polarization profiles ( solid - color and dashed - color lines ) behave differently , and therefore we can rule out the variations of and/or with as being responsible for the observed histograms in the stokes profiles . of course , this would change in a 3d atmosphere , where the line - of - sight pierces through different inhomogeneous atmospheric layers , thereby opening the door for the possibility of and or to affect the linear and circular polarization profiles differently , instead of and . ] . + * adopting a milne - eddington atmosphere also implies that we are assuming that the magnetic field vector does not vary with optical depth in the photosphere .this can have important consequences , since at larger heliocentric angles the spectral line samples higher atmospheric layers than at disk center , where the probability distribution function of the magnetic field vector can be different .employing the widely used 1d holmu model ( holweger & mller 1974 ) we calculated that the continuum level rises by approximately km from disk center ( map a ; sect .[ subsection : mapa ] ) to ( maps b and c ; sects . 
[subsection : mapb ] and [ subsection : mapc ] ) .since this vertical shift of the continuum level is rather small , we could argue that the histograms of the stokes profiles in fig .[ figure : stokhistogram ] are not affected by this effect .however , the value of 20 km should be considered only as a lower limit since a 1d model does not take into account the horizontal inhomogeneities present in the solar atmosphere . to properly account for this effect, more sophisticated 3d models should be employed .+ * finally , we have considered in our analysis ( see sect .[ section : pdftheory ] ) .this is equivalent to considering that , at the resolution of the hinode / sp instrument ( 0.32 " ; sect .[ section : observations ] ) , the magnetic structures are spatially resolved .this is , of course , highly unlikely , and therefore it would be important to drop this assumption in the future .its importance can only be quantified with additional assumptions about the scale - distribution of the magnetic structures in the solar photosphere .this topic is , in itself , as controversial as the distribution of the magnetic field strength and inclination , which is the reason why we have refrained from addressing it here .although employing 3d mhd simulations would certainly help to drop the assumption , we are cautious about it since it is not clear whether these simulations are reliable at the smallest physical scales ( snchez almeida 2006 ) .the histograms of the observed stokes profiles at different positions on the solar disk ( fig .[ figure : stokhistogram ] ) are clearly different from each other .one possible interpretation for this is that the distribution of the magnetic field vector in the solar internetwork is not isotropic .we explored this possibility in section [ subsection : iso ] , where we employed an isotropic probability distribution of the magnetic field vector .this distribution yielded , as expected , the same distribution of stokes profiles at all positions on the solar disk ( fig .[ figure : isotropic ] ) .martnez gonzlez et al . (2008 ) have also presented similar histograms but employing the stokes profiles from the fe i line pair at 1.56 m ( observed with the tip2 instrument ; martnez pillet et al .their histograms ( see their figure 2 ) showed no clear variation with the heliocentric angle , which lead them to conclude that the distribution of the magnetic field vector in the quiet sun was isotropic .interestingly , these authors also mentioned after a more detailed analysis that there could indeed be a dependence of the histograms with the heliocentric angle ( as indeed we find here ) .+ in addition to martnez gonzlez et al .( 2008 ) , a number of works have also argued in favor of an isotropic distribution of magnetic fields in the internetwork . in particular ,asensio ramos ( 2009 ) and stenflo ( 2010 ) , employing two different approaches , both concluded that for very weak magnetic fields ( ) the distribution becomes isotropic . with our present datawe can not argue against or in favor of this interpretation .the main reason for this is that , as discussed in section [ section : clvobs ] , any distribution for the magnetic field vector where has a peak below 40 - 70 will produce linear polarization profiles that are dominated by noise ( -level or ). 
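The isotropy argument can be checked numerically: a distribution whose direction is uniform on the sphere produces the same inclination histogram from any viewing angle, so persistent center-to-limb differences are the telltale sign of anisotropy. A minimal sketch of the orientation part only (the field-strength distribution is deliberately left out, and the tilt is a simple rotation of the line of sight into the local x-z plane):

```python
import numpy as np

def sample_isotropic_vectors(n, rng=None):
    """Unit field-vector directions drawn uniformly on the sphere:
    cos(gamma) uniform in [-1, 1], azimuth uniform in [0, 2*pi)."""
    rng = rng or np.random.default_rng()
    cos_g = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_g = np.sqrt(1.0 - cos_g ** 2)
    return sin_g * np.cos(phi), sin_g * np.sin(phi), cos_g   # (b_x, b_y, b_z)

def los_inclination_deg(bx, by, bz, theta):
    """Inclination w.r.t. the line of sight after tilting the local frame by
    the heliocentric angle theta (line of sight in the local x-z plane)."""
    b_los = bz * np.cos(theta) + bx * np.sin(theta)
    b_mod = np.sqrt(bx ** 2 + by ** 2 + bz ** 2)
    return np.degrees(np.arccos(np.clip(b_los / b_mod, -1.0, 1.0)))

rng = np.random.default_rng(1)
bx, by, bz = sample_isotropic_vectors(200_000, rng=rng)
for theta_deg in (0.0, 53.0):               # disk center vs. roughly mu = 0.6
    gamma = los_inclination_deg(bx, by, bz, np.radians(theta_deg))
    hist, _ = np.histogram(gamma, bins=18, range=(0.0, 180.0), density=True)
    print(theta_deg, np.round(hist[:4], 4))  # histograms agree within sampling noise
```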
therefore our current approach ( described in section [ section : pdftheory ] ) can not be employed to discern the underlying distribution of the magnetic field vector from these profiles dominated by noise .however , it can be employed to establish that the number of pixels that would follow this hypothetically isotropic distribution can not be much larger than 30 % of the pixels in the internetwork , since this is the amount of pixels that show a peak at the -level in the polarization profiles ( see fig .[ figure : stokhistogram]a ) . for signals above ,the histograms of the stokes profiles deviate significantly from the ones predicted by an isotropic distribution , and thus we can establish that here the distribution of the magnetic field vector can not be isotropic .+ we can use a different argument to further clarify the previous point .our theoretical distributions in section [ section : pdftheory ] apply to all possible values of the module of the magnetic field vector .however , we could have employed distributions pieced together in the following form : where could hypothetically correspond to an isotropic distribution for weak fields : .this would explain the -peak in the linear polarization in figure [ figure : stokhistogram]a ( dashed lines ) .in addition , could be a distribution , valid for larger fields , that would fit the tails of the histogram .the distribution given by equation [ equation : piecepdf ] does not need to be discontinuous because it could be prescribed such that . + in section [subsection : triple ] we employed a triple gaussian ( one for each component of the magnetic field vector ) distribution function and found that , under this assumption and at disk center , the best fit to the observed histograms of the stokes profiles is produced by a distribution in which the mean value of the magnetic field vector component that is parallel to the solar surface is lower than the mean value of the magnetic field vector component that is perpendicular to the solar surface : .this yields a distribution function where the magnetic field vector is highly inclined , in agreement with previous findings from orozco et al .( 2007a , 2007b ) and lites et al .( 2007 , 2008 ) .however , this distribution does not fit well the histograms of the stokes profiles at other positions on the solar disk .in fact , in that section we found that it is not possible to fit the observed histograms for the stokes profiles at different heliocentric angles employing a theoretical distribution function for the magnetic field vector prescribed in the local reference frame that only changes due to the viewing angle .the reason for this is that , for an underlying distribution where the magnetic field vector is mostly horizontal at ( disk center ) , the amount of linear polarization slightly decreases when increases , while the amount of circular polarization would significantly increase as increases .however , the observed histograms of the stokes profiles ( fig .[ figure : stokhistogram ] ) show that , although the amount of linear polarization decreases when increases , the circular polarization does not particularly increase ( see also discussion in lites et al .this can not be explained in terms of a simple rotation of the viewing angle , and therefore we interpreted this fact , in section [ subsection : other ] , as proof that the underlying ( i.e. 
in the local reference frame ) distribution of the magnetic field vector must depend on the position on the solar disk .+ under the assumption that the distribution of the underlying magnetic field vector depends on the latitude ( see sect . [ subsection : other ] ) , we were able to find a theoretical distribution of the magnetic field vector ( eq . [ equation : othercartesian ] ) that fits quite well the observed histograms of the stokes profiles at different positions on the solar disk ( figure [ figure : other ] ) . among other properties ,this distribution features a magnetic field whose mean value decreases toward the poles . we note here that this does not mean that this is the real distribution for the magnetic field vector present in the quiet sun .one reason for this is that the fit is far from perfect ( see discrepancies mentioned in sect .[ subsection : other ] ) , but most importantly , that we do not know whether this solution is unique because there can be other theoretical distributions that fit the observed stokes profiles equally well , or even better .more work is indeed needed to confirm or rule out eq .[ equation : othercartesian ] as the real distribution of the magnetic field vector present in the sun .in particular , a better fit to the observed histograms of the stokes profiles is desirable .in addition , it is important to have maps at more latitudes to further constrain the possible distribution functions . + moreover , it is important to bear in mind that the conclusions above are not necessarily the only possible interpretations , because postulating a probability distribution function of the thermodynamic and kinematic parameters , , that varies with the heliocentric angle , or postulating a correlation between the thermodynamic ( ) and magnetic ( ) parameters might also help explain the observed differences between the histograms of the stokes profiles at different positions on the solar disk .another effect that has not been accounted for is that magnetic field vector can vary with optical depth in the solar photosphere . since the stokes profiles sampleincreasingly higher atmospheric layers as the heliocentric angle increases , the distribution of the magnetic field vector can be different for different values of , even if the probability distribution of the magnetic field vector is the same at all positions on the solar disk at a fixed geometrical depth .all these effects could be properly accounted for by means of 3d mhd simulations of the solar photosphere ( schssler & vgler 2008 ; steiner et al .2008 , 2009 ; danilovic et al .2010 ) . + in the futurewe expect to employ such simulations to either rule out or confirm our results in this paper .consequently , our conclusions at this point should be regarded as preliminary only . instead, the main purpose in this paper is to illustrate the methodology detailed in sections [ section : clvobs ] and [ section : pdftheory ] to study the distribution of the magnetic field vector in the quiet sun , by directly inverting the histograms of the stokes profiles in entire maps instead of inverting the stokes profiles at each spatial position in a given map .our method has great potential to investigate several aspects of the photospheric magnetism in the solar internetwork .for instance , it can be used , as in sect . 
[subsection : other ] , to confirm whether the mean value of the distribution of the magnetic field vector changes from disk center toward the poles ( cf .zwang 1987 ; ito et al .this will have important consequences for theoretical models that explain the torsional oscillations in the butterfly diagram in terms of a geostrophic flow model ( spruit 2003 ) , which requires a significant amount of magnetic flux at high latitudes at the beginning of the sunspot cycle .in addition , and although in this work we have restricted ourselves to variations in latitude ( ) , additional observations from disk center toward the solar limbs could be employed to investigate whether the properties of the magnetic field in the internetwork change also in longitude .this is already predicted by non - axisymmetric dynamo models ( moss 1991 , moss et al .1999 , bigazzi & ruzmaikin 2004 , charbonneau 2005 ) and can provide important clues about the strength of the differential rotation ( rdiger & elstner 1994 ; zhang et al .+ we would like to thank luis bellot rubio , mariam martnez gonzlez and oskar steiner for fruitful discussions in the subject .many thanks also to an anonymous referee , who pointed out an error in the probability distribution function of section [ subsection : triple ] in an early version of this manuscript .this work analyzes data from the hinode spacecraft .hinode is a japanese mission developed and launched by isas / jaxa , collaborating with naoj as a domestic partner , nasa and stfc ( uk ) as international partners .scientific operation of the hinode mission is conducted by the hinode science team organized at isas / jaxa .this team mainly consists of scientists from institutes in the partner countries .support for the post - launch operation is provided by jaxa and naoj ( japan ) , stfc ( u.k . ) , nasa , esa , and nsc ( norway ) .this work has also made use of the nasa ads database .asensio ramos , a. , martnez gonzlez , m.j ., lpez ariste , a. , trujillo bueno , j. & collados , m. 2007 , , 659 , 829 asensio ramos , a. 2009 , , 701 , 1032 bellot rubio , l.r . &orozco surez , d. 2012 , , 757 , 19 bigazzi , a. , ruzmaikin , a. 2004 , , 604 , 944 borrero , j.m . &bellot rubio , l.r .2002 , a&a , 385 , 1056 borrero , j.m . , tomczyk , s. , kubo , m. et al .2011 , solar physics , 273 , 267 borrero , j.m . & kobel , p. 2011 , a&a , 527 , 29 , paper i borrero , j.m . & kobel , p. 2012 , a&a , 547 , 89 , paper ii charbonneau , p. 2005 , living .solar phys . , 2 .( cited on 2010 ) danilovic , s. , schssler , m. & solanki , s.k .2010 , a&a , 513 , 1 domnguez cerdea , i. , snchez almeida , j. & kneer , f. 2003 , , 582 , 55 domnguez cerdea , i. , snchez almeida , j. & kneer , f. 2006 , , 636 , 496 harvey , j.w . , bratson , d. , henney , c.j & keller , c.u 2007 , , 659 , l177 henney , c.j . & harvey , j.w .2002 , , 207 , 199 holweger , h. & mller , e.a . 1974 , , 39 , 19 ichimoto , k. , lites , b.w . ,elmore , d. et al . 2008 , solar physics , 249 , 233 ito , h. , tsuneta , s. , shiota , d. , tokumaru , m. & fukiri , k. 2010 , , 719 , 131 kosugi , t. , matsuzaki , k. , sakao , t. 2007 , solar physics , 243 , 3 leva , j.l .1992 , acm transactions of mathematical software , volume 18 , number 4 , p 454 - 455 lites , b.w ., socas - navarro , h. , kubo , m. et al .2007 , pasj , 59 , 571 lites , b.w . ,kubo , m. , socas - navarro , h. et al .2008 , , 672 , 1237 lpez ariste , a. , martnez gonzlez , m.j & ramrez vlez , j.c .2007 , a&a , 464 , 351 martnez gonzlez , m. , asensio ramos , a. 
, lpez ariste , a. & manso - sainz , r. 2008 , a&a , 479 , 229 martnez pillet , v. , collados , m. , snchez almeida , j. et al . 1999 ,aspc , 183 , 264 moss , d. , brandenburg , a. & tuominen , i. 1991 , a&a , 347 , 576 moss , d. 1999 , mnras , 306 , 300 orozco surez , d. , bellot rubio , l.r . , del toro iniesta , j.c .2007a , , 670 , l61 orozco surez , d. , bellot rubio , l.r ., del toro iniesta , j.c .et al . 2007b , pasj , 59 , 837 orozco surez , d. & bellot rubio , l.r .2012 , , 751 , 1 rdiger , g & elstner , d. 1994 , a&a , 281 , 46 ruiz cobo , b. & del toro iniesta , j.c .1992 , , 398 , 375 snchez almeida , j. 2005 , a&a , 438 , 727 snchez almeida , j. 2006 , a&a , 450 , 1199 snchez almeida , j. & martnez gonzlez , m.j .2011 in : proceedings of the solar polarization workshop 6 .eds : j. r. kuhn , d. m. harrington , h. lin , s. v. berdyugina , j. trujillo - bueno , s. l. keil , and t. rimmele .san francisco : astronomical society of the pacific , 2011 ., p.451 schssler , m. & vgler , a. 2008 , a&a , 481 , l5 steiner , o. , rezaei , r. , schaffenberger , w. & wedemeyer - bhm , s. 2008 , , 680 , l85 steiner , o. , rezaei , r. & schlichenmaier , r. 2009 in : proceedings for the second hinode science meeting .eds : b. lites , m. cheung , t. nagara , j. mariska and k. reeves .vol 415 , 67 .shimizu , t. , nagata , s. , tsuneta , s. et al .2008 , solar physics , 249 , 221 spruit , h. 2003 , , 213 , 1 stenflo , j.o .2010 , a&a , 517 , 37 suematsu , y. , tsuneta , s. , ichimoto , k. et al .2008 , solar physics , 249 , 197 del toro iniesta , j.c .introduction to spectropolarimetry .cambridge , uk : cambridge university press , april 2003 .isbn : 0521818273 tsuneta , s. , suetmatsu , y. , ichimoto , k. et al .2008 , solar physics , 249 , 167 zhang , k. , chan , k.h . ,zou , j. , liao , x. & schubert , g. 2003 , , 596 , 663 zwang , c. 1987 , ann .astrophys . , 25 , 83
recent investigations of the magnetic field vector properties in the solar internetwork have provided diverging results . while some works found that the internetwork is mostly pervaded by horizontal magnetic fields , other works argued in favor of an isotropic distribution of the magnetic field vector . motivated by these seemingly contradictory results and by the fact that most of these works have employed spectropolarimetric data at disk center only , we have revisited this problem employing high - quality data ( noise level in units of the quiet - sun intensity ) at different latitudes recorded with the hinode / sp instrument . instead of applying traditional inversion codes of the radiative transfer equation to retrieve the magnetic field vector at each spatial point on the solar surface and studying the resulting distribution of the magnetic field vector , we surmised a theoretical distribution function of the magnetic field vector and used it to obtain the theoretical histograms of the stokes profiles . these histograms were then compared to the observed ones . any mismatch between them was ascribed to the theoretical distribution of the magnetic field vector , which was subsequently modified to produce a better fit to the observed histograms . with this method we find that stokes profiles with signals above ( in units of the continuum intensity ) can not be explained by an isotropic distribution of the magnetic field vector . we also find that the differences between the histograms of the stokes profiles observed at different latitudes can not be explained in terms of line - of - sight effects . however , they can be explained by a distribution of the magnetic field vector that inherently varies with latitude . we note that these results are based on a series of assumptions that , although briefly discussed in this paper , need to be considered in more detail in the future .
a key challenge in artificial intelligence is how to effectively learn to make a sequence of good decisions in stochastic , unknown environments .reinforcement learning ( rl ) is a subfield specifically focused on how agents can learn to make good decisions given feedback in the form of a reward signal . in many important applications such as robotics , education , and healthcare, the agent can not directly observe the state of the environment responsible for generating the reward signal , and instead only receives incomplete or noisy observations .one important measure of an rl algorithm is its sample efficiency : how much data / experience is needed to compute a good policy and act well .one way to measure sample complexity is given by the probably approximately correct framework ; an rl algorithm is said to be pac if with high probability , it selects a near - optimal action on all but a number of steps ( the sample complexity ) which is a polynomial function of the problem parameters .there has been substantial progress on pac rl for the fully observable setting , but to our knowledge there exists no published work on pac rl algorithms for partially observable settings .this lack of work on pac partially observable rl is perhaps because of the additional challenge introduced by the partial observability of the environment . in fully observable settings ,the world is often assumed to behave as a markov decision process ( mdp ) .an elegant approach for proving that a rl algorithm for mdps is pac is to compute finite sample error bounds on the mdp parameters .however , because the states of a partially observable mdp ( pomdp ) are hidden , the naive approach of directly treating the pomdp as a history - based mdp yields a state space that grows exponentially with the horizon , rather than polynomial in all pomdp parameters .on the other hand , there has been substantial recent interest and progress on method of moments and spectral approaches for modeling partially observable systems .the majority of this work has focused on inference and prediction , with little work tackling the control setting .method of moments approaches to latent variable estimation are of particular interest because for a number of models they obtain global optima and provide finite sample guarantees on the accuracy of the learned model parameters .inspired by the this work , we propose a pomdp rl algorithm that is , to our knowledge , the first pac pomdp rl algorithm for episodic domains ( with no restriction on the policy class ) .our algorithm is applicable to a restricted but important class of pomdp settings , which include but are not limited to information gathering pomdp rl domains such as preference elicitation , dialogue management slot - filling domains , and medical diagnosis before decision making .our work builds on method of moments inference techniques , but requires several non - trivial extensions to tackle the control setting . 
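As a side note, the flavor of a PAC-style guarantee is easy to see in its simplest form: estimating a single bounded quantity to accuracy epsilon with failure probability delta already requires a number of i.i.d. samples polynomial in 1/epsilon and log(1/delta), via Hoeffding's inequality. The helper below only illustrates that generic flavor; it is not the bound derived for the POMDP algorithm discussed here:

```python
import math

def hoeffding_sample_size(epsilon, delta, value_range=1.0):
    """Number of i.i.d. samples of a quantity bounded in an interval of width
    value_range so that the empirical mean is within epsilon of the true mean
    with probability at least 1 - delta (Hoeffding's inequality)."""
    return math.ceil((value_range ** 2) * math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# e.g. estimating a probability to within 0.05 with failure probability 1e-3
print(hoeffding_sample_size(epsilon=0.05, delta=1e-3))   # 1521
```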
in particular, there is a subtle issue of latent state alignment : if the models for each action are learned as independent hidden markov models ( hmms ) , then it is unclear how to solve the correspondence issue across latent states , which is essential for performing planning and selecting actions .our primary contribution is to provide a theoretical analysis of our proposed algorithm , and prove that it is possible to obtain near - optimal performance on all but a number of episodes that scales as a _ polynomial_ function of the pomdp parameters .similar to most fully observable pac rl algorithms , directly instantiating our bounds would yield an impractical number of samples for a real application .nevertheless , we believe understanding the sample complexity may help to guide the amount of data required for a task , and also similar to pac mdp rl work , may motivate new practical algorithms that build on these ideas .the inspiration for pursuing pac bounds for pomdps came about from the success of pac bounds for mdps . while algorithms have been developed for pomdps with finite sample bounds , unfortunately these bounds are not pac as they have an exponential dependence on the horizon length .alternatively , bayesian methods are very popular for solving pomdps . for mdps, there exist bayesian methods that have pac bounds ; however there have been no pac bounds for bayesian methods for pomdps .that said , bayesian methods are optimal in the bayesian sense of making the best decision given the posterior over all possible future observations , which does not translate to a frequentist finite sample bound .we build on method of moments ( mom ) work for estimating hmms in order to provide a finite sample bound for pomdps .mom is able to obtain a global optimum , and has finite sample bounds on the accuracy of their estimates , unlike the popular expectation - maximization ( em ) that is only guaranteed to find a local optima , and offers no finite sample guarantees .mle approaches for estimating hmms also unfortunately do not provide accuracy guarantees on the estimated hmm parameters .as pomdp planning methods typically require us to have estimates of the underlying pomdp parameters , it would be difficult to use such mle methods for computing a pomdp policy and providing a finite sample guarantee - length observation sequences has a bounded kl - divergence from the true probability of the sequence under the true parameters , which is expressed as a function of the number of underlying data samples used to estimate the hmm parameters .we think it may be possible to use such estimates in the control setting when modeling hidden state control systems as psrs , and employing a forward search approach to planning ; however , there remain a number of subtle issues to address to ensure such an approach is viable and we leave this as an interesting direction for future work . ] . 
aside from the mom method in ,another popular spectral method involves using predictive state representations ( psrs ) , to directly tackle the control setting ; however it only has asymptotic convergence guarantees and no finite sample analysis .there is also another method of moments approach to transfer across a set of bandits tasks , but the latent variable estimation problem is substantially simplified because the state of the system is unchanged by the selected actions .fortunately , due to the polynomial finite sample bounds from mom , we can achieve a pac ( polynomial ) sample complexity bound for pomdps .we have provided a pac rl algorithm for an important class of episodic pomdps , which includes many information gathering domains . to our knowledgethis is the first rl algorithm for partially observable settings that has a sample complexity that is a polynomial function of the pomdp parameters .there are many areas for future work .we are interested in reducing the set of currently required assumptions , thereby creating pac porl algorithms that are suitable to more generic settings .such a direction may also require exploring alternatives to method of moments approaches for performing latent variable estimation .we also hope that our theoretical results will lead to further insights on practical algorithms for partially observable rl .
many interesting real world domains involve reinforcement learning ( rl ) in partially observable environments . efficient learning in such domains is important , but existing sample complexity bounds for partially observable rl are at least exponential in the episode length . we give , to our knowledge , the first partially observable rl algorithm with a polynomial bound on the number of episodes on which the algorithm may not achieve near - optimal performance . our algorithm is suitable for an important class of episodic pomdps . our approach builds on recent advances in method of moments for latent variable model estimation .
in this paper , we consider the task of monocular depth estimation_i.e ._ , recovering scene depth from a single color image .knowledge of a scene s three - dimensional ( 3d ) geometry can be useful in reasoning about its composition , and therefore measurements from depth sensors are often used to augment image data for inference in many vision , robotics , and graphics tasks .however , the human visual system can clearly form at least an approximate estimate of depth in the absence of stereo and parallax cues_e.g ._ , from two - dimensional photographs and it is desirable to replicate this ability computationally .depth information inferred from monocular images can serve as a useful proxy when explicit depth measurements are unavailable , and be used to refine these measurements where they are noisy or ambiguous .the 3d co - ordinates of a surface imaged by a perspective camera are physically ambiguous along a ray passing through the camera center .however , a natural image often contains multiple cues that can indicate aspects of the scene s underlying geometry .for example , the projected scale of a familiar object of known size indicates how far it is ; foreshortening of regular textures provide information about surface orientation ; gradients due to shading indicate both orientation and curvature ; strong edges and corners can correspond to convex or concave depth boundaries ; and occluding contours or the relative position of key landmarks can be used to deduce the coarse geometry of an object or the whole scene . while a given image may be rich in such geometric cues , it is important to note that these cues are present in different image regions , and each indicates a different aspect of 3d structure .we propose a neural network - based approach to monocular depth estimation that explicitly leverages this intuition .prior neural methods have largely sought to directly regress to depth with some additionally making predictions about smoothness across adjacent regions , or predicting relative depth ordering between pairs of image points .in contrast , we train a neural network with a rich distributional output space .our network characterizes various aspects of the local geometric structure by predicting values of a number of derivatives of the depth map at various scales , orientations , and of different orders ( including the derivative , _i.e. _ , the depth itself)at every image location . however , as mentioned above , we expect different image regions to contain cues informative towards different aspects of surface depth .therefore , instead of over - committing to a single value , our network outputs parameterized distributions for each derivative , allowing it to effectively characterize the ambiguity in its predictions .the full output of our network is then this set of multiple distributions at each location , characterizing coefficients in effectively an overcomplete representation of the depth map . to recover the depth map itself, we employ an efficient globalization procedure to find the single consistent depth map that best agrees with this set of local distributions .we evaluate our approach on the nyuv2 depth data set , and find that it achieves state - of - the - art performance . 
beyond the benefits to the monocular depth estimation task itself , the success of our approach suggests that our network can serve as a useful way to incorporate monocular cues in more general depth estimation settings_e.g ._ , when sparse or noisy depth measurements are available .since the output of our network is distributional , it can be easily combined with partial depth cues from other sources within a common globalization framework .moreover , we expect our general approach of learning to predict distributions in an overcomplete respresentation followed by globalization to be useful broadly in tasks that involve recovering other kinds of scene value maps that have rich structure , such as optical or scene flow , surface reflectances , illumination environments , etc .interest in monocular depth estimation dates back to the early days of computer vision , with methods that reasoned about geometry from cues such as diffuse shading , or contours .however , the last decade has seen accelerated progress on this task , largely owing to the availability of cheap consumer depth sensors , and consequently , large amounts of depth data for training learning - based methods .most recent methods are based on training neural networks to map rgb images to geometry .et al . _ set up their network to regress directly to per - pixel depth values , although they provide deeper supervision to their network by requiring an intermediate layer to explicitly output a coarse depth map .other methods use conditional random fields ( crfs ) to smooth their neural estimates. moreover , the network in also learns to predict one aspect of depth structure , in the form of the crf s pairwise potentials .some methods are trained to exploit other individual aspects of geometric structure .et al . _ train a neural network to output surface normals instead of depth ( eigen _ et al . _ do so as well , for a network separately trained for this task ) . in a novel approach ,zoran _ et al . _ were able to train a network to predict the relative depth ordering between pairs of points in the image whether one surface is behind , in front of , or at the same depth as the other .however , their globalization scheme to combine these outputs was able to achieve limited accuracy at estimating actual depth , due to the limited information carried by ordinal pair - wise predictions .in contrast , our network learns to reason about a more diverse set of structural relationships , by predicting a large number of coefficients at each location .note that some prior methods also regress to coefficients in some basis instead of to depth values directly .however , their motivation for this is to _ reduce _ the complexity of the output space , and use basis sets that have much lower dimensionality than the depth map itself .our approach is different our predictions are distributions over coefficients in an _ overcomplete _ representation , motivated by the expectation that our network will be able to precisely characterize only a small subset of the total coefficients in our representation .our overall approach is similar to , and indeed motivated by , the recent work of chakrabarti _ et al . _ , who proposed estimating a scene map ( they considered disparity estimation from stereo images ) by first using local predictors to produce distributional outputs from many overlapping regions at multiple scales , followed by a globalization step to harmonize these outputs. 
however , in addition to the fact that we use a neural network to carry out local inference , our approach is different in that inference is not based on imposing a restrictive model ( such as planarity ) on our local outputs .instead , we produce independent local distributions for various derivatives of the depth map .consequently , our globalization method need not explicitly reason about which local predictions are `` outliers '' with respect to such a model .moreover , since our coefficients can be related to the global depth map through convolutions , we are able to use fourier - domain computations for efficient inference .we formulate our problem as that of estimating a scene map , which encodes point - wise scene depth , from a single rgb image , where indexes location on the image plane .we represent this scene map in terms of a set of coefficients at each location , corresponding to various spatial derivatives .specifically , these coefficients are related to the scene map through convolution with a bank of derivative filters , _i.e. _ , for our task , we define to be a set of 2d derivative - of - gaussian filters with standard deviations pixels , for scales .we use the zeroth order derivative ( _ i.e. _ , the gaussian itself ) , first order derivatives along eight orientations , as well as second order derivatives along each of the orientations , and orthogonal orientations ( see fig . [fig : teaser ] for examples ) .we also use the impulse filter which can be interpreted as the zeroth derivative at scale 0 , with the corresponding coefficients gives us a total of filters .we normalize the first and second order filters to be unit norm .the zeroth order filters coefficients typically have higher magnitudes , and in practice , we find it useful to normalize them as to obtain a more balanced representation .to estimate the scene map , we first use a convolutional neural network to output distributions for the coefficients , for every filter and location .we choose a parametric form for these distributions , with the network predicting the corresponding parameters for each coefficient .the network is trained to produce these distributions for each set of coefficients by using as input a local region centered around in the rgb image .we then form a single consistent estimate of by solving a global optimization problem that maximizes the likelihood of the different coefficients of under the distributions provided by our network .we now describe the different components of our approach ( which is summarized in fig .[ fig : teaser])the parametric form for our local coefficient distributions , the architecture of our neural network , and our globalization method .our neural network has to output a distribution , rather than a single estimate , for each coefficient .we choose gaussian mixtures as a convenient parametric form for these distributions : where is the number of mixture components ( 64 in our implementation ) , is a common variance for all components for derivative , and the individual component means . a distribution for a specific coefficient then characterized by our neural network by producing the mixture weights , , for each from the scene s rgb image . 
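A small filter bank makes the coefficient representation defined earlier in this section concrete. The sketch below builds zeroth-, first- and second-order derivative-of-Gaussian filters at a few scales and computes the coefficient maps by convolution; the scales, the number of orientations, the filter support and the normalization choices are stand-in assumptions, since the exact values are not spelled out in the text reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_derivative_filters(sigma, orientations=8):
    """Zeroth-, first- and second-order derivative-of-Gaussian filters at one
    scale; first- and second-order filters are normalized to unit norm."""
    size = int(6 * sigma) | 1                    # odd support (assumed)
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    filters = [g]                                # zeroth order (the Gaussian itself)
    for k in range(orientations):
        t = np.pi * k / orientations
        d = np.cos(t) * xx + np.sin(t) * yy      # coordinate along orientation t
        f1 = -d / sigma ** 2 * g                 # first derivative along t
        f2 = (d ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g   # second derivative along t
        filters += [f1 / np.linalg.norm(f1), f2 / np.linalg.norm(f2)]
    return filters

def depth_coefficients(y, sigmas=(1.0, 2.0, 4.0)):
    """Overcomplete coefficient maps w_i = (k_i * y) for a depth map y,
    including the impulse filter (the depth values themselves)."""
    bank = [np.array([[1.0]])]                   # impulse filter
    for s in sigmas:
        bank += gaussian_derivative_filters(s)
    return [convolve(y, k, mode='nearest') for k in bank]
```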
prior to training the network, we fix the means and variances based on a training set of ground truth depth maps .we use one - dimensional k - means clustering on sets of training coefficient values for each derivative , and set the means in above to the cluster centers .we set to the average in - cluster variance however , since these coefficients have heavy - tailed distributions , we compute this average only over clusters with more than a minimum number of assignments .depth derivatives at each location , using a color image as input .the distributions are parameterized as gaussian mixtures , and the network produces the mixture weights for each coefficient .our network includes a local path ( green ) with a cascade of convolution layers to extract features from a patch around each location ; and a scene path ( red ) with pre - trained vgg-19 layers to compute a single scene feature vector .we learn a linear map ( with x32 upsampling ) from this scene vector to per - location features .the local and scene features are concatenated and used to generate the final distributions ( blue ) . ]our method uses a neural network to predict the mixture weights of the parameterization in from an input color image .we train our network to output numbers at each pixel location , which we interpret as a set of -dimensional vectors corresponding to the weights , for each of the distributions of the coefficients .this training is done with respect to a loss between the predicted , and the best fit of the parametric form in to the ground truth derivative value .specifically , we define in terms of the true as : and define the training loss in terms of the kl - divergence between these vectors and the network predictions , weighting the loss for each derivative by its variance : where is the total number of locations .our network has a fairly high - dimensional output space corresponding to numbers , with degrees of freedom , at each location .its architecture , detailed in fig .[ fig : narch ] , uses a cascade of seven convolution layers ( each with relu activations ) to extract a -dimensional _ local _ feature vector from each local patch in the input image . to further add scene - level semantic context, we include a separate path that extracts a single -dimensional feature vector from the entire image using pre - trained layers ( upto _ pool5 _ ) from the vgg-19 network , followed downsampling with averaging by a factor of two , and a fully connected layer with a relu activation that is trained with dropout .this global vector is used to derive a -dimensional vector for each location a learned layer that generates a feature map at a coarser resolution , that is then bi - linearly upsampled by a factor of to yield an image - sized map .the concatenated local and scene - level features are passed through two more hidden layers ( with relu activations ) .the final layer produces the -vector of mixture weights , applying a separate softmax to each of the -dimensional vector .all layers in the network are learned end - to - end , with the vgg-19 layers finetuned with a reduced learning rate factor of compared to the rest of the network .the local path of the network is applied in a `` fully convolutional '' way during training and inference , allowing efficient reuse of computations between overlapping patches . 
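The training loss described earlier in this section reduces to a per-coefficient KL divergence between the target weight vector and the network's softmax output, with one scalar weight per derivative filter. A minimal numpy version; the construction of the target weights from the ground-truth coefficient is omitted because its exact definition is not reproduced in the text above:

```python
import numpy as np

def mixture_weight_kl_loss(q, p_hat, filter_weights, eps=1e-12):
    """KL divergence between target mixture weights q and predicted weights
    p_hat, averaged over locations, with a scalar weight per derivative filter.

    q, p_hat       : arrays of shape (n_filters, n_locations, n_components)
    filter_weights : array of shape (n_filters,), e.g. the per-derivative variances
    """
    q = np.clip(q, eps, None)
    p_hat = np.clip(p_hat, eps, None)
    kl = np.sum(q * (np.log(q) - np.log(p_hat)), axis=-1)   # (n_filters, n_locations)
    return float(np.sum(filter_weights[:, None] * kl) / q.shape[1])
```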
applying our neural network to a given input imageproduces a dense set of distributions for all derivative coefficients at all locations .we combine these to form a single coherent estimate by finding the scene map whose coefficients have high likelihoods under the corresponding distributions .we do this by optimizing the following objective : where , like in , the log - likelihoods for different derivatives are weighted by their variance .the objective in is a summation over a large ( times image - size ) number of non - convex terms , each of which depends on scene values at multiple locations in a local neighborhood based on the support of filter . despite the apparent complexity of this objective, we find that approximate inference using an alternating minimization algorithm , like in , works well in practice .specifically , we create explicit auxiliary variables for the coefficients , and solve the following modified optimization problem : + \frac{\beta}{2}\sum_{i , n } \left(w_i(n)-(k_i*y)(n ) \right)^2 + \frac{1}{2}\mathcal{r}(y).\ ] ] note that the second term above forces coefficients of to be equal to the corresponding auxiliary variables , as .we iteratively compute , by alternating between minimizing the objective with respect to and to , keeping the other fixed , while increasing the value of across iterations .note that there is also a third regularization term in , which we define as using laplacian filters , at four orientations , for .in practice , this term only affects the computation of in the initial iterations when the value of is small , and in later iterations is dominated by the values of .however , we find that adding this regularization allows us to increase the value of faster , and therefore converge in fewer iterations . each step of our alternating minimization can be carried out efficiently . when fixed , the objective in can be minimized with respect to each coefficient independently as : where is the corresponding derivative of the current estimate of . since is a mixture of gaussians , the objective incan also be interpreted as the ( scaled ) negative log - likelihood of a gaussian - mixture , with `` posterior '' mixture means and weights : while there is no closed form solution to , we find that a reasonable approximation is to simply set to the posterior mean value for which weight is the highest .the second step at each iteration involves minimizing with respect to given the current estimates of .this is a simple least - squares minimization given by note that since all terms above are related to by convolutions with different filters , we can carry out this minimization very efficiently in the fourier domain .we initialize our iterations by setting simply to the component mean for which our predicted weight is highest .then , we apply the and minimization steps alternatingly , while increasing from to , by a factor of at each iteration .we train and evaluate our method on the nyu v2 depth dataset . 
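The least-squares update of the depth map is the only global step of the procedure above, and it is where the Fourier-domain trick pays off: every data term couples the depth map to an auxiliary coefficient map through a convolution, so the normal equations are diagonal in frequency. The sketch below assumes circular boundary conditions and kernels stored with their origin at the (0, 0) sample; it is an illustration of the standard closed form, not the authors' implementation:

```python
import numpy as np

def solve_y_step(w_maps, filters, beta, reg_filters=()):
    """Minimize beta/2 * sum_i ||k_i * y - w_i||^2 + 1/2 * sum_j ||l_j * y||^2
    over y, assuming circular convolution so the solution is closed-form in
    the Fourier domain."""
    shape = w_maps[0].shape
    num = np.zeros(shape, dtype=complex)
    den = np.zeros(shape, dtype=float)
    for w, k in zip(w_maps, filters):
        K = np.fft.fft2(k, s=shape)              # kernel padded to image size
        num += beta * np.conj(K) * np.fft.fft2(w)
        den += beta * np.abs(K) ** 2
    for l in reg_filters:                        # e.g. Laplacian regularizers
        L = np.fft.fft2(l, s=shape)
        den += np.abs(L) ** 2
    return np.real(np.fft.ifft2(num / np.maximum(den, 1e-12)))
```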
to construct our training and validation sets ,we adopt the standard practice of using the raw videos corresponding to the training images from the official train / test split .we randomly select 10% of these videos for validation , and use the rest for training our network .our training set is formed by sub - sampling video frames uniformly , and consists of roughly 56,000 color image - depth map pairs .monocular depth estimation algorithms are evaluated on their accuracy in the crop of the depth map that contains a valid depth projection ( including filled - in areas within this crop ) .we use the same crop of the color image as input to our algorithm , and train our network accordingly .we let the scene map in our formulation correspond to the reciprocal of metric depth , _i.e. _ , . while other methods use different compressive transform ( _ e.g. _ , regress to ) , our choice is motivated by the fact that points on the image plane are related to their world co - ordinates by a perspective transform .this implies , for example , that in planar regions the first derivatives of will depend only on surface orientation , and that second derivatives will be zero .we use data augmentation during training , applying random rotations of and horizontal flips simultaneously to images and depth maps , and random contrast changes to images .we use a fully convolutional version of our architecture during training with a stride of pixels , yielding nearly 4000 training patches per image .we train the network using sgd for a total of 14 epochs , using a batch size of only one image and a momentum value of .we begin with a learning rate of , and reduce it after the , , , , and epochs , each time by a factor of two .this schedule was set by tracking the post - globalization depth accuracy on a validation set .first , we analyze the informativeness of individual distributional outputs from our neural network .figure [ fig : unc ] visualizes the accuracy and confidence of the local per - coefficient distributions produced by our network on a typical image . for various derivative filters ,we display maps of the absolute error between the true coefficient values and the mean of the corresponding predicted distributions . alongside these errors, we also visualize the network s `` confidence '' in terms of a map of the standard deviations of .we see that the network makes high confidence predictions for different derivatives in different regions , and that the number of such high confidence predictions is least for zeroth order derivatives .moreover , we find that all regions with high predicted confidence ( _ i.e. _ , low standard deviation ) also have low errors . figure [ fig : unc ] also displays the corresponding global depth estimates , along with their accuracy relative to the ground truth .we find that despite having large low - confidence regions for individual coefficients , our final depth map is still quite accurate .this suggests that the information from different coefficients predicted distributions is complementary .[ tab : ablat ] to quantitatively characterize the contribution of the various components of our overcomplete representation , we conduct an ablation study on 100 validation images . 
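The "confidence" maps discussed in this section are just the standard deviations of the predicted mixtures, which have a simple closed form when all components of a given derivative share a common variance. A small helper, with shapes chosen for illustration:

```python
import numpy as np

def mixture_mean_and_std(weights, means, sigma):
    """Mean and standard deviation of a Gaussian mixture whose components
    share a common standard deviation sigma.

    weights : (..., n_components) mixture weights summing to 1 on the last axis
    means   : (n_components,) fixed component means
    sigma   : scalar common component standard deviation
    """
    mean = np.sum(weights * means, axis=-1)
    second_moment = np.sum(weights * (means ** 2), axis=-1) + sigma ** 2
    return mean, np.sqrt(np.maximum(second_moment - mean ** 2, 0.0))
```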
with the same trained network ,we include different subsets of filter coefficients for global estimation leaving out either specific derivative orders , or scales and report their accuracy in table [ tab : ablat ] .we use the standard metrics from for accuracy between estimated and true depth values and across all pixels in all images : root mean square error ( rmse ) of both and , mean relative error ( ) and relative square error ( ) , as well as percentages of pixels with error below different thresholds .we find that removing each of these subsets degrades the performance of the global estimation method with second order derivatives contributing least to final estimation accuracy .interestingly , combining multiple scales but with only zeroth order derivatives performs worse than using just the point - wise depth distributions .finally , we evaluate the performance of our method on the nyu v2 test set .table [ tab : rtest ] reports the quantitative performance of our method , along with other state - of - the - art approaches over the entire test set , and we find that the proposed method yields superior performance on most metrics .figure [ fig : qual ] shows example predictions from our approach and that of .we see that our approach is often able to better reproduce local geometric structure in its predictions ( desk & chair in column 1 , bookshelf in column 4 ) , although it occasionally mis - estimates the relative position of some objects ( _ e.g. _ , globe in column 5 ) . at the same time , it is also usually able to correctly estimate the depth of large and texture - less planar regions ( but , see column 6 for an example failure case ) .our overall inference method ( network predictions and globalization ) takes 24 seconds per - image when using an nvidia titan x gpu . the source code for implementation , along with a pre - trained network model , are available at http://www.ttic.edu/chakrabarti/mdepth .[ tab : rtest ]in this paper , we described an alternative approach to reasoning about scene geometry from a single image . instead of formulating the task as a regression to point - wise depth values , we trained a neural network to probabilistically characterize local coefficients of the scene depth map in an overcomplete representation .we showed that these local predictions could then be reconciled to form an estimate of the scene depth map using an efficient globalization procedure .we demonstrated the utility of our approach by evaluating it on the nyu v2 depth benchmark .its performance on the monocular depth estimation task suggests that our network s local predictions effectively summarize the depth cues present in a single image . in future work, we will explore how these predictions can be used in other settings_e.g ._ , to aid stereo reconstruction , or improve the quality of measurements from active and passive depth sensors .we are also interested in exploring whether our approach of training a network to make overcomplete probabilistic local predictions can be useful in other applications , such as motion estimation or intrinsic image decomposition .ac acknowledges support for this work from the national science foundation under award no .iis-1618021 , and from a gift by adobe systems . ac andgs thank nvidia corporation for donations of titan x gpus used in this research .
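for reference, the standard monocular-depth error measures reported in the tables above can be computed as follows; this is the commonly used formulation and may differ in minor details from the evaluation scripts of the cited works.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """rmse of depth and of log-depth, mean relative and relative-square error,
    and the fraction of pixels whose ratio max(pred/gt, gt/pred) is below
    1.25, 1.25**2 and 1.25**3, evaluated over valid (positive) ground truth."""
    mask = gt > eps
    p, g = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((p - g) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(p + eps) - np.log(g)) ** 2))
    abs_rel = np.mean(np.abs(p - g) / g)
    sq_rel = np.mean((p - g) ** 2 / g)
    ratio = np.maximum(p / g, g / p)
    d1, d2, d3 = (np.mean(ratio < 1.25 ** k) for k in (1, 2, 3))
    return {'rmse': rmse, 'rmse_log': rmse_log, 'abs_rel': abs_rel,
            'sq_rel': sq_rel, 'delta_1': d1, 'delta_2': d2, 'delta_3': d3}
```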
a single color image can contain many cues informative about different aspects of local geometric structure . we approach the problem of monocular depth estimation by using a neural network to produce a mid - level representation that summarizes these cues . this network is trained to characterize local scene geometry by predicting , at every image location , depth derivatives of different orders , orientations and scales . however , instead of a single estimate for each derivative , the network outputs probability distributions that allow it to express confidence about some coefficients , and ambiguity about others . scene depth is then estimated by harmonizing this overcomplete set of network predictions , using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions . we demonstrate the efficacy of this approach through evaluation on the nyu v2 depth data set .
is it possible to observe the whole universe ? is the object studied in what is claimed to be observational cosmology really all of space or just a tiny bit of space ? in order to answer these questions , it is necessary to measure the mathematical properties of _ local geometry _ ( such as the curvature ) and _ global geometry _ ( such as the topology ) , which together describe the ` shape ' and size of space , under the assumption that the curvature is nearly constant everywhere in space . since a century ago , schwarzschild ( 1900 ) , de sitter ( 1917 ) , friedmann ( 1924 ) and lematre ( 1958 ) have realised that the spatial part of our universe could correspond to a space ( a 3manifold ) which may have either a non zero curvature and/or a non trivial topology . the measurement of these properties ( one local and the other global ) from surveys obtained at telescopes of different sorts , such as the gmrt , the aat , the vlt , xmm , map and planck surveyor , should enable us to find out if our cosmological observations are global in the sense of measuring the whole of space , or whether they simply measure a tiny fraction of the universe : our observable sphere .tests for measuring curvature or topology are dependent to differing extents on assumptions of the cosmological model adopted .most tests are evaluated in terms of the most popular model , i.e. the ` hot big bang ' model , or in other words , the perturbed friedmann lematre model , but as long as the cosmological expansion interpretation of redshifts is retained , many of the tests involving ` standard candles ' or ` standard rulers ' should also be valid for the quasi steady state cosmology model ( ) . in order to aid the non specialist , some reminders on curvature and topology are provided in [ s - definegeom ] .the application of these geometrical concepts to the standard hot big bang model , to extrapolations of the standard model and to the quasi steady state cosmology model are presented in [ s - models ] .what do the observations tell us ?serious observational work with what may be hoped to be sufficiently deep surveys to determine the global geometry of the universe have only just started in the last decade , and the race is on to obtain the first significant results . a brief glance at the various strategies using different astrophysical objects or radiation sources and tentative results is described in [ s - obsvns ] .comoving coordinates are used to describe space throughout this review .[ s - definegeom ]in the standard friedmann lematre cosmology , the model of space time is locally based on the hilbert einstein equations , where local geometry ( curvature ) is equated to local physical content ( density ) of the universe .such a space time has spatial sections ( i.e. hypersurfaces at constant cosmological time ) which are of constant curvature . in order to intuitively understand curvature , it is useful to use a two dimensional analogy .an example of a flat , or curvature zero , two dimensional space is the euclidean plane ( ) .two examples of non zero ( but constant ) curvature two dimensional spaces ( or surfaces ) are the sphere ( ) and the hyperboloid ( ) , which are of positive and negative curvature respectively . these three spaces are _ simply connected _, i.e. 
any closed loop on their surfaces can be continuously contracted to a point .this would not be the case if there was a ` handle ' added to one of these surfaces , because in that case any loop circling the hole of the handle ( for example ) would not be contractible to a point .a space for which there exist non contractible loops is called _multiply connected_. an example of a flat , multiply connected space is the _ flat torus _ ( ) .there are three different ways to think of this space , each useful in different ways , explained further below and in fig .[ f - topo4 ] : ( ) as a sort of ` doughnut ' shape by inserting it in a three dimensional euclidean space , but retaining its flat metric ( rule for deducing distances between two close points ) , as a rectangle of which one physically identifies opposite sides , or as an apparent space , i.e. as a tiling of the full euclidean plane by multiple apparent copies of the single physical space .the polygon ( or in three dimensions , the polyhedron ) of ( ii ) is termed the _ fundamental polyhedron _ ( or _ dirichlet domain _ ) . the apparent space ( iii )is termed the _ universal covering space _ , or _covering space _ for short . representation ( i )is not generally useful for analysis of observations .one can shift between ( i ) and ( ii ) by cutting ( i ) the ` doughnut ' shape twice and unrolling to obtain ( ii ) , or by rolling and sticking together opposite sides of ( ii ) the rectangle in order to obtain ( i ) .these operations help us to see why and are _ locally _ identical , i.e. both have curvature zero , since the former can be constructed from the latter by cutting a piece of the latter and pasting , but that _ globally _ they are different , since has a finite surface area ( is ` compact ' ) , without having any edges , but is infinite .this can now be put in a cosmological context ( imagining a two dimensional universe ) , by thinking of a photon which makes several crossings of the torus , i.e. of a universe . in the three ways of thinking of ,this can be thought of as ( i ) looping the torus several times , ( ii ) crossing the rectangle , say , from left to right several times or ( iii ) in the covering space ( apparent space ) , crossing many copies of the rectangle before arriving at the observer .of course , this is only possible if the time needed to cross the rectangle is less than the age of the universe . in three dimensions ,the three simply connected constant curvature spaces corresponding to those listed above are the 3d euclidean space , , the hypersphere ( ) and the 3hyperboloid ( ) , and the equivalent of the torus is the hypertorus ( ) , which can be obtained by identifying opposite faces of a cube .as for the two - dimensional case , and are locally identical but globally different .there exist many other multiply connected 3-d spaces of constant curvature .these can each be represented by a fundamental polyhedron ( like the rectangle for the case of ) embedded in the simply connected space of the same curvature ( i.e. in , or ) , of which the faces of the polyhedron are identified in pairs in some way .the simply connected space is then the covering space , and can be thought of in format ( iii ) as above , as an apparent space which is tiled by copies of the fundamental polyhedron , just as the mosaic floor of a temple may be ( in certain cases ) tiled by a repeated pattern of a single tile .if the physical universe corresponds to a multiply connected space which is small enough , i.e. 
which is finite and for which photons have the time to cross the universe several times , then the ( apparent ) observable universe would be a part of the covering space and would contain several copies of the fundamental polyhedron . in other words , in apparent space , there could be multiple apparent copies of the single physical universe .the possibility of seeing several times across the universe provides the basic principle of nearly all the methods capable of constraining or detecting the topology of the universe : a single object ( or a 3-d region of black - body plasma ) should be seen in different sky directions and at different distances ( hence different emission epochs ) .these multiple images , such as the three images which , according to the observationally inspired hypothesis of , could be three images of a single cluster of galaxies ( coma ) seen at three different redshifts , are called _topological images_. for a thorough introduction to the subject ( but prior to the recent surge in observational projects ) , see .for more recent developments , see and , and workshop proceedings in and .[ s - models ]so , in order to begin to know the ` shape ' of the universe , both the curvature _ and _ the topology need to be known . however , virtually all of the observational estimates of cosmological parameters have been estimates of local cosmological parameters .the curvature parameters ( present value of the matter density parameter expressed in units of the density which would imply zero curvature if the cosmological constant is zero ) and ( present value of the dimensionless cosmological constant ) and ( the hubble constant , which sets a time scale ) are each defined locally at a point in space .estimates of the values of these parameters are now honing in rapidly , and a convergence from multiple observational methods for the three parameters is likely to signal a new phase in observational cosmology .however , as explained above , this will leave unanswered the basic question : how big is the universe ? good estimates of the curvature parameters and of will help search for cosmic topology , and will constrain the families of spaces ( 3-manifolds ) possible , but will be insufficient to answer the question .observations such as the cosmic microwave background ( cmb ) , the abundance of light elements and numerous observational statistics of collapsed objects as a function of redshift lend support to the standard flrw or hot big bang model as a good approximation to the real universe . according to this model, the age of the universe is finite .this condemns us to live in an observable universe which is finite , in which we are situated right at the centre , from the point of view of the universal covering space .the observable universe can be defined as the interior of a sphere ( in the covering space ) of which the radius is the distance travelled by a photon that takes nearly the age of the universe to arrive in our telescopes . the value of this radius , the horizon distance , is to within an order of magnitude , depending on which distance definition one uses and on the curvature parameters .this explains the common misconception according to which the value of sets the size of the universe .the scale is not the size of the universe , it is just the order of magnitude size of the _observable , _ non - copernican universe. 
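to make the covering-space picture of the previous section concrete, the following sketch treats the hypertorus obtained by identifying opposite faces of a cube of side `side` : two points are separated in apparent (covering) space by whichever of their copies is nearest, and a single physical object has a whole lattice of apparent images. the function names and the cubic fundamental domain are our own illustrative choices.

```python
import numpy as np

def t3_separation(x, y, side):
    """comoving separation of two points in a cubic 3-torus of side `side`,
    i.e. the distance to the nearest apparent copy in the covering space."""
    d = np.abs(np.asarray(x, float) - np.asarray(y, float)) % side
    d = np.minimum(d, side - d)          # opposite faces of the cube are identified
    return float(np.sqrt((d ** 2).sum()))

def apparent_images(x, side, n=1):
    """apparent positions of one physical object, out to n copies of the
    fundamental cube in every direction of the covering space."""
    shifts = np.arange(-n, n + 1) * side
    grid = np.stack(np.meshgrid(shifts, shifts, shifts), axis=-1).reshape(-1, 3)
    return np.asarray(x, float) + grid
```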
in comoving coordinates ( in which galaxies are , on average , stationary , where the expansion of the universe is represented by a multiplicative factor ) , and using the ` proper distance ' ( weinberg , 1972 , eq.14.2.21 ) , the horizon radius is in the range for a range of curvature parameters including those which at least some cosmologists think are consistent with observationkm s mpc ) . ] .note that the observable universe is very non - copernican : we are at the centre of a spherical universe .of course , the underlying model implies that the complete covering space is ( probably ) much larger : finite for positive curvature , infinite for non - positive curvature , and in neither case does the covering space have a centre .note also that the 2-sphere does not have a centre which is part of .the centre of a 2-sphere embedded in exists in but is not part of the 2-sphere . can be very easily defined as a mathematical object independently of .the embedding in is certainly a useful mathematical tool , and an aid to intuition , but is not at all necessary .so , if corresponds to a physical object , this does not imply that has physical meaning , nor that the ` -centre ' of has any physical meaning .the exactly corresponding arguments apply to relative to .if robertson and walker s implicit hypothesis that the topology of the universe is trivial were correct ( the hypothesis according to which , for example , the 3-torus is a priori excluded ) , then , since the observations seem to indicate that the universe is either negatively curved ( hyperbolic ) or flat , not only would the covering space be infinite , but the universe itself would be ! this would imply that the fraction of the universe which is observable would be zero , since the observable universe is finite. it would also imply ( for a constant average density , the standard assumption ) that the mass of the universe is infinite .this may or may not be correct .atoms have finite masses , as do photons , trees , people , planets and galaxies .if the universe is a physical object , then extrapolation from better known physical objects would suggest that it should also have a finite mass .both theoretical and observational methods can be used to examine the hypothesis of trivial topology .many theoretical cosmologists and physicists work on extensions to the standard model , to epochs preceding that during which the cosmic microwave background black - body radiation was emitted ( e.g. see the early universe , topological defect and superstring cosmology papers in ) .inflation ( an accelerated expansion of the universe at an early epoch , e.g. when the age of the universe was ) and other theoretical ideas regarding the ` early ' universe do nt invalidate the standard big bang model as a good approximation for post - recombination observations ( i.e. probably all observations so far ) , even if some now include ` no big bang ' boundary conditions at the quantum epoch . on the contrary, they extrapolate from the standard model . 
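as an illustration of how the horizon radius quoted above depends on the curvature parameters, the comoving proper distance can be obtained by numerical integration of the friedmann equation. radiation is neglected here, so pushing the upper redshift to a large value only approximates the true horizon, and the parameter values are merely illustrative, not the ones assumed in the text.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                      # speed of light in km/s

def proper_distance(z, h0=70.0, omega_m=0.3, omega_l=0.7):
    """comoving proper distance (mpc) to redshift z in an flrw model with
    matter, curvature and cosmological-constant terms (radiation neglected)."""
    omega_k = 1.0 - omega_m - omega_l
    def inv_e(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3
                             + omega_k * (1 + zp) ** 2 + omega_l)
    integral, _ = quad(inv_e, 0.0, z)
    return C_KM_S / h0 * integral

# a rough horizon estimate: integrate out to a very high redshift
print(proper_distance(3000.0), 'mpc')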
among these various scenarios ,some treat the universe as having infinite volume , some as finite , and many do not state either way .if we consider one of the early universe models in which the volume is infinite or , else , say , the universe is globally a hypersphere with radius times that of the horizon , and if we assume that the topology of the universe is trivial , then a more or less serious question of credibility arises : is the extrapolation from the observable universe to the entire universe times or infinitely many times bigger justified ?is an extrapolation from an ` infinitesimal ' ( i.e. zero ) fraction to the whole justified ? whether these questions lie in the domain of physics or of the philosophy of science will not be dealt with further here , except to remark that for the sake of precision , it would be best to make it clear in literature for the non - specialist when one is studying the ` observable universe ' or the ` local universe ' , and not leave the term ` universe ' without an appropriate qualifying adjective .it is clear that if the topology is assumed to be trivial , then the measured values of local parameters such as and would be ` local ' in more than one sense of the word : local as a physical quantity , and local since the values are averaged over an ` infinitesimal ' fraction or , say , a ten billionth of the total volume of the universe .how does the assumption of trivial topology relate to the quasi steady state cosmology model , which is a model of many ` mini ' big bangs averaging out to a constant density ( in space and time ) universe ?trivial topology seems to be an implicit ( though probably not necessary ) assumption of the model .the zero curvature version provides a universe model which is globally infinite in both space and time if the topology is trivial , without any preferred epochs , satisfying the ` perfect cosmological principle ' .if observations significantly showed that the topology of the universe were non - trivial , i.e. if photons were shown to have ` wrapped ' many times and in different directions around the universe in less than its present age , then this simplest version of the quasi steady state model would have significant problems : the universe would be finite in at least one ( spatial ) direction . if a quasi steady state model ( of any curvature ) were multiply connected , then a characteristic length scale would exist . if this scale were observable at the present , despite the overall exponential expansion of the model since an infinite past ( an overall hyperbolic sine or hyperbolic cosine contraction and expansion in the curved models ) , then this would imply that we happen to live at a special epoch in the infinite history of the universe , which would contradict the original motivations for these models .one possible solution might be for topological evolution to occur at the minima of each short time scale expansion cycle , so that at least one closed geodesic is visible during each cycle .if the whole fundamental polyhedron is found to be observable , then a model in which the universe snaps off into several independent fundamental polyhedra ( universes ) at the minimum of each cycle might be sufficient to match the observations .however , topological change would presumably require quantum effects , i.e. 
would require the universe to be dense enough to go through a planck epoch ( where quantum mechanics and general relativity both need to be applied ) at each cycle minimum .since one of the motivations for the quasi steady state model was the avoidance of the conventional explanation of the cosmic microwave background as photons coming from a horizon scale high density state , the introduction into the model of a global , much higher density state would again be problematic .[ s - obsvns ]the hypothesis that the universe is simply connected is just a hypothesis .if this hypothesis is dropped , then the whole universe may well be _smaller _ than the ` observable universe ' !the latter would then form a part of the universal covering space , and would constitute the ` apparent universe ' containing many copies of the entire physical universe .multiple connectedness does not necessarily imply that multiple copies would be visible ( one or all dimensions might be bigger than the horizon diameter ) , but certainly implies this as a physical possibility . as mentioned above , awareness that measurement of topology would be required in order to characterise the geometry of space has been around for at least a century , and has been discussed by several of the symbols of modern cosmology .although measuring curvature , essentially via estimates of the density parameter , , and the cosmological constant , , has sustained much more attention and observational analyses than measurement of topology , some discussion of the latter both theoretically and in relation to the status of continually growing observational catalogues of extragalactic objects was made in the 1970 s and 1980 s , in particular by ellis , sokoloff and schvartsman , zeldovich , fang and sato , gott and fagundes [ see for a detailed reference list , also , e.g. ] . since the release of data from the cobe satellite , several papers were quickly published to make statements about spatial topology with respect to the cobe data .the publication of a major review paper further prompted interest in the subject , so that there are now several dozen researchers in europe , north america , brazil , china , japan and india actively working on various observational methods for trying to measure the topology of the universe .see ( 1999 , section 5 ) for a detailed discussion of the recently developed observational methods , apart from new work which is cited below . 
for earlier work , which showed by various methods that the size of the physical universe should be at least a few 100 ,see .most of the methods depend either directly or indirectly on multiple topological imaging of either collapsed astrophysical objects or of photon - emitting regions of plasma .other methods are the statistical incompatibility between observable topological defects and observable cosmic topology , and the suggestion of which postulates a physical and geometrical link between the feature in large scale structure and global topology .the direct methods are those for which photons are expected to travel across the universe in different directions from a single object or plasma region and arrive at a single observer .they may leave the object or plasma region at different cosmological times .the indirect methods suppose that regions which are nearby to one another have correlated physical properties , so that although an object or plasma region is not strictly speaking multiply imaged , close by regions are approximately multiply imaged .this approach is subject to the validity of the assumptions regarding correlations over ` close ' distances .researchers in france and brazil ( ; lehoucq , luminet & lachize rey 1996 ; roukema 1996 ; roukema & blanloeil 1998 ; roukema & bajtlik 1999 ; roukema & luminet 1999 ; ) work principally on direct three - dimensional methods , i.e. study various statistical techniques which analyse the spatial positions of all known astrophysical objects at large distances inside of the observational sphere .just as for traditional observational estimates of the local cosmological parameters and the idealised methods have to be adapted in practice to cope with the fact that we observe objects in the past and with astronomical selection effects .different classes of objects have different constraints on their evolution with cosmological time , are seen to different distances and are observed in a combination of wide shallow surveys and narrow deep surveys .the result is that the search for cosmic topology or claimed proofs of simple connectedness below a given length scale are just as difficult as the attempts to measure the curvature parameters and the hubble constant , efforts which have taken more than half a century in order to start coming close to convergent results. one way to find a very weak signal in a population such as quasars which are likely to evolve strongly over cosmological time scales can be seen schematically in fig .[ f - topo4 ] .this is the search for rare local isometries .although the properties of individual quasars may have changed completely between the high redshift and low redshift images , the relative three - dimensional spatial positions of a configuration of quasars should remain approximately constant in comoving coordinates .even if such isometries are rare , use of a large enough catalogue may be enough to detect enough isometries to generate testable 3-manifold candidates , whose predictions of multiple topological imaging can be tested by other means . in a population composed of good standard candles and negligible selection effects , the ` cosmic crystallography ' method of ( or its variations ) could be applied . by plotting a histogram of pair separations ( in the covering space ) of the objects ( i.e. 
an unnormalised two - point correlation function ), sharp ` spikes ' should occur at distances corresponding to the sizes of the vectors representing the isometries between the copies of the fundamental polyhedron ( _ generators)_. the original version of cosmic crystallography is valid only for flat spaces .however , extrapolated the search for local isometries [ discussed and calculated for quintuplets in ] to the case of isometries of pairs , and defined a single statistic based on the number of ` isometric ' pairs .this ` collecting - correlated - pair ' statistic should have a high value in the presence of multiple topological imaging in a population composed of good standard candles , whether space is hyperbolic , flat or spherical , but only for the correct values of the curvature parameters and .most of the three - dimensional methods avoid having to make an a priori hypothesis regarding the precise 3-manifold ( space ) , its size and orientation .since there are infinitely many 3-manifolds possible , this is a considerable advantage for hypothesis testing .however , if carried out to sufficient precision and generality to guarantee a detection , in the case that an observational data set is homogeneous and deep enough and the topology of the universe really is non - trivial , the methods generally require a lot of computing power , generally in cpu rather than in disk space . various ideas to improve the speed of the calculations are suggested by some of these authors .alternatively , if some observations can be used to suggest candidate 3-manifolds ( spaces ) , then the methods can be used on different , independent observational data sets to attempt to refute the suggested candidates relatively rapidly .both direct and indirect approaches have been suggested for using cmb measurements , those of cobe and future measurements by map and planck surveyor .the direct method results from an elegant geometrical result discovered by a north american group ( cornish , spergel & starkman 1996 , 1998 ) .this is known as the identified circles principle .the microwave backgrounds seen by two observers in the _ universal covering space _ separated by a comoving distance lying in the range consist of two spheres in the covering space , which intersect in a circle in the covering space .if the two observers are multiple topological images of a single observer , then what is a single circle in the covering space as seen by two observers is equivalent to two circles seen in different directions by a single copy of the observer ( see fig .[ f - howcir2 ] ) .this implies that for a single observer , and for locally isotropic radition on the surface of last scattering , the temperature fluctuations seen around a circle centred at one position in a cmb sky map should be identical to those seen around another circle ( in a certain direction ) , apart from measurement uncertainty .the positions and radii of the circles are not random , they are determined by the shape , size and orientation of the fundamental polyhedron , or equivalently , by the set of generators . 
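a minimal sketch of the pair-separation histogram underlying cosmic crystallography is given below; positions are assumed to be in comoving covering-space coordinates, and the quadratic memory cost of the all-pairs computation is acceptable only for catalogues of modest size.

```python
import numpy as np

def pair_separation_histogram(positions, bins=200, r_max=None):
    """unnormalised histogram of all pair separations; for standard candles in a
    multiply connected space, sharp spikes are expected at the lengths of the
    generator (holonomy) vectors of the fundamental polyhedron."""
    pos = np.asarray(positions, float)
    diff = pos[:, None, :] - pos[None, :, :]
    iu = np.triu_indices(len(pos), k=1)
    sep = np.sqrt((diff ** 2).sum(-1))[iu]
    if r_max is None:
        r_max = float(sep.max())
    counts, edges = np.histogram(sep, bins=bins, range=(0.0, r_max))
    return counts, edges
```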
intend to use the map satellite data to apply this principle in a generic search for the topology of the universe .it has been shown that the principle can be applied to four - year data from the cobe satellite despite its poor resolution and poor signal - to - noise ratio , either to refute a given topology candidate motivated by three - dimensional data or to show that a flat ` 2-torus ' ( ) model a tenth of the horizon diameter can easily be found which is consistent with the cobe data ( see fig .[ f - circles2 ] here , or for details ) . for the identified circles principle to be applied in a general way ,i.e. to search for the correct 3-manifold rather than to test a specific hypothesis , the computing power required would again be very high , as for the three - dimensional methods .improvement in the speed of calculation will be required for the application to map and planck surveyor data .another direct method , which has been tested to some degree via simulations , and which should in principle be derivable from the identified circles principle , is that of searching for patterns of ` spots ' in the cmb .many researchers have tried to use the indirect approach , i.e. via introducing assumptions on the density perturbation spectrum ( or equivalently the correlation function of density perturbations ) and making statements regarding ensembles of possible universes , as opposed to direct observational refutation .most of these authors made simulations of the perturbations in order to obtain statements of statistical significance .various subsets of flat 3-manifolds were tested and it was suggested that flat 3-manifolds up to 40% of the horizon diameter were inconsistent with the cobe data . as mentioned above , direct tests applying the identified circles principle show that a more conservative constraint would have to be around 10% of the horizon diameter .a canadian based group ( ) has tested individual hyperbolic candidates applying the perturbation statistics approach to cobe data .since fourier power spectra are , strictly speaking , incorrect in hyperbolic space , and since eigenmodes are difficult to calculate in compact hyperbolic spaces , these authors used correlation functions instead .although these authors use some simulations in their figures , the perturbation statistics approach is applied by them without relying on simulations .this bypasses potential errors and numerical limitations which could be introduced by the transition from perturbation statistics to simulations to final statistical statements ( though it does not avoid the original assumptions ) .inoue ( 1999 ) , aurich ( 1999 ) and calculated eigenmodes in compact hyperbolic spaces in order to apply the perturbation simulational approach .they showed that the ( spherical harmonic ) statistic for cobe four - year data was consistent with their models , taking into account the fact that for low density universe models , i.e. for , the gravitational redshifts between the observer and the surface of last scattering , known as the integrated sachs - wolfe effect ( or the rees - sciama effect ) makes refutation of 3-manifold candidates using cmb data more difficult than if the universe were flat and the cosmological constant were zero .applications of the perturbation statistics approach ( with or without simulations ) to testing multiply connected models have the property that they depend on the assumptions regarding density fluctuation statistics . 
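returning to the identified-circles test described above, the following is a simplified version of the circle-comparison statistic: temperature fluctuations are read off two candidate circles at the same number of points and correlated over all relative phase offsets. real searches also scan over circle centres and radii and weight by the noise properties of the map, which is omitted here.

```python
import numpy as np

def circle_match(t1, t2):
    """best match statistic over relative phase for two circles of cmb temperature
    fluctuations sampled at the same number of points; values near +1 indicate a
    genuinely identified circle pair, values near 0 are expected for unrelated circles."""
    t1 = np.asarray(t1, float)
    t2 = np.asarray(t2, float)
    best = -np.inf
    for shift in range(len(t2)):
        t2s = np.roll(t2, shift)
        s = 2.0 * np.dot(t1, t2s) / (np.dot(t1, t1) + np.dot(t2s, t2s))
        best = max(best, s)
    return best
```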
the latterare observationally supported by most ( but not all ) analyses of cobe data on large scales if the universe is assumed to be simply connected , which may not be a valid assumption if the universe is multiply connected . for more discussion on this question , see section 1.2 of .is it possible to show by observations that the global universe is observable ? and that observational cosmology is something more than just extragalactic astrophysics ? for a recent summary of observational results , see table of .the analyses using different observational data sets and different methods have so far answered these questions with ` no ' on scales which are much smaller than the horizon .all the observations point to the universe being simply connected up to a scale of , i.e. about a tenth of the horizon diameter . at the scale and larger ,definitive answers have not yet been obtained . on the contrary ,some candidate 3-manifolds consistent with several observational data sets have been suggested by , and .specific testing of these large 3-manifolds might show that one of these makes correct predictions of three - dimensional positions ( celestial coordinates and redshift ) of multiple topological images of previously known objects .strong hopes are put in the map satellite which should be launched in the next year or two to map the cmb , but the better resolution and the ability to measure polarisation information in the cmb by the planck surveyor might be needed to extract the signal from the noise .theory may be slow in catching up .quantum cosmology studies regarding the evolution of topology during the quantum epoch are nowhere near making predictions for the present - day topology of space , though some interesting work has begun ( e.g. ) .not only would a significant measurement of cosmic topology show that the universe is spatially finite in at least one direction , but it would add topological ` lensing ' to the tool presently finding a great variety of useful applications : gravitational lensing .the two are similar in that the geometry of the universe generates multiple images in both cases .they differ in that the former uses the whole universe as a ` lens ' , does not magnify the image and generates images at ( in general ) widely differing redshifts and angles , all of which contrast with the latter . blanlil v. , roukema b. f. , 2000 , e - proceedings of the _ cosmological topology in paris 1998 _ workshop , ( obs .paris / iap , paris , 14 dec 1998 ) , + _programme.html_ ( arxiv : astro - ph/0010170 ) fagundes h. v. , 1996 , , 470 , 43 fagundes h. v. , 1998 , physletta , 238 , 235 ( arxiv : astro - ph/9704259 ) fagundes h. & gausmann e. , 1997 , physletta , 235 , 238 ( arxiv : astro - ph/9906046 ) fagundes h. & gausmann e. , 1999a , in proceedings of the xix texas symp .cosmology , cd - rom version ( arxiv : astro - ph/9811368 ) fagundes h. & gausmann e. , 1999b , arxiv : astro - ph/9906046 friedmann a. , 1924 , zeitschr.fr phys . , 21 , 326 gomero g. i. , teixeira a. f. f. , rebouas m. j. , bernui a. , 1999a , arxiv : gr - qc/9811038 gomero g. i. , rebouas m. j. , teixeira a. f. f. , 2000 , phys. lett . a , in press , ( arxiv : gr - qc/9909078 ) gomero g. i. , rebouas m. j. , teixeira a. f. f. , 1999c , arxiv : gr - qc/9911049 hoyle f. , burbidge g. , narlikar j.v . , 1993 , , 410 , 437 lachize rey m. , luminet j.p ., 1995 , phys . rep ., 254 , 136 ( arxiv : gr - qc/9605010 ) lehoucq r. , luminet j.p ., lachize rey m. 
, 1996 , , 313 , 339 ( arxiv : gr - qc/9604050 ) lehoucq r. , luminet j.p ., uzan j.ph . , 1999 , , 344 , 735 ( arxiv : astro - ph/9811107 ) lematre g. , 1958 , dans la structure et levolution de lunivers , onzime conseil de physique solvay , ed. stoops r. , ( brussels : stoops ) , p1 levin j. , scannapieco e. , de gasperis , g. , silk j. , barrow , j.d . , 1999 , arxiv : astro - ph/9807206 levin j. , scannapieco e. , silk j. , 1998 , phys.rev.d , 58 , 103516 ( arxiv : astro - ph/9802021 ) luminet j.p . , 1998 , acta cosmologica , 24 , ( arxiv : gr - qc/9804006 ) luminet j.p . , roukema b. f. , 1999 , ` topology of the universe : theory and observations ' , cargse summer school ` theoretical and observational cosmology ' , ed .lachize - rey m. , netherlands : kluwer , p117 ( arxiv : astro - ph/9901364 ) roukema b. f. , 1996 , , 283 , 1147 ( arxiv : astro - ph/9603052 ) roukema b. f. , 1999 , jaf , 59 , 12 roukema b. f. , 2000a , , 312 , 712 ( arxiv : astro - ph/9910272 ) roukema b. f. , 2000b , submitted , , _ large scale structure constraints on cosmic topology : nested crystallography ?_ roukema b. f. , 2000c , , 17 , 3951 ( arxiv : astro - ph/0007140 ) roukema b. f. , bajtlik , s. , 1999 , , 308 , 309 ( arxiv : astro - ph/9903038 ) roukema b. f. , blanlil v. , 1998 , , 15 , 2645 ( arxiv : astro - ph/9802083 ) roukema b. f. , edge a. c. , 1997 , , 292 , 105 ( arxiv : astro - ph/9706166 )
the hilbert - einstein equations are insufficient to describe the geometry of the universe , as they only constrain a local geometrical property : curvature . a global knowledge of the geometry of space , if possible , would require measurement of the topology of the universe . although the subject was discussed as early as 1900 by schwarzschild , observational attempts to measure global topology have been rare for most of this century , but have accelerated in the 1990s due to the rapidly increasing number of observations covering non - negligible fractions of the observational sphere . a brief review of basic concepts of cosmic topology and of the rapidly growing gamut of diverse and complementary observational strategies for measuring the topology of the universe is provided here .
the single most important variable in an electrophysiological whole - cell model is membrane potential , defined as the potential difference across the cell membrane .it drives both gating of ion channels and fluxes of ions through the membrane .the basis for electrophysiological single - cell modeling was first provided by the hodgkin - huxley model of the neuronal action potential ( 1952 ) . in their model ,membrane potential was defined by an ordinary differential equation in which the time - derivative of membrane potential equals the sum of all ion currents through the membrane divided by membrane capacitance .their model assumed constant ionic concentrations and dynamic concentrations were added in later models ( difrancesco and noble , 1985 ) . while tracking of concentrations made the models more realistic , doing so introduced new problems , including drift of concentrations under repeated stimuli ( see fig .[ fig : figure1 ] ) , over - determined initial conditions and an infinite number of steady states ( guan et al . , 1997 ; hund et al . , 2001 ; kneller et al . ,. long - term drift of concentrations is present in the difrancesco - noble ( guan et al . ,1997 ) and luo - rudy models ( luo and rudy , 1994 ) .using numerical simulations , hund et al .( 2001 ) showed that the luo - rudy model has a steady state and no concentration drift , when it is assumed that the externally applied stimulus current is carried by potassium ( ) ions and that this ion flux contributes to the rate of change in intracellular concentration .similarly , kneller et al .( 2002 ) demonstrated that this finding also holds true in their model .however , neither the reason for nor the origin of ion concentration drift has been addressed quantitatively . in this study, we explain the mechanism of concentration drift ( sec . [sec : s31 ] ) and examine the number of steady states present in the above models ( sec .[ sec : steady ] ) .we also investigate alternative formulations of membrane potential which preserve charge - conservation and electroneutrality .in particular , we compare what we refer to as the differential and capacitor formulations of membrane potential ( sec .[ sec : s2 ] ) , and consider issues affected by the formulation of membrane potential ( sec .[ sec : s3 ] ) . two examples ( sections [ sec : case1 ] and [ sec : case2 ] )show how membrane potential may be formulated rigorously in a computational model of the cardiac ventricular myocyte developed by winslow et al .( 1999 ; abbreviated wrj ) .our results show that the capacitor formulation provides a transparent , well - defined formulation of membrane potential in a spatially homogeneous myocyte model .following hodgkin and huxley ( 1952 ) , a spatially homogeneous single - cell model is described as a parallel rc - circuit model .given an initial value , membrane potential is then defined by the differential equation where is the sum of all outwardly directed membrane currents and is total membrane capacitance .this formulation of membrane potential is referred to as the _ differential formulation_. membrane potential can be defined by assuming that a cell is a capacitor , consisting of an ideal conductor representing the interior of the cell , a dielectric representing the ability of the cell membrane to separate charge and a second ideal conductor representing the extracellular space surrounding the cell . 
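before turning to the capacitor formulation, here is a minimal sketch of the differential formulation just stated; `currents` is a list of functions returning outward membrane currents at the present state, and an explicit euler step is used purely for illustration.

```python
def dv_dt(v, currents, c_m):
    """differential formulation: dV/dt = -(sum of outward membrane currents)/C_m."""
    return -sum(i(v) for i in currents) / c_m

def step_differential(v, currents, c_m, dt):
    """one explicit euler step; production models use stiffness-aware integrators."""
    return v + dt * dv_dt(v, currents, c_m)
```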
in this _ capacitor formulation _ , membrane potential is defined as the ratio of charge to total membrane capacitance : charge includes contribution from all charged particles in a cell .more specifically , membrane potential of a model containing a single intracellular compartment with ion species is given by ,\ ] ] where is the valence of species , is volume of myoplasm and is faraday s constant .a biophysically detailed model often describes a myocyte using more than two compartments . the capacitor formulation ( [ eq : capacitor ] ) is extended to multiple compartments by describing the cell as a network of capacitors ( fig .[ fig : intracellular]a ) , where each dielectric corresponds to an interface between two compartments .the interior of a compartment is considered to be an ideal conductor insulated from other compartments by a membrane .the membrane potentials can be expressed as functions of charges of compartments and of capacitances of interfaces . in particular , a compartment that is completely enclosed within a larger compartment influences the membrane potential of the surrounding compartment only via charge , geometry is irrelevant ( griffiths , 1989 ) .for example , figures [ fig : intracellular]a and [ fig : intracellular]b show graphical and capacitor representations of a three - compartment cell model .membrane potentials are given by , and , each depending only on net charge of the enclosed volume .membrane potential is generated by a small number of ions , the bulk ionic concentration is electroneutral ( hille , 2001 ) .we require that ( 1 ) a model is electroneutral as a whole , that is , the net charge is zero ; and that ( 2 ) the concentrations are spatially homogeneous ( i.e. well - mixed ) in each compartment . since such requirements are not necessarily satisfied in the presence of charged particles , we will show this to be the case for the model presented here . in an ideal conductor , induced charges balance electric field in such a way that potential is constant inside the conductor .hence the interior of each compartment is exactly electroneutral in the capacitor approximation and all net charge is located on the compartment boundary .this is consistent with narrow physiological range of membrane potentials , which also requires that sum ] mv limits induced charge to correspond to a monovalent ion concentration in range ] ( notation as in ( [ eq : genv ] ) ) , is non - zero , then the es has infinite charge which implies infinite membrane potential .hence , the es must have _ zero _ charge density , but typically _ non - zero _ charge .the paradox arises from infinite volume : if the charge of the es is finite , the average charge density is zero because no finite flow of ions can change the concentrations within an infinite volume eventhough charge is altered . as with all other compartments ,all net charge is located on the boundary of the es , and the remainder of the compartment is exactly electroneutral . in conclusion ,the capacitor approximation of the cell is consistent with the requirement of electroneutrality and the assumption that ions are homogeneously distributed in a compartment .next , we will study the exact connection between the two formulations of membrane potential . 
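as a concrete reading of the capacitor formulation for a single intracellular compartment, the sketch below computes the net compartment charge from ion concentrations and valences and divides by total membrane capacitance; the multi-compartment case sums the net charges of all enclosed compartments as described above. function names and units are our own conventions.

```python
import numpy as np

F = 96485.33            # faraday constant, c/mol

def net_charge(concentrations, valences, volume):
    """net charge (coulomb) of a well-mixed compartment: q = F * V * sum_m z_m [S_m],
    with concentrations in mol/l and volume in litres."""
    return F * volume * float(np.dot(valences, concentrations))

def membrane_potential(concentrations, valences, volume, c_m):
    """capacitor formulation for one intracellular compartment: v = q / c_m (volt)."""
    return net_charge(concentrations, valences, volume) / c_m
```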
in the differential formulation, membrane potential is defined as an independent variable and does not depend on ion concentrations , whereas in the capacitor formulation , membrane potential is a function of concentrations .hence , in the differential formulation initial conditions are over - determined , since initial conditions are assigned independently for interdependent variables .this issue can be resolved by introducing implicitly - defined ion concentrations in the differential formulation , as will be shown in the following .for example , assume a one intracellular compartment model with concentrations {\text{m}}}\}_{m=1}^m ] .such current are assigned to an implicitly - defined monovalent concentration {\text{i}}} ] is determined by the assumption regarding initial charge density .equation ( [ eq : siv ] ) shows the exact connection of the differential formulation ( [ eq : no5 ] ) to the capacitor formulation ( [ eq : capacitor ] ) ( in the two - compartment case , but the derivation can be generalized to any number of compartments ) .the major difference between the formulations is {\text{i}}} ] is constant and all movement of charge is captured by the currents .concentration {\text{i}}} ] and a constant anion concentration {\text{i}}} ] if it is to modify membrane potential .charge - conservation and electroneutrality require that the stimulus current originates from the es . in the differential formulation ,stimulus current has not historically been assigned to a particular ion concetration .formulating the above model in this manner yields a model comparable to the one defined above defined by { \frac{dv}{dt}}&=-{\frac{1}{c_m}}(i_k+{i_{\text{stim}}}),\\ { \frac{d{[\text{k}]_{\text{i } } } } { dt}}&=-\frac{1}{fv_{myo}}i_k , \end{aligned}\ ] ] where is current through the cell membrane .note that concentration {\text{i}}} ] ( no information on valence of species is contained in equation ( [ eq : vforsimple_diff ] ) ) , {\text{i}}}}{dt}}=\frac{1}{fv_{myo}}{i_{\text{stim}}},\ ] ] and equation ( [ eq : siv ] ) implies that {\text{i } } } -{[\text{e}]_{\text{i}}})/c_m.\ ] ] since all currents are explicitly accounted for in the concentrations , equation ( [ eq : vforke ] ) shows that the model defined by equation ( [ eq : vforsimple_diff ] ) is charge - conservative , when concentration {\text{i}}} ] monotonically and eventually makes it negative , that is , {\text{i}}} ]must be compensated for by a decrease of {\text{i } } } ] monotonically . to keep membrane potential in the physiological range , charge density {\text{m}}} ] , which is manifested as a monotonic drift of at least one of the concentrations {\text{m}}} ] .how does the drift depend on pacing rate ?since drift is relative to the charge transported by the stimulus current , higher pacing rate leads to more rapid decrease of {\text{i}}} ] , consistent with numerical simulations of hund et al .( 2001 ) .an infinite number of steady states were observed in simulations of guan et al .( 1997 ) , when initial concentrations were varied .if voltage is kept constant but concentrations are changed in the differential formulation , concentration {\text{i}}} ] remains constant during a simulation , different values of {\text{i}}} ] yield an infinite number of steady states . on the other hand , {\text{i}}} ] is non - zero , in which case the model may not have a steady state . 
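the drift mechanism discussed above can be reproduced with a deliberately crude toy cell: a single k+ conductance paced by a brief inward stimulus and integrated in the differential formulation. all parameter values below are illustrative order-of-magnitude choices, not those of any published myocyte model; the only point is that booking the stimulus charge against [k]_i removes the per-beat drift.

```python
from math import log

F, C_M, V_MYO = 96485.0, 1.5e-10, 25e-12     # c/mol, farad, litre   (illustrative)
G_K, K_O, RT_F = 1e-7, 5.4e-3, 0.0267        # siemens, mol/l, volt  (illustrative)

def pace(assign_stim_to_k, beats=200, cl=1.0, dt=1e-4, i_amp=-1e-9, stim_dur=2e-3):
    """toy cell with one k+ conductance, paced by a brief inward stimulus every
    cycle and integrated in the differential formulation; returns the change in
    [k]_i (mol/l) after `beats` beats."""
    k_i = 0.14                                   # mol/l
    v = RT_F * log(K_O / k_i)                    # start at the k+ nernst potential
    for _ in range(beats):
        t = 0.0
        while t < cl:
            i_stim = i_amp if t < stim_dur else 0.0
            i_k = G_K * (v - RT_F * log(K_O / k_i))
            v += -dt * (i_k + i_stim) / C_M
            i_conc = i_k + (i_stim if assign_stim_to_k else 0.0)
            k_i += -dt * i_conc / (F * V_MYO)
            t += dt
    return k_i - 0.14

print('drift with stimulus assigned to k+  :', pace(True))
print('drift with stimulus left unassigned :', pace(False))
```

in the unassigned case the cell loses a fixed amount of k+ per beat (the stimulus charge divided by f times the myoplasmic volume), so the drift rate scales with pacing rate, consistent with the discussion above.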
in the capacitor formulation , {\text{i}}} ] equal to extracellular concentration {\text{o}}} ] is an instant function of the state of the channel ( essentially a rea ) : if the channel is closed {\text{i } } } = 0 ] .membrane potential {\text{i } } } -{[\text{k}]_{\text{o}}})/c_m ] .since each individual channel allows non - zero flux only at the time of channel closing and opening , the total current seems to mostly be zero .however , the ensemble current through the cell membrane due to rea is {\text{o}}}n ] ; conductances of background calcium current , background current and - pump were adjusted to balance concentrations in the long term ; and , and stationary anion concentrations were added to compartments if not already present .we reformulated membrane potential using the capacitor formulation , in which membrane potential is given by /c_m,\ ] ] where charges are defined through {\text{i } } } + { [ \text{na}]_{\text{i}}}+2{[\text{ca}]_{\text{i}}^{\text{tot}}}-[s]_i),\\ & q_{\text{nsr}}=f{v_{{\text{nsr}}}}([\text{k}]_{{\text{nsr}}}+[\text{na}]_{{\text{nsr}}}+2{[\text{ca}]_{{\text{nsr}}}}-[\text{s}]_{{\text{nsr}}}),\\ & q_{\text{jsr}}=f{v_{{\text{jsr}}}}([\text{k}]_{{\text{jsr}}}+[\text{na}]_{{\text{jsr}}}+2{[\text{ca}]_{{\text{jsr}}}^{tot}}-[\text{s}]_{{\text{jsr}}}),\\ & q_{\text{ss}}=f{v_{{\text{ss}}}}([\text{k}]_{{\text{ss}}}+[\text{na}]_{{\text{ss}}}+2{[\text{ca}]_{{\text{ss}}}^{tot}}-[\text{s}]_{{\text{ss } } } ) , \end{split}\ ] ] where is volume of compartment , concentration ] and {\text{i } } } ] and {\text{i } } } ] and {\text{i } } } -1 ] phase space are identical within numerical accuracy , showing that the steady state is unique , given parameters including pacing rate . figure [ fig : start_zero ] illustrates long - term changes in the model , sampled at 1 hz .the simulation is started from a state with no concentration gradients , figure [ fig : start_zero]a shows diastolic membrane potential .figure [ fig : start_zero]b exhibits the increase of {\text{i}}} ] ) and the decrease of {\text{i } } } ] ) to their steady state values .homeostasis is approached approximately exponentially with a time constant of 90 seconds .figure [ fig : start_zero]c shows the decrease of diastolic {\text{i}}} ] emerge : first at roughly 800 seconds with small , subthreshold oscillations ; second with large amplitude oscillations at roughly 1,400 seconds .figure [ fig : start_zero]d shows the non - trivial time evolution of apd during the simulation .intracellular compartments are important for proper myocyte function . in particular , the sarcoplasmic reticulum ( sr ) stores ions . while the process of uploading of into the sr is electrogenic, it does not seem to influence sr membrane potential that is observed to be small in amplitude ( ; bers , 2001 ) .pure diffusion of ions through sr membrane can not explain , since it would balance ion species separately without consideration to .a small requires that movement of counter - ions , likely and , balances the potential difference generated by movement ( pollock et al . , 1998 ; kargacin et al . , 2001 ) .indeed , sr membrane is known to have channels gating according to ( townsend and rosenberg , 1995 ) , suggesting that does affect handling unless kept at almost zero voltage .somewhat counter - intuitively , zero requires currents driven by . to better understand the basis for and the concentrations of ions in sr , we extend the model of case study 1 to describe intracellular membrane potentials . 
in this case, the cell is described as a network of capacitors shown in fig .[ fig : intracellular]b .membrane potential of interface ( fig .[ fig : intracellular]b ) can be expressed as a function of capacitance of and charge bound to interface by given net charges ] charge is given by electroneutrality , .an ion channel senses voltage across the membrane in which the channel is located .then the flux of ions of species between compartments and is given by the goldman equation ( hille , 2001 ) {a}e^{zvf / rt}-[s]_{b}}{e^{zvf / rt}-1}.\ ] ] where is the diffusion coefficient , valence , temperature and universal gas constant .ion channels ( other than channels ) between intracellular compartments are assumed to always be open and to be selective to a single ion species . in particular , myoplasm - ss and jsr - nsr interfaces have large - conductance , permanently - open pores .each compartment contains and stationary anion concentrations . to include the effect of voltage modulating transport to nsr, we derived a model for the serca2a pump assuming michaelis - menten kinetics , that serca2a achieves equilibrium with instantaneously , and that rates and out of the intermediate michaelis - menten state are related by , where is free energy of atp breakdown .the flux through serca2a is {\text{i}}}^2/k_i^2-{[\text{ca}]_{{\text{nsr}}}}^2/k_{nsr}^2}{1+{[\text{ca}]_{\text{i}}}^2/k_i^2+{[\text{ca}]_{{\text{nsr}}}}^2/k_{nsr}^2},\ ] ] where , in which , is elementary charge , / l .the model has a realistic dependence of apd on pacing rate , as evidenced by figure [ fig : case2_v]a .intracellular membrane potentials are consistently small but non - zero ( fig .[ fig : case2_v]b ) . membrane potentials and between myoplasm and ss and nsr are essentially zero . membrane potentials and between jsr and other compartments are non - zero , roughly 1 mv , as a result of buffering by calsequestrin in jsr , the absence of which would reduce these membrane potentials to essentially zero .external membrane potential of ss ( ) tracks closely external membrane potential of myoplasm , .when diffusion rates between myoplasm and sr are reduced , intracellular membrane potentials are increased ( fig .[ fig : case2_v]c ) .the maximal nsr membrane potential that serca can overcome by employing energy of atp hydrolysis ( 56 kj / mol ; bers , 2001 ) is 237 mv . 
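the constant-field (goldman-hodgkin-katz) flux used above for the passive inter-compartment channels can be written as in the sketch below; the normalisation of the permeability factor here may differ from the diffusion coefficient in the equation above, and the small-voltage branch simply avoids the 0/0 limit, where the expression reduces to fickian diffusion.

```python
from math import exp

R, T, F = 8.314, 310.0, 96485.0      # j/(mol k), kelvin, c/mol

def ghk_flux(conc_a, conc_b, z, v, p):
    """flux of an ion of valence z from compartment a to b across an interface at
    potential v (volt, a relative to b), with permeability p; for v -> 0 this
    reduces to simple fickian diffusion p*(conc_a - conc_b)."""
    u = z * v * F / (R * T)
    if abs(u) < 1e-9:
        return p * (conc_a - conc_b)
    return p * u * (conc_a * exp(u) - conc_b) / (exp(u) - 1.0)
```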
without the movement of counter - ions , / l would be the maximal increase of {{\text{nsr}}}} ] , while the measured concentration difference is roughly 0.7 mmol / l ( bers , 2001 ) .since cells are sensitive to any disruption in handling , even a minor sr membrane potential has functional consequences .case study 2 demonstrates that ( 1 ) intracellular membrane potentials can be incorporated in a cell model , and that ( 2 ) they give physical constraints on intracellular concentrations and require delicate balance of charges ; ( 3 ) the magnitude of sr membrane potential can be explained by movement of counter - ions ; and ( 4 ) buffering of affects jsr membrane potential .the capacitor formulation expresses membrane potential as a function of charge and capacitance , whereas the differential formulation uncouples membrane potential from concentrations .the main differences between the two formulations are an integration constant ( the differential formulation is time derivative of the capacitor equation ) , and `` independence '' of membrane potential from concentrations in the differential formulation .these two issues imply the presence of a dynamic , implicitly - defined ion concentration(s ) in the differential formulation ( sec . [sec : s31 ] ) , which makes interpreting simulation results difficult and prone to errors , in addition to the presence of spurious drift of concentrations and issues with steady state .the differential formulation is equivalent to a particular capacitor formulation , which , however , may not be the one intended due to the presence of implicit concentrations .the capacitor formulation requires `` almost '' electroneutrality and carefully chosen initial conditions due to sensitivity of membrane potential to net charge .however , these are physical constraints , since even a small additional charge can have a drastic effect on membrane potential .the capacitor formulation provides a transparent formulation of membrane potential , and requires no implicitly - defined concentrations .the differential formulation is best viewed as a shorthand notation for the more complete and better - defined capacitor formulation .guan et al .( 1997 ) show that the difrancesco - nobel model has infinite number of steady states , when initial concentrations are varied . to address this issue, they suggest that concentrations should be treated as parameters .however , they neglected the implied concentration in their study , inclusion of which resolves the issue .consistent with our explanation ( sec .[ sec : s31 ] ) , kneller et al .( 2002 ) observed that if the sum of concentration changes in initial conditions is zero , the steady state stays the same .the sinoatrial node model by endresen et al . ( 2000 ) uses the capacitor equation for membrane potential .however , the reasoning behind their definition is different from ours .the membrane potential of a simplified model described in sec .[ sec : s31 ] is given by equation {\text{i } } } -{[\text{b}]_{\text{i}}})/c_m ] represents intracellular concentration of an anion concentration required for electroneutrality . in endresenet al . ( 2000 ) , {\text{i}}}/c_m ] is sufficient to remove concentration drift in the luo - rudy model ( luo and rudy , 1994 ) .they interpret the drift as a consequence of `` non - charge - conservative '' formulation of the model .however , we proved that a `` non - charge - conservative '' formulation is actually charge - conservative ( sec . 
[sec : s21 ] ) , when the implicit concentrations is taken into account , even in the presence of spurious concentration drift ( sec .[ sec : s31 ] ) .in particular , this means that the equations defining the differential formulation require that the system is always charge - conservative . on the other hand , a model can be `` non - charge - conservation '' , which , however , does not necessarily indicate the presence of concentration drift .hence , explanation of concentration drift requires study of charge densities implied by , but not explicitly included in , the differential formulation ( sec . [ sec : s31 ] ) .hund et al . ( 2001 ) further state that the differential and capacitor formulations are always equivalent .however , that is true only if the currents present in the model capture all movement of charge , which is not necessarily the case in , e.g. , rapid equilibrium approximation ( sec .[ sec : rea ] ) . in this article, we showed that the capacitor formulation provides a physically consistent , well - defined formulation of membrane potential , and it avoids the problems all too often found in the differential formulation . in conclusion, we see little reason to use the differential formulation as the definition of membrane potential in a spatially homogeneous myocyte model .the cell models are available at website of the center for cardiovascular bioinformatics and modeling ( http://www.ccbm.jhu.edu/ ) . at wishes to thank dr .reza mazhari , dr .sonia cortassa and tabish almas for helpful discussions .this study was supported by nih ( ro1 hl60133 , ro1 hl61711 , p50 hl52307 ) , the falk medical trust , the whitaker foundation and ibm corporation .the work of et was funded by the national research council and academy of finland .: : bers , d.m . , 2001 .excitation - contraction coupling and cardiac contractile force , second edition .kluwer academic publishers . dordrecht .: : difrancesco , d. , noble , d. , 1985 .a model of cardiac electrical activity incorporating ionic pumps and concentration changes .307:353 - 398 .: : endresen , l.p . , hall , k. , hye , j.s . ,myrheim , j. , 2000 . a theory for the membrane potential of living cells .j. 29:90103 .: : griffiths , dj . , 1989 .introduction to electrodynamics , second edition .prentice - hall , englewood cliffs nj .: : guan , s. , lu , q. , huang , k. 1997 . a discussion about the difrancesco - noble model .biol . 189(1):27 - 32 .: : hille , b. , 2001 .ion channels of excitable membranes , third edition .sinauer , sunderland ma .: : hinch , r. , greenstein , j.l . ,tanskanen , a.j . , xu , l. , winslow , r.l . , 2004 . a simplified local control model of calcium induced calcium release in cardiac ventricular myocytes .j. 87(6):3723 - 36 .: : hodgkin , a.l . , huxley , a.f . , 1952 .a quantitative description of membrane current and its application to conduction and excitation in nerve . j. physiol .( lond . ) 117 , 500544 .: : hund , t.j . , kucera , j.p . ,otani , n.f . , rudy , y. , 2001 .ionic charge concervation and long - term steady state in luo - rudy dynamic cell model .j. 81 , 33243331 .: : kargacin , g.j . , ali , z. , zhang , s.j . ,pollock , n.s . ,kargacin , m.e . , 2001 .iodide and bromide inhibit uptake by cardiac sarcoplasmic reticulum .j. physiol .heart circ .280(4):h1624 - 34 : : kneller , j. , ramirez , r.j . , chartier , d. , courtemanche , m. , nattel , s. , 2002 .time - dependent transients in an ionically based mathematical model of the canine atrial action potential . am .j. 
physiol .heart circ .282 : h1437h1451 . : : luo , c.h ., rudy , y. , 1994 . a dynamic model of the cardiac ventricular action potential .i. simulations of ionic currents and concentration changes .: : mazhari , r. , greenstein , j.l . ,winslow , r.l . ,marban , e. , nuss , h.b . , 2001 .molecular interactions between two long - qt syndrome gene products , herg and kcne2 , rationalized by in vitro and in silico analysis .: : pollock , n.s . ,kargacin , m.e . ,kargacin , g.j . , 1998. chloride channel blockers inhibit uptake by smooth muscle sarcoplasmic reticulum .j. 75:17591766 .: : townsend , c. , rosenberg , r.l . , 1995 .characterization of a chloride channel reconstituted from cardiac sarcoplasmic reticulum .j. membr .biol . 147(2):121 - 36 .: : varghese , a. , sell , ar ., 1997 . a conservation principle and its effect on the formulation of na - ca exchanger current in the cardiac cells .j. theor .189 , 3340 .: : winslow , r.l . , rice , j. , jafri , s. , marban , e. , orourke b. , 1999 . mechanisms of altered excitation - contraction coupling in canine tachycardia - induced heart failure , ii : model studies . circ. res . 84(5):571 - 86 .concentration in case study 1 model , the model has a limit cycle ( `` steady state '' ; grey cycle ) in {\text{i } } } , { [ \text{na}]_{\text{i}}}) ] phase space in the case study 1 model .sodium concentration ( ordinate ; mmol / l ) is plotted against membrane potential ( abscissa ; mv ) ; ( d ) action potentials simulated using the differential ( solid grey line ) and capacitor formulations ( dashed dark grey line ) and ( e ) the potential difference between the formulations in greenstein et al .( 2005 , submitted to biophys .j. ) model.,width=415 ]
membrane potential in a mathematical model of a cardiac myocyte can be formulated in different ways . assuming a spatially homogeneous myocyte that is strictly charge - conservative and electroneutral as a whole , two methods are compared : ( 1 ) the differential formulation of membrane potential used traditionally ; and ( 2 ) the capacitor formulation , where membrane potential is defined algebraically by the capacitor equation . we examine the relationship between the formulations and the assumptions under which each formulation is consistent , and show that the capacitor formulation provides a transparent , physically realistic formulation of membrane potential , whereas use of the differential formulation may introduce unintended and undesirable behavior , such as monotonic drift of concentrations . we prove that the drift of concentrations in the differential formulation arises as a compensation for the failure to assign all currents to concentration changes . as an example of these considerations , we present an electroneutral , explicitly charge - conservative formulation of the winslow et al . ( 1999 ) model , and extend it to describe membrane potentials between intracellular compartments . + , * e.i . tanskanen * , * j.l . greenstein * and * r.l . winslow *
information , not so long ago , used to always mean knowledge _ about something_. even today , under layers of abstraction , that s still the usual meaning .sure , an agent can be informed of a string of bits ( via some signal ) without knowing what the bits refer to , but at minimum the agent has been informed about the physical signal itself . quantum theory , however , has led many to question this once - obvious connection between knowlege / information and an underlying reality . not only is our information about a quantum system indistinguishable from our best physical description , but we have failed to come up with a realistic account of what might be going on _ independent _ of our knowledge .this blurring between information and reality has led to a confusion as to which is more fundamental .the remarkable `` it from bit '' idea that _ information _ is more fundamental than reality is motivated by standard quantum theory , but this is a bit suspicious .after all , there s a long `` instrumentalist '' tradition of only using what we can measure to describe quantum entities , rejecting outright any story of what might be happening when we re not looking . using a theory that only comprises our knowledge of measurement outcomes to justify knowledge as fundamental is almost like wearing rose - tinted glasses to justify that the world is tinted red .but any such argument quickly runs into the counterargument : then answer the question : what _ is _ the ( objective ) reality that our information of quantum systems is actually _ about _ ? " without an answer to this question ( that differs from our original information ) , it from bit " proponents can perhaps claim to win the argument by default .the only proper rebuttal is to demonstrate that there is some plausible underlying reality , after all .this is generally thought to be an impossible task , having been ruled out by various `` no - go '' theorems .but such theorems are only as solid as their premises , and they all presume a particular sort of independence between the past and the future. 
this presumption may be valid in a universe that uses dynamical laws to evolve some initial state into future states , but if `` the universe is not a computer '' , there is a natural alternative to this dynamic viewpoint .as argued in last year s essay , instead of the universe solving itself one time - slice at a time , it s possible that it only looks coherent when solved `` all - at - once '' .this essay aims to demonstrate how this all - at - once perspective naturally recasts our supposedly - complete information about quantum systems into _ incomplete _ information about an underlying , spacetime - based reality .after some motivation in the next section , a simple model will demonstrate how the all - at - once perspective works for purely spatial systems ( without time ) .then , applying the same perspective to spacetime systems will reveal a framework that can plausibly serve as a realistic explanation for quantum phenomena .the result of this analysis will be to dramatically weaken the it from bit " idea , showing that it s possible to have an underlying reality , even in the case of quantum theory .we may still choose to reject this option , but the mere fact that it is on the table might encourage us _ not _ to redefine information as fundamental especially as it becomes clear just how poorly - informed we actually are .the case for discarding dynamics in favor of an all - at - once analysis is best made by analyzing quantum theory , but it s also possible to frame this argument using the _ other _ pillar of modern physics : einstein s theory of relativity .the relevant insight is that there is no objective way to slice up spacetime into instants , so we must not assign fundamental significance to any particular slice .figure 1 is a standard spacetime diagram ( with one dimension of space suppressed ) .if run forward in time like a movie , this diagram represents two spatial objects that begin at a common past ( c.p . ) and then move apart .but if viewed all - at - once , the figure instead shows two orange worldtubes " that intersect in the past . in relativity ,as we are about to see , it is best to analyze this picture all - at - once .the most counter - intuitive feature of special relativity is that there is no objective `` now '' .simultaneous events for one observer are not simultaneous for another .no observer is right or wrong ; `` now '' is merely subjective , not an element of reality .an illustration of this can be seen in figure 1 .observer # 1 has a now " that slices the worldtubes into two white ovals , while observer # 2 has a now " that slices the worldtubes into two black ovals .clearly , they disagree .this fact implies that any dynamical movie made from a spacetime diagram will incorporate a subjective choice of how to slice it up .one way to purge this subjectivity is to simply view a spacetime diagram as a single 4d block .after all , with no objective `` now '' , there is no objective line between the past and the future , meaning there can be no objective difference between them .such a claim is counter - intuitive , _ but this is a central lesson of relativity . _the only difference between the future and the past , in this view , is subjective : we do nt ( yet ) know any of our future .arguments such as but the future is nt real _ now _ " are no more meaningful than arguing over there is nt real right here " . 
a more reasonable fallback for the dynamicist is not to deny that spacetime _ can _ be viewed as a single 4d block , but rather to note that if dynamical equations govern the universe then _ any _ complete spacelike slice suffices to generate the rest of the block ( via dynamical equations ) .so while no one slice is special , they re all equally valid inputs from which the full universe can be recovered .taken to an extreme , this viewpoint leads to the notion that the 4d block is filled with redundant permuted copies of the same 3d slice .it also forbids a number of solutions allowed by general relativity , spacetime geometries warped to such an extent that they only make sense all - at - once .the other problem with this sliced perspective is that it all but gives up on objectivity . even if it s_ possible _ to generate the block from a single slice ( a point i ll dispute later on ) , how can one 3d slice truly generate the others if it is a subjective choice ? in figure 1 , if _ both _the white ovals and the black ovals are different complete descriptions of the same reality , it s the 4d worldtubes they generate that makes them consistent .the clearest objective reality requires a bigger picture .this point becomes even clearer when one introduces ( subjective ) uncertainty .suppose each of the worldtubes in figure 1 represent a ( temporally extended ) shoebox , each containing a single shoe .also suppose that you knew the shoes were a matched pair , but not which shoe ( r or l ) was in which box ( 1 or 2 ) .to represent your information about the two boxes after they had separated ( say , the white ovals in figure 1 ) , you might use an equal - probability mix of both possibilities : ] .but what is lost in this viewpoint is the mechanism for the updating ; if our entire description is that of the 3d white ovals , this updating process might appear nonlocal , as if some spooky influence at box 1 is influencing the reality over at box 2 .sure , we know that nothing spooky is going on in the case of shoes , but that s only because we already know there s an underlying reality of which represents ( subjective ) _information_. if the existence of an underlying reality is doubt ( as in quantum theory ) , then analysis of the 3d state can not address whether anything spooky is happening . to resolve that question, one has to look at the entire 4d structure .all at once .in the all - at - once viewpoint , after finding the left shoe in box 1 we update our local knowledge to ( updating occurs when we learn new information ) . but thinking in 4d , we also update our knowledge of the _ past _ ; we now know that that back in the c.p .this in turn implies back in the c.p , and this allows us to update our knowledge of in the present .it s the continuous link , via the past , that proves that we did not change the contents of box 2 ; it contained the right shoe all along .throw away the analysis of the 4d link , and there s no way to be sure . 
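as a minimal sketch of the bookkeeping in the shoe example, assuming only the two possible histories packed at the common past: conditioning the joint ( 4d ) description on what is found in box 1 is ordinary probability updating, and it shows that box 2 held the right shoe all along.

```python
from fractions import Fraction

# the two possible 4D histories (what was packed at the common past, and
# hence what sits in each box for the whole worldtube): (box 1, box 2)
histories = [("L", "R"), ("R", "L")]
prior = {h: Fraction(1, 2) for h in histories}   # subjective 50/50 information

# observation: box 1 is opened and found to contain the left shoe
posterior = {h: p for h, p in prior.items() if h[0] == "L"}
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}

print(posterior)   # {('L', 'R'): Fraction(1, 1)} -> box 2 held the right shoe all along
```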
before moving on , it s worth noting that this classical story can not explain all quantum correlations ; in fact , it s exactly the story ruled out by a no - go theorem .such theorems generally start from the classical premise that we can assign subjective probabilities to possible 3d realities , .states of classical information then naturally take the form ] , with the crucial caveat that the s are now micro__histories _ _ , spanning 4d instead of 3d .so long as one does not additionally impose dynamical laws , there is no theorem that one of these microhistories can not be real . still , qualitative arguments are one thing ; the analogy between the above model and the double slit experiment can only be pushed so far .and one can go _ too _ far in the no - dynamics direction : considering _ all _ histories , as in the path integral , would lead to the conclusion that the future would be almost completely uncorrelated with the past , contradicting macroscopic observations .but this approach can be made much more quantitative .the key is to only consider a large natural subset of possible histories , such that classical dynamics is usually recovered as a general guideline in the many - particle limit .better yet , for at least one model , the structure of quantum probabilities naturally emerges . and as with any deeper - level theory that purports to explain higher - level behavior , intriguing new predictions are also indicated . even if the arguments presented in this essay are not a convincing reason to discard fundamental dynamical equations , they nevertheless serve as a strong rebuttal to the `` it from bit '' proponents . whether or not one _ wants _ to give up dynamics , the point is that one _ can _ give up dynamics , in which case quantum information can plausibly be _ about something real_. instead of winning the argument by default , then , `` it from bit '' proponents now need to argue that it s _ better _ to give up reality .everyone else need simply embrace entities that fill ordinary spacetime no matter how you slice it .timpson , _ quantum information theory & the foundations of quantum mechanics _ , oxford ( 2013 ) . j.a .wheeler , information , physics , quantum : the search for links " in _ complexity , entropy and the physics of information _ , w. h. zurek ( ed . ) , addison - wesley ( 1990 ) . j. s. bell , on the einstein podolsky rosen paradox " , physics * 1 * , 195 ( 1964 ) .s. kochen and e.p .specker , the problem of hidden variables in quantum mechanics " , j. math . mech . * 17 * , 59 ( 1967 ) . m.f .pusey , j. barrett , and t. rudolph , `` on the reality of the quantum state '' , nature physics * 8 * , 475 ( 2012 ) .k. wharton , `` the universe is not a computer '' , arxiv:1211.7081 ( 2012 ) .d. bohm , a suggested interpretation of the quantum theory in terms of hidden variables " , phys .85 * , 166 ( 1952 ) .h. everett , `` relative state formulation of quantum mechanics '' , rev .* 29 * , 454 ( 1957 ) .wharton , lagrangian - only quantum theory " , arxiv:1301.7012 ( 2013 ) .evans , h. price , and k.b .wharton , new slant on the epr - bell experiment " , brit .* 64 * , 297 ( 2013 ) .feynman , _ the character of physical law _ ,bbc publications ( 1965 ) .a. shimony , _ sixty - two years of uncertainty : historical , philosophical , and physical inquiries into the foundations of quantum mechanics , _ plenum , new york , ( 1990 ) .wharton , d.j .miller , and h. 
price , action duality : a constructive principle for quantum foundations " , symmetry * 3 * , 524 ( 2011 ) .the model in figure 2 ( reproduced below ) has the following rules. each circle can be in the state heads ( h ) or tails ( t ) , and each line connects two circles .each line has one of three internal colors ; red ( r ) , green ( g ) , or blue ( b ) , but these colors are _ unobservable_. the model s only law " is that red lines must connect opposite - state circles ( or ) , while blue and green lines must connect similar - state circles ( or ) .when analyzing the state - space , the key is to remember that connecting links between same - state circles have two possible internal colors ( g or b ) , while links between opposite - state circles only have one possible color ( r ) .combined with the equal _ a priori _ probability of each complete microstate ( both links and circles ) , this means that for an isolated 2-circle system , the circles are twice as likely to be the same as they are to be different . in figure 2a , given that the bottom circle is h , there are 4 different microstates compatible with an h on the left and an h on the right .this is because there are two links , and they can each be either blue or green .( specifically , listing the states of the three circles and the two links , the 4 possible hh " microstates are hbhbh , hbhgh , hghbh , and hghgh . ) according to the fundamental postulate of statistical mechanics , an hh will be four times as likely as a tt , for which only red links are possible ( trhrt ) . the full table for figure 2a is :
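since the table announced above is not reproduced in this copy, the following sketch simply enumerates the allowed microstates of figure 2a from the stated law ( red links join opposite-state circles, green and blue links join same-state circles ) with the bottom circle fixed to H; it recovers the counting quoted in the text, with HH four times as likely as TT.

```python
from itertools import product
from collections import Counter

circles = ["H", "T"]
colors  = ["r", "g", "b"]

def allowed(c1, link, c2):
    # the model's only law: red links join opposite circles,
    # green/blue links join identical circles
    return (link == "r") == (c1 != c2)

bottom = "H"                                # given, as in figure 2a
counts = Counter()
for left, right, l1, l2 in product(circles, circles, colors, colors):
    if allowed(left, l1, bottom) and allowed(bottom, l2, right):
        counts[left + right] += 1

total = sum(counts.values())
for pair, n in sorted(counts.items()):
    print(pair, n, f"{n}/{total}")
# HH 4 4/9, HT 2 2/9, TH 2 2/9, TT 1 1/9  ->  HH is four times as likely as TT
```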
in order to reject the notion that information is always _ about something _ , the `` it from bit '' idea relies on the nonexistence of a realistic framework that might underlie quantum theory . this essay develops the case that there _ is _ a plausible underlying reality : one actual spacetime - based history , although with behavior that appears strange when analyzed dynamically ( one time - slice at a time ) . by using a simple model with _ no _ dynamical laws , it becomes evident that this behavior is actually quite natural when analyzed `` all - at - once '' ( as in classical statistical mechanics ) . the `` it from bit '' argument against a spacetime - based reality must then somehow defend the importance of dynamical laws , even as it denies a reality on which such fundamental laws could operate .
information retrieval ( ir ) systems decide about the relevance under conditions of uncertainty . as a measure of uncertainty is necessary , a probability theory defines the event space and the probability distribution .the research in probabilistic ir is based on the classical theory of probability , which describes events and probability distributions using , respectively , sets and set measures obeying the usual axioms stated in .set theory is not the unique way to define probability though .if subsets and set measures are replaced by vector subspaces and space - based measures , we obtain an alternative theory called , in this paper , _ vector probability_. although this theory stems from quantum theory , we prefer to use `` vector '' because vectors are sufficient to represent events like sets represent events within classical probability , the latter being the feature of our interest , whereas the `` quantumness '' of ir is out of the scope of this paper , which explains that the replacement of classical with vector probability is crucial to ranking . ranking is an essential task in ir .indeed , it should not come as a surprise that the probability ranking principle ( prp ) reported in is by far the most important theoretical result to date because it is an incisive factor in effectiveness .although probabilistic ir systems reach good results , ranking is far from being perfect because irrelevant documents are often ranked at the top of , or useful units are missed from the retrieved document list . besides the definition of weighting schemes and ranking algorithms , new results can be achieved if the research in ir views problems from a new theoretical perspective .we propose vector probability to describe the events and probabilities underlying an ir system .we show that ranking in accordance with vector probability is more effective than ranking in accordance with classical probability , given that the same evidence is available for probability estimation .the effectiveness is measured in terms of probability of correct decision or , equivalently , of probability of error .the result is proved mathematically and verified experimentally .although the use of the mathematical apparatus of quantum theory is pivotal in showing the superiority of vector probability ( at least in terms of retrieval effectiveness ) , this paper does not necessarily end in an investigation or assertion of quantum phenomena in ir .rather , we argue that vector probability and then quantum theory is sufficient to go beyond the state of the art of ir , thus supporting the hypothesis stated in according to which quantum theory may pave the way for a breakthrough in ir research .we organize the paper as follows .the paper gives an intuitive view of the our contribution in section [ sec : intuitive - view ] and sketches how an ir system built on the premise of quantum theory can outperform any other system .section [ sec : prob - relev ] briefly reviews the classical probability of relevance before introducing the notion of vector probability in section [ sec : vect - prob ] .section [ sec : optim - observ ] is one of the central sections of the paper because it introduces the optimal vectors which are exploited in section [ sec : vect - prob - relev-1 ] where we provide our main result , that is , the fact that a system that ranks documents according to the probability of occurrence of the optimal vectors in the documents is always superior to a system which ranks documents according to the classical 
probability of relevance which is based on sets .section [ sec : getting - beyond - state ] addresses the case of bm25 and how it can be framed within the theory .an experimental study is illustrated in section [ sec : experiments ] for measuring the degree to which vector probabilistic models outperforms classical probabilistic models if a realistic test collectio is used .the feasibility of the theory is strongly dependent on the existence of an oracle which tells whether optimal vector occur in documents ; this issue is discussed in section [ sec : regarding - oracle ] . after surveying the related work in section [ sec : related - work ] ,we conclude with section [ sec : conclusions ] .the appendix includes the definitions used in the paper and the proofs of the theoretical results .before entering into mathematics , figure [ fig : ir - decision ] depicts an intuitive view of what is illustrated in the rest of the paper .suppose that relevance and non - relevance are two events occurring with prior probability and , respectively .document a is in either relevant or non - relevant .let s view a as an emitter of binary symbols referring to presence or absence of a given index term .( we use the binary symbol and relevance for the sake of clarity . ) on the other side , an ir system b acts as a detector which has to decide whether a symbol comes out from either a relevant or a non - relevant document .the b s decision is taken on the basis of some feedback mechanism and on the relevance and non - relevance probability distributions which has been appropriately estimated on received symbols .an ir system that implements classical probability decides about relevance without any transformation of the received symbols ( figure [ ir - decision - a ] ) whereas a ir system that implements vector probability decides about relevance after a transformation of the received symbols carried out by an oracle which outputs new symbols which can not be straightforwardly derived from the received symbols ( figure [ ir - decision - b ] ) but can be defined as vectors . in this paper , we show that when b is equipped with such an oracle , then it does significantly outperform any other ir system which implements any classical probabilistic model .we theoretically measure the improvement in effectiveness on the basis of a mathematical proposition which holds for every ir system described in figure [ fig : ir - decision ] .an ir system performing like detector b of figure [ ir - decision - a ] computes the probabilities that a symbol ( e.g. , an index term ) occurs in relevant documents and in non - relevant documents . according to the intuitive view in terms of emitters and detectors provided in section [ sec : intuitive - view ] , the probability that a symbol ( e.g. 
, an index term ) occurs in relevant documents and in non - relevant documents is called , respectively , _ probability of detection _ ( ) and _ probability of false alarm _ ( ) .these probabilities are also known as expected recall and fallout , respectively .the system decides whether a document is retrieved by where is an appropriate threshold , and ranks the retrieved documents by using the left side of .for instance , when indipendent bernoulli random variables are used , we have that where are the probabilities that term occurs in relevant , non - relevant documents and the s belong to the region of acceptance .depending on the available evidence the probabilities are estimated as accurately as possible and are transformed into weights ( e.g. , the binary weight or the bm25 illustrated in ) .the probability ranking principle ( prp ) defines the optimal document subsets in terms of expected recall and fallout .thus , the optimal document subsets are those maximizing effectiveness .the prp states that , if a cut - off is defined for expected fallout , that is , probability of false alarm , we would maximize expected recall if we included in the retrieved set those documents with the highest probability of relevance , that is , probability of detection .when a collection is indexed , each document belongs to subsets labeled by the document index terms and the documents in a subset are indistinguishable .in fact , optimally ranks subsets whose documents are represented in the same way ( e.g. , the documents which are indexed by a given group of terms or share the same set of feature values ) . in terms of decision , if fallout is fixed , the prp permits to decide whether a document ( subset ) should be retrieved with the minimum probability of error .when using classical probability term occurrence would correspond to disjoint document subsets ( i.e. , a subset corresponds to an index term occurring in every document of the subset ) .when using vector probability , term occurrence corresponds to a document vector subspace which is spanned by the orthonormal vector either or representing , respectively , absence and presence of a given term .( for the sake of clarity , we consider a single term , binary weights and binary relevance as depicted in figure [ fig : ir - decision ] . ) as relevance is an event , two vectors represent binary relevance : a relevance vector represents non - relevance state and an orthogonal relevance vector represents relevance state .relevance vectors and occurrence vectors belong to a common vector space and thus can be defined in terms of a given orthonormal basis of that space . in a vector space ,a random variable is a collection of values and of vectors ( or projectors ) .the vectors are mutually _ orthonormal _ and 1:1 correspondence with the values .let be a random variable value ( e.g. , term occurrence ) and be a conditioning event ( e.g. , relevance ) .in quantum theory , is also known as state vector and is a specialization of a density operator , that is , a hermitian and unitary trace operator .the vector probability that is observed given is .when a density operator and an event is represented by projector , the vector probability of the event conditioned to the density operator is given by born s rule , that is , .when and , vector probability is a specialization of born s rule .( see . 
)it is possible to show that [ sec : vect - prob - relev ] a classical probability distribution can be equivalently expressed using vector probability .the proof is in the appendix .in this section , we reformulate the prp by replacing subsets with vector subspaces , namely , we replace the notion of optimal document subset with that of optimal vectors ( or , vector subspaces ) .such a reformulation allows us to compute the optimal vectors that are more effective than the optimal document subsets . to this end, we define a density matrix representing a probability distribution that has no counterpart in , but that is an extension of classical probability .such a density matrix is the outer product of a relevance vector by itself .when classical probability is assumed , a decision under uncertainty conditions taken upon this density matrix is equivalent to as illustrated in .( see the appendix and as for the details . )when vector probability is assumed , a decision under uncertainty conditions taken upon this density matrix is based upon a different region of acceptance . hence , we leverage the following helstrom s lemma because it is the rule to compute the optimal vectors .[ the : helstrom ] let be the relevance vectors .the optimal vectors at the highest probability of detection at every probability of false alarm is given by the eigenvectors of whose eigenvalues are positive .see .the optimal vectors always exist due to the spectral decomposition theorem ; therefore they are mutually orthogonal because are eigenvectors of ; moreover , they can be defined in the space spanned by the relevance vectors .the angle between the relevance vectors determines the geometry of the decision of the emitter of figure [ ir - decision - b ] geometry means the probability distributions of the events .therefore , the probability of correct decision and the probability of error are given by the angle between the two relevance vectors and by the angles between the vectors and the relevance vectors .figure [ fig : geometry - a ] depicts the geometry of the optimal vectors .( the figure is in the two - dimensional space for the sake of clarity , but the reader should generalize to higher dimensionality than two . )suppose are two any other vectors .the angles between the vectors and the relevance vectors are related with the angle between because the vectors are always mutually orthogonal and then the angle is .the optimal vectors are achieved when the angles between an vector and a relevance vector are equal to the rotation of the non - optimal vectors such that holds , yields the optimal vectors as figure [ fig : geometry - b ] illustrates : the optimal vectors are `` symmetrically '' located around the relevance vectors . if any two vectors are rotated in an optimal way , we can achieve the most effective document vector subspaces ( or , vectors ) in terms of expected recall and fallout . these vectors can not be ascribed to the subsets yielded by dint of the prp , the latter impossibility being called incompatibility .in this section , we leverage lemma [ the : helstrom ] to introduce the optimal vectors in ir . 
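a small numerical sketch of lemma [ the : helstrom ] : given two relevance vectors and their priors, the optimal vectors are the eigenvectors of π₁|ψ₁⟩⟨ψ₁| − π₀|ψ₀⟩⟨ψ₀|, and the eigenvector with positive eigenvalue spans the acceptance region for relevance. the specific vectors, the angle α and the priors below are illustrative choices, not values from the paper.

```python
import numpy as np

def optimal_vectors(psi0, psi1, pi0, pi1):
    """Helstrom lemma: diagonalize pi1*|psi1><psi1| - pi0*|psi0><psi0|.

    The eigenvectors are the optimal measurement vectors; the one(s) with
    positive eigenvalue form the acceptance region for relevance, and the
    optimal probability of correct decision is pi0 + (sum of positive
    eigenvalues).  Inputs here are illustrative.
    """
    gamma = pi1 * np.outer(psi1, psi1.conj()) - pi0 * np.outer(psi0, psi0.conj())
    evals, evecs = np.linalg.eigh(gamma)            # ascending eigenvalues
    q_correct = pi0 + evals[evals > 0].sum()
    return evecs, evals, q_correct

# two real relevance vectors separated by an angle alpha
alpha = 0.3
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(alpha), np.sin(alpha)])
evecs, evals, q_c = optimal_vectors(psi0, psi1, 0.5, 0.5)

print(np.round(evecs, 3))   # two orthonormal columns: the optimal vectors
print(round(q_c, 4))        # equals (1 + sqrt(1 - cos(alpha)**2)) / 2 for equal priors
```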
we define and as : note that , according to born s rule and , thus reproduce the classical probability distributions .if the oracle of figure [ ir - decision - b ] exists , an ir system performing like detector b computes the probabilities that the transformation of a binary symbol referring to an index term , into or occurs in relevant documents and in non - relevant documents .the former is called _ vector probability of detection _ ( ) and the latter is called _ vector probability of false alarm _ ( ) .these probabilities are the analogous of .but , if are the mutually exclusive symbols yielded by the oracle , we have that the latter expression is _ not _ the same probability distribution as because it refers to different events . according to , we have that the _ vector probability of error _ and the _ vector probability of correct decision _ are defined , respectively , as both probabilities depend on which is a measure of the distance between two probability distributions as proved in . as the probability distributions refer to relevance and non - relevance , is a measure of the distance between relevance and non - relevance .an example may be useful .suppose , for example , that the probability distributions are , .if relevance and non - relevance are equiprobable , and .when , the optimal vectors are the eigenvectors of that is , these vectors can be computed in compliance with .hence , , which is less than . the following theorem that is our main result shows that the latter example is not an exception . [ sec : optim - prob - rank-1 ] if are two arbitrary probability distributions conditioned to , the latter indicating the probability distribution of term occurrence in non - relevant documents and in relevant documents , respectively , then see section [ sec : proof - prop - refs ] .hence , if we were able to find the optimal vectors , retrieval performance of the detector b of figure [ fig : geometry - b ] would _ always _ be higher than retrieval performance of the detector b of figure [ fig : geometry - a ] .the development of the theory assumed binary weights for the sake of clarity . in the event of non - binary weights ,e.g. , bm25 , we slightly fit the theory as follows .if we do the development of the bm25 illustrated in in reverse , we can find that the underlying probability distribution is thus is the probability that for term is observed under state .( is just a normalization factor . )if estimates , the theory still works because theorem [ sec : optim - prob - rank-1 ] is independent of the estimation of .figure [ fig : ir - decision - c ] depicts an ir system equipped with an oracle based on vector probability estimated with bm25 .otherwise . the system b trains the probability distributions using the saturation values .rather that applying logarithms and computing the bm25 weights , b invokes the oracle that produces the vectors . 
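complementing the worked example above ( whose numerical values are elided in this copy ), the sketch below compares the classical bayes error with the error of the vector-probability detector when the relevance vectors have components √P(x|j), so that their overlap is the bhattacharyya coefficient of the two distributions; the binary occurrence probabilities used are illustrative, and for the asymmetric distributions chosen here the vector-probability error comes out smaller, in line with theorem [ sec : optim - prob - rank-1 ].

```python
import numpy as np

def classical_error(p0, p1, pi0=0.5, pi1=0.5):
    # Bayes (minimum) error of the set-based detector: for each symbol x,
    # decide the hypothesis with the larger posterior
    return sum(min(pi0 * a, pi1 * b) for a, b in zip(p0, p1))

def helstrom_error(p0, p1, pi0=0.5, pi1=0.5):
    # vector-probability detector: relevance vectors with components
    # sqrt(P(x|j)) (the sign convention sketched in the appendix), so the
    # overlap is the Bhattacharyya coefficient of the two distributions
    overlap = sum(np.sqrt(a * b) for a, b in zip(p0, p1))
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * pi0 * pi1 * overlap ** 2))

# binary term occurrence, illustrative values (not from the paper):
# p_j = [P(x=0 | j), P(x=1 | j)] for j = non-relevant (0) and relevant (1)
p0, p1 = [0.9, 0.1], [0.4, 0.6]
print(classical_error(p0, p1))   # 0.25
print(helstrom_error(p0, p1))    # ~0.233 < 0.25
```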
]in this section we report an experimental study .our study differs from the common studies conducted in usual ir evaluation because : ( 1 ) theorem [ sec : optim - prob - rank-1 ] already proves that an ir system working as the detector of figure [ fig : geometry - b ] will always be more effective than any other system , therefore , if the former were available , every test would confirm the theorem ; ( 2 ) as an experimentation that compare two systems using , say , mean average precision requires the implementation of the oracle , which can not be at present implemented , what we can measure is only the degree to which an ir system working as the detector of figure [ fig : geometry - b ] will outperform any other system .we have tested the theory illustrated in the previous sections through experiments based on the tipster test collection , disks 4 and 5 .the experiments aimed at measuring the difference between and by means of a realistic test collection . to this end , we have used the trec-6 , 7 , 8 topic sets .the queries are topic titles .we have implemented the following test : has been computed for each topic , query word and by means of the usual relative frequency of the word within relevant ( ) or non - relevant ( ) documents . in particular , means presence , means absence .thus , is the estimated probability of occurrence in non - relevant documents and is the estimated probability of occurrence in relevant documents .we have shown in section [ sec : getting - beyond - state ] that the improvement is independent of probability estimation and then of term weighting .consider word ` crime ` of topic no .301 ; we have that and .hence , .( relevance and non - relevance probability distributions are very close to each other . )estimation has taken advantage of the availability of the relevance assessments and thus it has been computed on the basis of the explicit assessments made for each topic .figure [ fig : t301-crime ] depicts as function of the prior probability . is always greater than for every prior probability .the vertical distance between the curves is due to the value of , which also yields the shape of the curve , meaning that ` crime ` discriminates between relevant and non - relevant documents to an extent depending on and .the average curves computed over all the query words and depicted in figure [ fig : t301 ] give an idea of the overall discriminative power of the topic .in particular , if the total frequencies within relevant and non - relevant documents are computed for each query word and a given topic , average probability of error is computed , for each prior probability . when is close to , the curves are indistinguishable because is very close to .the situation radically changes when topic no .344 is considered because ; indeed , figure [ fig : t344 ] confirms that when .and plotted against for word ` crime ` of topic .,width=566 ] and plotted against for topic .,width=566 ] and plotted against for topic .,width=566 ] we have also investigated the event that explicit relevance assessment can not be used because of the lack of reliable judgements for a suitable number of documents . in this event , it is customary to state that and .although pseudo - relevance data is assumed , and can still be computed as function of because the latter are still valid estimations . 
in particular , we have that thus , we can analyze the probabilities of error as functions of and .figure [ fig : noqrel ] depicts how change with and ; this plot does not depend on a topic , it rather depends on , which is a measure of discrimination power of a query word since .the plot confirms the intuition that increases when increases , that is , when idf decreases .in particular , are close to each other when little information about the proportion of relevant documents is available ( i.e. , ) and the idf is not large enough to make a term discriminative .nevertheless , if some information about the proportion of relevant documents is available ( i.e. , approaches either or ) , becomes much smaller than even when the idf is small ( see the bottom - right side of the plot of figure [ fig : noqrel ] ) .table [ tab : avrelfreq - topic ] reports the average relative topic word frequency for each topic computed over the query words .the relative frequency gives a measure of query difficulty and can be used to `` access '' to the plot of figure [ fig : noqrel ] to have an idea of when using a classical probabilistic ir model and of the improvement that can be achieved through an oracle which can produce the optimal vectors on the basis of the same available evidence as that used to estimate the s .this section explains why the design of the oracle is difficult . to this end, the section refers to some results of logic in ir reported in some detail in .when binary term occurrence is considered , there are two mutually exclusive events , i.e. , either presence ( ) or absence ( ) .the classical probability used in ir is based on neyman - pearson s lemma which states that the set of term occurrences can be partitioned into two disjoint regions : one region includes all the frequencies such that relevance will be accepted ; the other region denotes rejection . if a term is observed from documents and only presence / absenceis observed , the possible regions of acceptance are . when using vectors , the regions of acceptance are which are the projectors to , respectively , the null subspace , the subspace spanned by , the subspace spanned by , and the entire space .consider the symbols emitted by the oracle .vector probability is based on lemma [ the : helstrom ] which states that the set of symbols can be partitioned into two disjoint regions : one region includes all the symbols such that relevance will be accepted ; the other region denotes rejection . if a symbol is observed from the oracle , the regions of acceptance are . and yields the vertical plane , which is spanned by and too .if is intersected by the plane , the result is .but , if is intersected by and then by , the span of the two intersections is .that is , the distributive law is not admitted by vectors . ]the problem is that the subspaces spanned by can not be defined in terms of _ set _ operations on the subspaces spanned by .vector subspaces are equivalent to subsets and they then can be subject to set operations _ if _ they are mutually orthogonal . to explain the incompatibility between sets and vectors , we illustrate the fact that the distributive law can not be admitted in vector spaces as it is in set spaces .figure [ fig : subspaces ] shows a three - dimensional vector space spanned by .the ray ( i.e. one - dimensional subspace ) is spanned by , the plane ( i.e. two - dimensional subspace ) is spanned by .note that and so on . 
according to ,consider the vector subspace provided that means `` intersection '' and means `` span '' ( and not set union ) . since , .however , because and , therefore thus meaning that the distributive law does not hold , hence , set operations can not be applied to vector subspaces .incompatibility , that is , the invalidity of the distributive law , is due to the obliquity of the vectors .thus , the optimal vectors can not be defined in terms of occurrence vectors due to obliquity and a new `` logic '' must be searched .the precedent example points out the issue of the measurement of the optimal vectors .measurement means the actual finding of the presence / absence of the optimal vectors via an instrument or device .the measurement of term occurrence is straightforward because term occurrence is a physical property measured through an instrument or device .( a program that reads texts and writes frequencies is sufficient . )the measurement of the optimal vectors is much more difficult because to our knowledge any physical property does not correspond to an optimal vector . despite the difficulty of measuring optimal vectors ,one of the main advantages of vector probability and the results reported in this paper is that the effort to design the oracle for any other medium than text is comparable to the effort to design the oracle for text because the limits to observability of the features corresponding to the optimal vectors are actually those undergone when the informative content of images , video and music must be represented .thus , the question is : what should we observe from a document so that the outcome corresponds to the optimal vector ?the question is not futile because the answer(s ) would effect automatic indexing and retrieval .van rijsbergen s book is the point of departure of our work .it introduced a formalism based on the hilbert spaces for representing the ir models within a uniform framework .as the hilbert spaces have been used for formalizing quantum theory , the book has also suggested the hypothesis that quantum phenomena have their analogues in ir . in this paper , we are not much interested in investigating whether quantum phenomena have their analogues in ir , in contrast , we use hilbert vector spaces for describing probabilistic ir models and for defining more powerful retrieval functions .the latter use of vector spaces in our paper hinges on helstrom s book , which provides the theoretical foundation for the vector probability and the optimal vectors .in particular , it deals with optical communication and the detectability of optical signals and the improvement of the radio frequency - based methods with which their parameters can be estimated . within this domain, helstrom provides the foundations of quantum theory for deciding among alternative probability distributions ( e.g. relevance _ versus _ non - relevance , in this paper ) . 
in this paper, we point to a parallel between signal detection and relevance detection by corresponding the need to ferret weak signals out of random background noise to the need to ferret relevance out of term occurrence .thus , in this paper , quantum theory plays the role of enlarging the horizon of the possible probability distributions from the classical mixtures used to define classical distributions to quantum superpositions , although decision under conditions of uncertainty can still be treated by the theory of statistical decisions developed by , for example , and used in ir too .eldar and forney s paper gives an algorithm for computing the optimal vectors and obtains a new characterization of optimal measurement , and prove that it is optimal in a least - squares sense . is the distance between densities defined in and is implemented as the squared cosine of the angle between the subspaces corresponding to the relevance vectors .the justification of viewing as a distance comes from the fact that `` the angle in a hilbert space is the only measure between subspaces , up to a constant factor , which is invariant under all unitary transformations , that is , under all possible time evolutions . '' the latter is the justification given in of the use of born s rule for computing what we call vector probability .hughes book is an excellent introduction to quantum theory .in particular , it addresses incompatibility between observables we have used that explanation to illustrate the difficulty in implementing the oracle of figure [ ir - decision - b ] . however , in , there is no mention of optimal vectors .an introduction to quantum phenomena ( i.e. , interference , superposition , and entanglement ) and information retrieval can be found in .in contrast , we do not address quantum phenomena because our aims is to leverage vector space properties in conjunction with probability . in the ir literature, quantum theory is receiving more and more interest . in the authorspropose quantum formalism for modeling some ir tasks and information need aspects .in contrast , our paper does not limit the research to the application of an abstract formalism , but exploits the formalism to illustrate how the optimal vectors significantly improve effectiveness . in , the authors propose for modifying probability of relevance ; in conjunction with a cosine of the angle of a complex number are intended to model quantum correlation ( also known as interference ) between relevance assessments .the implementation of interference is left to the experimenter and that paper provides some suggestions .while shows that vector probability induces a different prp ( called quantum prp ) , this paper shows that vector probability always induces a more powerful ranking than prp .the research in ir has been traditionally concentrated on extracting and combining evidence as accurately as possible in the belief that the observed features ( e.g. , term occurrence , word frequency ) have to ultimately be scalars or structured objects .the quest for reliable , effective , efficient retrieval algorithms requires to implement the set of features as best one can .the implementation of a set of features is thus an `` answer '' to an implicit `` question '' , that is , which is the best _ set _ of features for achieving effectiveness as high as possible ?however , the research in ir often yields incremental results , thus arising the need to achieve an even better answer . 
to this end, we suggest to ask another `` question '' : which is the best _ vector subspace _ ?b. piwowarski , i. frommholz , m. lalmas , and k. van rijsbergen .what can quantum theory bring to information retrieval . in _ proceedings of the 19th acm international conference on information and knowledge management _ , cikm 10 , pages 5968 , new york , ny , usa , 2010 .acm .g. zuccon and l. azzopardi . using the quantum probability ranking principle to rank interdependent documents . in _ proceedings of the european conference on information retrieval research ( ecir ) _ ,pages 357369 , 2010 .the subsets of values can be defined by means of the set operations ( i.e. , intersection , union , complement ) .thus , one can compute , for instance , the set of relevant documents with a given term frequency .it is the set of the observable values that induce the system to decide for relevance .the most powerful region of acceptance yields the maximum probability of detection for a fixed probability of false alarm . of course , . in the following ,we adopt the dirac notation to write vectors so that the reader may refer to the literature on quantum theory ; a brief illustration of the dirac notation is in .a vector space over a field is a set of vectors subject to linearity , namely , a set such that , for every vector , there are three scalars and three vectors of the same space such that and . if is a vector , is its transpose , is the _ inner product _ with and is the _ outer product _ with .a projector is a linear operator acting on a vector space such that for every . in particular, is the _ projector _ to the subspace spanned by . if , the vector is normal .if , the vectors are mutually orthogonal .a subspace is a _ span _ of one or more subspaces if its projector is a linear combination of the projectors of the latter ; for example , a ray is a span of a vector , a plane is a span of two rays ( or vectors ) , and so on .suppose that is the probability that frequency is observed given a parameter corresponding to relevance .note that may refer to more than one parameter .however , we assume that is scalar for the sake of clarity . in the event of binary relevance , is either ( non - relevance ) or ( relevance ) .the expressions establish the relationship between classical probability distributions and vector probability , namely , between the parameters , the relevance vectors and the observable .the sign of is chosen so that the orthogonality between the relevance vectors is retained . moreover , the orthogonality of the relevance vectors and the following expression establish the relationship between classical and vector probability of relevance . consider figures [ fig : geometry - a ] and [ fig : geometry - b ] .a probability of detection and a probability of false alarm defines the coordinates of and with a given orthonormal basis ( that is , an observable ) : the coordinates are expressed in terms of angles : provided that is the angle between and .
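to make the incompatibility discussed above concrete, here is a small sketch with projectors: in the real plane, an oblique ray C and the two coordinate rays A and B satisfy C ∧ (A ∨ B) = C but (C ∧ A) ∨ (C ∧ B) = {0}, so the distributive law of set algebra fails for subspaces. the helper functions `projector`, `join` and `meet` are illustrative implementations, not part of the paper.

```python
import numpy as np

def projector(*vectors):
    """Orthogonal projector onto the span of the given vectors."""
    A = np.column_stack(vectors)
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def join(P, Q_):
    """Projector onto the span of two subspaces."""
    M = np.column_stack([P, Q_])
    U, s, _ = np.linalg.svd(M)
    r = int(np.sum(s > 1e-10))
    return U[:, :r] @ U[:, :r].T

def meet(P, Q_):
    """Projector onto the intersection: eigenvectors of P+Q with eigenvalue 2."""
    w, V = np.linalg.eigh(P + Q_)
    keep = V[:, np.isclose(w, 2.0)]
    return keep @ keep.T if keep.size else np.zeros_like(P)

# two orthogonal 'occurrence' rays and one oblique ray in the same plane
A = projector(np.array([1.0, 0.0]))
B = projector(np.array([0.0, 1.0]))
C = projector(np.array([1.0, 1.0]) / np.sqrt(2))

lhs = meet(C, join(A, B))              # C meet (A join B) = C (the whole ray)
rhs = join(meet(C, A), meet(C, B))     # (C meet A) join (C meet B) = {0}
print(np.round(lhs, 3))
print(np.round(rhs, 3))                # differs from lhs: the distributive law fails
```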
according to the probability ranking principle , the document set with the highest values of probability of relevance optimizes information retrieval effectiveness , given that the probabilities are estimated as accurately as possible . the key point of this principle is the separation of the document set into two subsets with a given level of fallout and with the highest recall . if subsets and set measures are replaced by subspaces and subspace measures , we obtain an alternative theory stemming from quantum theory . that theory is called vector probability because vectors represent events as sets do in classical probability . the paper shows that the separation into vector subspaces is more effective than the separation into subsets given the same available evidence . the result is proved mathematically and verified experimentally . in general , the paper suggests that quantum theory is not only a source of rhetorical inspiration , but is also sufficient to improve retrieval effectiveness in a principled way .
quantum simulation of physical systems on a qc has acquired importance during the last years since it is believed that qcs can simulate quantum physics problems more efficiently than their classical analogues : the number of operations needed for deterministically solving a quantum many - body problem on a classical computer ( cc ) increases exponentially with the number of degrees of freedom of the system . in quantum mechanics ,each physical system has associated a language of operators and an algebra realizing this language , and can be considered as a possible model of quantum computation . as we discussed in a previous paper , the existence of one - to - one mappings between different languages ( e.g. , the jordan - wigner transformation that maps fermionic operators onto spin-1/2 operators ) and between quantum states of different hilbert spaces , allows the quantum simulation of one physical system by any other one .for example , a liquid nuclear magnetic resonance qc ( nmr ) can simulate a system of atoms ( hard - core bosons ) because an isomorphic mapping between both algebras of observables exists .the existence of mappings between operators allows us to construct quantum network models from sets of elementary gates , to which we map the operators of our physical system .an important remark is that these mappings can be performed efficiently : we need a number of steps that scales polynomially with the system size .however , this fact alone is not sufficient to establish that any quantum problem can be solved efficiently .one needs to show that all steps involved in the simulation ( i.e. , preparation of the initial state , evolution , measurement , and measurement control ) can be performed with polynomial complexity .for example , the number of different eigenvalues in the two - dimensional hubbard model scales exponentially with the system size , so qc algorithms for obtaining its energy spectrum will also require a number of operations that scales exponentially with the system size .typically , the degrees of freedom of the physical system over which we have quantum control constitute the model of computation . in this paper, we consider the simulation of any physical system by the standard model of quantum computation ( spin-1/2 system ) , since this might be the language needed for the practical implementation of the quantum algorithms ( e.g. , nmr ) .therefore , the complexity of the quantum algorithms is analyzed from the point of view of the number of resources ( elementary gates ) needed for their implementation in the language of the standard model . had another model of computation being used , one should follow the same qualitative steps although the mappings and network structure would be different .the main purpose of this work is to show how to simulate any physical process and system using the least possible number of resources .we organized the paper in the following way : in section [ section2 ] we describe the standard model of quantum computation ( spin-1/2 system ). section [ section3 ] shows the mappings between physical systems governed by a generalized pauli s exclusion principle ( fermions , etc . ) and the standard model , giving examples of algorithms for the first two steps ( preparation of the initial state and evolution ) of the quantum simulation . in section [ section4 ]we develop similar steps for the simulation of quantum systems whose language has an infinite - dimensional representation , thus , there is no exclusion principle ( e.g. 
, canonical bosons ) . in section [ section5 ]we explain the measurement process used to extract information of some relevant and generic physical properties , such as correlation functions and energy spectra .we conclude with a discussion about efficiency and quantum errors ( section [ section6 ] ) , and a summary about the general statements ( section [ section7 ] ) .in the standard model of quantum computation , the fundamental unit is the _ qubit _ , represented by a two level quantum system . for a spin-1/2 particle , for example ,the two `` levels '' are the two different orientations of the spin , and . in this model , the algebra assigned to a system of -qubits is built upon the pauli spin-1/2 operators , and acting on the -th qubit ( individual qubit ) .the commutation relations for these operators satisfy an algebra defined by ( ) =2i\delta_{jk}\epsilon_{\mu \nu \lambda } \sigma_{\lambda}^j , \ ] ] where is the totally anti - symmetric levi - civita symbol .sometimes it is useful to write the commutation relations in terms of the raising and lowering spin-1/2 operators any operation on a qc is represented by a unitary operator that evolves some initial state ( boot - up state ) in a way that satisfies the time - dependent schrdinger equation for some hamiltonian .any unitary operation ( evolution ) applied to a system of qubits can be decomposed into either single qubit rotations by an angle about the axis or two qubits ising interactions .this is an important result of quantum information , since with these operations one can perform universal quantum computation .it is important to mention that we could also perform universal quantum computation with single qubit rotations and c - not gates or even with different control hamiltonians .the crucial point is that we need to have quantum control over those elementary operations in the real physical system . in the following, we will write down our algorithms in terms of single qubit rotations and two qubits ising interactions , since this is the language needed for the implementation of the algorithms , for example , in a liquid nmr qc .again , had we used a different set of elementary gates our main results still hold but with modified quantum networks .as an example of such decompositions , we consider the unitary operator , where represents a time - independent hamiltonian . after some simple calculations we decompose into elementary gates ( one qubit rotations and two qubits interactions ) in the following way this decomposition is shown in fig .1 , where the quantum network representation is displayed . in the same way , we could also decompose an operator using similar steps , by replacing in the right hand side of eq .[ decomp1 ] . into elementary single qubit rotations and two qubits interactions .time increases from left to right.,width=370 ]as discussed in the introduction , quantum simulations require simulations of systems with diverse degrees of freedom and particle statistics .fermionic systems are governed by pauli s exclusion principle , which implies that no more than one fermion can occupy the same quantum state at the same time . in this way ,the hilbert space of quantum states that represent a system of fermions in a solid is finite - dimensional ( for spinless fermions , where is the number of sites or modes in the solid ) , and one could think in the existence of one - to - one mappings between the fermionic and pauli s spin-1/2 algebras . 
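before turning to the fermionic mapping, a small numerical sketch (ours, not a decomposition taken from the paper) of the elementary-gate picture of section [ section2 ]: it verifies one of the spin-1/2 commutation relations of eq. [su2] and checks that a typical two-body evolution exp(-i t sigma_x^1 sigma_x^2) can be realized by single-qubit rotations conjugating the elementary ising gate exp(-i t sigma_z^1 sigma_z^2). all variable names and the chosen evolution time are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# the su(2) commutation relation of eq. [su2]: [sigma_x, sigma_y] = 2i sigma_z
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)

def ry(theta):
    """single-qubit rotation exp(-i theta sigma_y / 2) about the y axis."""
    return expm(-1j * theta / 2 * sy)

t = 0.37                                    # arbitrary evolution time
zz = expm(-1j * t * np.kron(sz, sz))        # elementary two-qubit Ising gate
xx = expm(-1j * t * np.kron(sx, sx))        # a typical two-body term to simulate

U = np.kron(ry(np.pi / 2), ry(np.pi / 2))   # basis change z -> x on both qubits
assert np.allclose(U @ zz @ U.conj().T, xx)
print("exp(-i t XX) realized by single-qubit rotations around an Ising gate")
```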
similarly ,any language which involves operators with a finite - dimensional representation ( e.g. , hard - core bosons , higher irreps of , etc .) can be mapped onto the standard model language . in the second quantization representation , the ( spinless )fermionic operators ( ) are defined as the creation ( annihilation ) operators of a fermion in the -th mode ( ) . due to the pauli s exclusion principle and the antisymmetric nature of the fermionic wave function under the permutation of two fermions ,the fermionic algebra is given by the following commutation relations where denotes the anticommutator .the jordan - wigner transformation is the isomorphic mapping that allows the description of a fermionic system by the standard model where are the pauli operators defined in section [ section2 ] .one can easily verify that if the operators satisfy the commutation relations ( eq . [ su2 ] ) , the operators and obey eqs .[ fermcom ] .we now need to show how to simulate a fermionic system by a qc . just as for a simulation on a cc , the quantum simulation has three basic steps : the preparation of an initial state , the evolution of this state , and the measurement of a relevant physical property of the evolved state .we will now explain the first two steps , postponing the third until section [ section5 ] . in the most general case, any quantum state of fermions can be written as a linear combination of slater determinants where with the vacuum state defined as the state with no fermions . in the spin language , .we can easily prepare the states by noticing that the quantum gate , represented by the unitary operator when acting on the vacuum state , produces up to a phase factor . making use of the jordan - wigner transformation ( eqs .[ jw ] , [ jw2 ] ) , we can write the operators in the spin language the successive application of similar unitary operators will generate the state up to an irrelevant global phase . a detailed preparation of the fermionic state can be found in a previous work .the basic idea is to use extra ( ancilla ) qubits , then perform unitary evolutions controlled in the state of the ancillas , and finally perform a measurement of the -component of the spin of the ancillas . in this way, the probability of successful preparation of is .( we need of the order of trials before a successful preparation . )another important case is the preparation of a slater determinant in a different basis than the one given before where the fermionic operators s are related to the operators through the following canonical transformation with , , and is an hermitian matrix . making use of thouless s theorem , we observe that one slater determinant evolves into the other , , where the unitary operator can be written in spin operators using the jordan - wigner transformation and can be decomposed into elementary gates , as described in section [ section2 ] .since the number of gates scales polynomially with the system size , the state can be efficiently prepared from the state .the second step in the quantum simulation is the evolution of the initial state .the unitary evolution operator of a time - independent hamiltonian is . in general , with representing the kinetic energy and the potential energy .since we usually have \neq 0 ] , with defining the statistical angle . in particular , mod( ) corresponds to canonical spinless fermions , while mod( ) represents hard - core bosons . 
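the jordan-wigner mapping of eqs. [jw], [jw2] can be checked directly on small systems. the sketch below builds the spin-1/2 matrices for the fermionic annihilation operators on a few modes and verifies the anticommutation relations of eq. [fermcom]; the sign convention used here (a string of sigma_z operators in front of sigma_-) is one common choice and may differ from the paper's by phases.

```python
import numpy as np
from itertools import product

sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_- (lowering operator)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(j, n):
    """c_j = (prod_{i<j} sigma_z^i) sigma_-^j acting on n qubits (one convention)."""
    return kron_all([sz] * j + [sm] + [I2] * (n - j - 1))

n = 4
c = [jw_annihilation(j, n) for j in range(n)]
anti = lambda a, b: a @ b + b @ a
for i, j in product(range(n), repeat=2):
    assert np.allclose(anti(c[i], c[j]), 0)                                 # {c_i, c_j} = 0
    assert np.allclose(anti(c[i], c[j].conj().T), np.eye(2**n) * (i == j))  # {c_i, c_j^dag} = delta_ij
print("jordan-wigner operators satisfy the fermionic algebra on", n, "modes")
```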
in order to simulate this problem with a qc made of qubits, we need to apply the following isomorphic and efficient mapping between algebras \ \sigma_+^j , \nonumber \\ a_j & = & \prod\limits_{i < j } [ \frac { e^{i \theta } + 1}{2 } + \frac { e^{i \theta } -1}{2 } \sigma_z^i ] \\sigma_-^j , \\ n_j & = & \frac{1}{2 } ( 1 + \sigma_z^j ) , \nonumber\end{aligned}\ ] ] where the pauli operators where defined in section [ section2 ] , and since they satisfy eq . [ su2 ] , the corresponding commutation relations for the anyonic operators ( eqs . [ anyoncom1 ] ) are satisfied , too. we can now proceed in the same way as in the fermionic case , writing our anyonic evolution operator in terms of single qubit rotations and two qubits interactions in the spin-1/2 language .as we already mentioned , anyon statistics have fermion and hard - core boson statistics as limiting cases .we now relax the hard - core condition on the bosons .quantum computation is based on the manipulation of quantum systems that possess finite number of degrees of freedom ( e.g. , qubits ) . from this point of view, the simulation of bosonic systems appears to be impossible , since the non existence of an exclusion principle implies that the hilbert space used to represent bosonic quantum sates is infinite - dimensional ; that is , there is no limit to the number of bosons that can occupy a given mode .however , sometimes we might be interested in simulating and studying properties such that the use of the whole hilbert space is unnecessary , and only a finite sub - basis of states is sufficient .this is the case for physical systems with interactions given by the hamiltonian where the operators ( ) create ( destroy ) a boson at site , and is the number operator .the space dimension of the lattice is encoded in the parameters and .obviously , the total number of bosons in the system is conserved , and we restrict ourselves to work with a finite sub - basis of states , where the dimension depends on the value of . the respective bosonic commutation relations ( in an infinite - dimensional hilbert space ) are =0 , [ b_i , b^{\dagger}_j]=\delta_{ij}.\ ] ] however , in a finite basis of states represented by with , where is the maximum number of bosons per site , the operators can have the following matrix representation where indicates the usual tensorial product between matrices , and the dimensional matrices and are it is important to note that in this finite basis , the commutation relations of the bosons differ from the standard bosonic ones ( eq . [ bosoncom ] ) =0 , \mbox { } [ \bar{b}^{\;}_i,\bar{b}^{\dagger}_j]=\delta_{ij } \left [ 1- \frac{n_p+1}{n_p ! } ( \bar{b}^{\dagger}_i)^{n_p}(\bar{b}^{\;}_i)^{n_p } \right ] , \ ] ] and clearly .as we mentioned in the introduction , our idea is to simulate any physical system in a qc made of qubits . for this purpose , we need to map the bosonic algebra into the spin-1/2 language . 
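as a single-site check of the truncated bosonic operators just introduced, the sketch below builds the annihilation matrix of a mode holding at most a fixed number of bosons and compares its commutator with the modified relation of eq. [bosoncom2]; the cutoff value is an arbitrary choice for illustration.

```python
import numpy as np
from math import factorial

Np = 3                                           # maximum bosons per site (assumption)
n = np.arange(1, Np + 1)
b = np.diag(np.sqrt(n), k=1).astype(complex)     # b |m> = sqrt(m) |m-1>, dimension Np+1
bd = b.conj().T

comm = b @ bd - bd @ b
correction = (Np + 1) / factorial(Np) * \
    np.linalg.matrix_power(bd, Np) @ np.linalg.matrix_power(b, Np)
assert np.allclose(comm, np.eye(Np + 1) - correction)

# canonical on all states except the fully occupied one, where the cutoff bites
print(np.real(np.diag(comm)))   # -> [ 1.  1.  1. -3.] for Np = 3
```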
however , since eqs .[ bosoncom2 ] imply that the linear span of the operators and is not closed under the bracket ( commutator ) , a direct mapping between the bosonic algebra and the spin-1/2 algebra ( such as the case of the jordan - wigner transformation between the fermionic and spin-1/2 algebra ) is not possible .therefore , we could think in a one - to - one mapping between the bosonic and spin-1/2 quantum states , instead of an isomorphic mapping between algebras .let us show a possible mapping of quantum states .we start by considering only the -th site in the chain .since this site can be occupied with at most bosons , it is possible to associate an qubits quantum state to each particle number state , in the following way where denotes a quantum state with bosons in site . therefore , we need qubits for the simulation ( where is the number of sites ) . in fig .2 we show an example of this mapping for a quantum state with 7 bosons in a chain of 5 sites . by definition ( see eqs. [ bosonprod ] , [ bosonrep ] ) , so the operator where the pair indicates the qubit that represents the -th site , acts in the qubits states of eqs .[ bosonmap ] as .then , its matrix representation in this basis is the same matrix representation of in the basis of bosonic states .similarly , the number operator can be written and act as .notice that =0 ] ) , i.e. , into single qubit rotations and two qubits interactions .time increases from left to right.,width=359 ] in general , is a sum of non commuting terms of the form , and we need to perform another first order trotter approximation to decompose it into elementary gates ( in the spin language ) .then , a typical term when mapped onto the spin language ( eq .[ bosonmap2 ] ) gives ] , \end{aligned}\ ] ] where is the number of bosons .the terms in the exponent of eq .[ decomp3 ] commute with each other , so the decomposition into elementary gates becomes straightforward . as an example ( see fig.3 ) , we consider a system of two sites with one boson .we need then qubits for the simulation , and eq .[ bosonmap2 ] implies that and .then , becomes where the decomposition of each of the terms in eq .[ bosexamp ] in elementary gates can be done using the methods described in previous works . in particular , in fig .3 we show the decomposition of the term , where the qubits were relabeled as ( e.g. , ) . on the other hand , it is important to mention that the number of operations involved in the decomposition is not related to the distance between the sites and , as in the fermionic case .in previous work we introduced an efficient algorithm for the measurement of correlation functions in quantum systems .the idea is to make an indirect measurement , that is , we prepare an ancilla qubit ( extra qubit ) in a given initial state , then interact with the system whose properties one wants to measure , and finally we measure some observable of the ancilla to obtain information about the system . particularly , we could be interested in the measurement of dynamical correlation functions of the form where and are unitary operators ( any operator can be decomposed in a unitary operator basis as , ) , is the time evolution operator of a time - independent hamiltonian , and is the state of the system whose correlations one wants to determine .if we were interested in the evaluation of spatial correlation functions , we would replace the evolution operator by the space translation operator . 
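the state mapping of eq. [bosonmap] is easy to spell out in code. the sketch below assumes a unary (one-hot) register of Np+1 qubits per site, as suggested by the description around fig. 2; the particular occupation numbers in the example are our own choice, picked so that 7 bosons sit on a chain of 5 sites as in the figure.

```python
from typing import List

def encode_site(n_j: int, Np: int) -> str:
    """qubit register for one site: e.g. n_j = 2, Np = 3 gives '0010'."""
    assert 0 <= n_j <= Np
    return "".join("1" if k == n_j else "0" for k in range(Np + 1))

def encode_state(occupations: List[int], Np: int) -> str:
    """full register for M sites: M * (Np + 1) qubits in total."""
    return " ".join(encode_site(n_j, Np) for n_j in occupations)

# 7 bosons on a chain of 5 sites (occupations chosen for illustration)
print(encode_state([2, 0, 3, 1, 1], Np=3))   # -> 0010 1000 0001 0100 0100
```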
in fig .4 we show the quantum algorithm ( quantum network ) for the evaluation of . as explained before , the initial state ( ancilla plus system ) has to be prepared in the quantum state ( where denotes the ancilla qubit and ) .additionally , we have to perform an evolution ( unitary operation ) in the following three steps : i ) a controlled evolution in the state of the ancilla , ii ) a time evolution , and iii ) a controlled evolution in the state of the ancilla .finally we measure the observable ..,width=264 ] on the other hand , sometimes we are interested in obtaining the spectrum ( eigenvalues ) of a given observable ( i.e. , an hermitian operator ) . a quantum algorithm ( network ) for this purposewas also given in previous work .again , the basic idea is to perform an indirect measurement using an extra qubit ( see fig .basically , we prepare the initial state ( ancilla plus system ) , then apply the evolution , and finally measure the observable .since the initial state of the system can be written as a linear combination of eigenstates of , , where are complex coefficients and are eigenstates of with eigenvalue , the classical fourier transform applied to the function of time gives us without loss of generality , we can choose , with some particular hamiltonian .it is important to note that in order to obtain the different eigenvalues of , the overlap between the initial state and the eigenstates of must be different from zero .one can use different mean - field solutions of as initial states depending on the part of the spectrum one wants to determine with higher accuracy .an algorithm is considered efficient if the number of operations involved scales polynomially with the system size , and if the effort required to make the error in the measurement of a relevant property smaller , scales polynomially with .while the evolution step involves a number of unitary operations that scales polynomially with the system size ( such is the case for the trotter approximation ) whenever the hamiltonian is physical ( e.g. , is a sum of a number of terms that scales polynomially with the system size ) , the preparation of the initial state could be inefficient .such inefficiency would arise , for example , if the state defined in eq .[ slater1 ] or eq .[ prodstate ] is a linear combination of an exponential number of states ( , with the number of sites in the system and a positive number ) .however , if we assume that is a finite combination of states ( scales polynomially with ) , its preparation can be done efficiently .( any ( perelomov - gilmore ) generalized coherent state can be prepared in a number of steps that scales polynomially with the number of generators of the respective algebra . 
) on the other hand , the measurement process described in section [ section5 ] is always an efficient step , since it only involves the measurement of the spin of one qubit , despite the number of qubits or sites of the quantum system .errors come from gate imperfections , the use of the trotter approximation in the evolution operator , and the statistics in measuring the spin of the ancilla qubit ( sections [ section3.2 ] , [ section4.2 ] , and [ section5 ] ) .a precise description and study of the error sources can be found in previous work .the result is that the algorithms described here , for the simulation of physical systems and processes , are efficient if the preparation of the initial state is efficient , too .we studied the implementation of quantum algorithms for the simulation of an arbitrary quantum physical system on a qc made of qubits , making a distinction between systems that are governed by pauli s exclusion principle ( fermions , hard - core bosons , anyons , spins , etc . ) , and systems that are not ( e.g , canonical bosons ) .for the first class of quantum systems , we showed that a mapping between the corresponding algebra of operators and the spin-1/2 algebra exists , since both have a finite - dimensional representation . on the other hand ,the operator representation of quantum systems that are not governed by an exclusion principle is infinite - dimensional , and an isomorphic mapping to the spin-1/2 algebra is not possible .however , one can work with a finite set of quantum states , setting a constraint , such as fixing the number of bosons in the system . then, the representation of bosonic operators becomes finite - dimensional , and we showed that we can write down bosonic operators in the spin-1/2 language ( eq . [ bosonmap2 ] ) , mapping bosonic states to spin-1/2 states ( eq . [ bosonmap ] ) .we also showed how to perform quantum simulations in a qc made of qubits ( quantum networks ) , giving algorithms for the preparation of the initial state , the evolution , and the measurement of a relevant physical property , where in the most general case the unitary operations have to be approximated ( sections [ section3.2],[section4.2 ] ) .the mappings explained are efficient in the sense that we can perform them in a number of operations that scales polynomially with the system size .this implies that the evaluation of some correlation functions in quantums states that can be prepared efficiently is also efficient , showing an exponential speed - up of these algorithms with respect to their classical simulation .however , these mappings are insufficient to establish that quantum networks can simulate any physical problem efficiently . as we mentioned in the introduction , this is the case for the determination of the spectrum of the hamiltonian in the two - dimensional hubbard model , where the signal - to - noise ratio decays exponentially with the system size .
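as a purely classical illustration of the spectrum-measurement step of section [ section5 ], the sketch below models the signal read off the ancilla as S(t) = <psi| e^{-iHt} |psi> = sum_n |a_n|^2 e^{-i lambda_n t} (the precise prefactors and conventions of the paper are not reproduced), checks that the spectral sum agrees with direct evolution, and samples the signal so that a classical fourier transform would show peaks at the eigenvalues with heights set by the overlaps.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 6
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                  # toy Hermitian operator standing in for the observable
evals, evecs = np.linalg.eigh(H)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                # initial state with nonzero overlaps a_n
a2 = np.abs(evecs.conj().T @ psi) ** 2    # weights |a_n|^2

# the spectral sum reproduces <psi| e^{-iHt} |psi> at a few sample times
for tk in (0.0, 0.7, 3.1):
    direct = psi.conj() @ expm(-1j * H * tk) @ psi
    spectral = np.sum(a2 * np.exp(-1j * evals * tk))
    assert np.isclose(direct, spectral)

# sampling S(t) and taking a classical FFT gives peaks at the lambda_n
# with frequency resolution of order 2*pi/T and heights |a_n|^2
t = np.linspace(0.0, 200.0, 4096, endpoint=False)
S = np.sum(a2[None, :] * np.exp(-1j * np.outer(t, evals)), axis=1)
spectrum = np.abs(np.fft.fft(S)) / len(t)
print("eigenvalues:", np.round(np.sort(evals), 3))
```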
if a large quantum computer (qc) existed today, what type of physical problems could we efficiently simulate on it that we could not simulate on a classical turing machine? in this paper we argue that a qc could solve some relevant physical "questions" more efficiently. the existence of one-to-one mappings between different algebras of observables or between different hilbert spaces allows us to represent and imitate any physical system by any other one (e.g., a bosonic system by a spin-1/2 system). we explain how these mappings can be performed, showing quantum networks that are useful for the efficient evaluation of some physical properties, such as correlation functions and energy spectra.
the present note was initiated by the revisited astumian s paradox . in august 2004 piotrowski andsladowski asserted that astumian s analysis was flawed .however , as shown by astumian , this statement was wrong .since the analysis of the problem in a slightly more general frame than it was done earlier could be a good exercise for graduate students , we came to the conclusion that it might be useful to publish our elementary considerations about the properties of markov chains corresponding to astumian type games . for entirely didactic reasons , in sections ii andiii we present a brief summary of definitions and statements which are needed for the analysis of the astumian type markov chains . in section iv we analyze the properties of such chains and determine the probabilities of losing and winning .conclusions are made in section v.let be a finite set of positive integers , and be a set of non - negative integers .denote by the random variable which assumes the elements of .we say that the sequence forms a markov chain if for all and for all possible values of random variables the equation is fulfilled . if then the process is said to be in _ state _ at the ( discrete time instant ) step .the states define _ the space of states _ of the process . the probability distribution of the random variable called the _ initial distribution _ and the conditional probabilities are called _transition probabilities_. if and , then we say that the process made a transition at the step .the markov chain is _ homogeneous _ if the transition probabilities are independent of . in this casewe may write and it obviously holds that in what follows we shall consider only homogeneous markov chains. we would like to emphasize that the transition probability matrix which is a _ stochastic matrix _ , and the initial distribution determine the random process uniquely .for the sake of simplicity , we assume that the process is a random walk of an abstract object , called _ particle _ on the space of states .the step transition probability satisfies the following equation : where it is to note that is the probability that at the step the particle is in the state provided that at it was in the state . from eq .( [ 5 ] ) we obtain that and by using the rules of matrix multiplication we arrive at where and is the unit matrix .making use of the total probability theorem we can determine _ the absolute probabilities _ as follows : where is the initial probability .clearly , is the probability that the particle is in the state at the step . introducing the row vector eq .( [ 8 ] ) can be rewritten in the form : where the upper index indicates the transpose of matrix and vector defined by ( [ 7 ] ) and ( [ 9 ] ) , respectively .if the process starts from the state , then and order to use clear notions , we introduce several well - known definitions . if there is an integer such that , then we say the state can be reached from the state . if can be reached from and can be reached from , then and are _ connected states_. obviously , if and are not connected , then either , or .the set of states which are connected forms a _ class of equivalence_. a markov chain is called _ irreducible _ if every state can be reached from every state i.e. , the entire state space consists of only one class of equivalence . 
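as a small numerical illustration of the n-step relations above, the sketch below propagates an initial distribution through the powers of a stochastic matrix; the 3-state chain is a made-up example, not one from the text.

```python
import numpy as np

W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])          # hypothetical homogeneous chain
assert np.allclose(W.sum(axis=1), 1.0)   # each row of a stochastic matrix sums to 1

p0 = np.array([1.0, 0.0, 0.0])           # particle starts in state 1

def absolute_probabilities(p0, W, n):
    """the absolute probabilities after n steps: p(n)^T = p(0)^T W^n."""
    return p0 @ np.linalg.matrix_power(W, n)

for n in (1, 5, 50):
    print(n, np.round(absolute_probabilities(p0, W, n), 4))
# for this irreducible aperiodic chain p(n) converges to the stationary
# distribution, the left eigenvector of W belonging to eigenvalue 1
```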
in other words , the markov chain is irreducible when all of the states are connected .the probability of passage from to in exactly steps , that is , without passing through before the step , is given by there exists an important relationship between the probabilities and which is easy to prove .the relationship is given by one has to note that the expressions are the diagonal elements of the unit matrix .the proof of ( [ 12 ] ) is immediate upon applying the total probability rule .the particle passes from to in steps if , and only if , it passes from to for the first time in exactly steps , , and then passes from to in the remaining steps .these paths " are disjoint events , and their probabilities are given by .summing over one obtains the equation ( [ 12 ] ) .let us introduce the generating functions taking into account that , from eq .( [ 12 ] ) we obtain ,\ ] ] and from this so we have and in particular defined by ( [ 16 ] ) is the probability that a particle starting its walk from passes through the state at least once .clearly , is _ the probability of returning to at least once_. more generally , the probability that a particle starting its walk from passes through _ at least times _ is given by \;f_{jj}(k-1 ) = f_{ij}\;f_{jj}(k-1).\ ] ] in particular , the probability of returning to at least times is given by .its limit is the probability of _ returning to infinitely often_. it follows from the previous relationship that the probability that a particle starting its walk from passes through infinitely many times is so that we say that is a _ return state _ or a _ nonreturn state _ according as or . as a further definition ,we say that is a _ recurrent state _ or a _ nonrecurrent state _ according as or .a nonrecurrent state is often called a _ transient state_. the state called _ periodic _ with period if a return to can occur only at steps and is the greatest integer with this property .if is not divisible by , then . if the period of each state is equal to , i.e. , if , then the markov chain is called _ aperiodic_. in the sequel we are dealing with aperiodic markov chains .a set of states in a markov chain is _ closed _ if it is impossible to move out from any state of to any state outside by one - step transitions , i.e. , if and in this case obviously holds for every . if a single state forms a closed set , then we call this an _ absorbing state _ , and we have .the states of a closed set are recurrent states since the return probability for any state is equal to .therefore , the _ set of recurrent states _ is denoted by .can be decomposed into mutually disjoint closed sets such that from any state of a given set all states of that set and no others can be reached .states can be reached from , but not conversely . ]the set of states having return probabilities is the _ set of transient states _ and it is denoted by .obviously , if and , i.e. , if is an absorbing state , then is the probability that a particle starting at is finally absorbed at .let be the _ passage time _ of a particle from the state to the state , taking values with probabilities .if then the _ expected passage time _ from to is defined by {z=1},\ ] ] while if , one says that with probability , i.e. , if , then the expected passage time . if the state and it is recurrent , i.e. , if , then the expectation {z=1 } = \tau_{ii } = \mu_{i}\ ] ] is called _ mean recurrent time_. if , then we say that is a _ recurrent null - state _ , whereas if , then we say that is a _ recurrent non - null - state_. if , i.e. 
, the state is transient , then is the probability that the recurrence time is infinitely long , and so .we say that the recurrent state is _ ergodic _ , if it is not a null - state and is aperiodic , that is , if and .the first statement is very simple , hence it is given without proof . if is a transient or a recurrent null - state , then for any arbitrary holds . if and are recurrent aperiodic states due to the same closed set , then irrespective of . )tauber s theorem is used instead of the lemma by erds - feller - kac . ] if , then we have from eq . ( [ 14 ] ) the formula substituting this into ( [ 14 ] ) we obtain the following expression : by using tauber s theorem we can state that since and are aperiodic recurrent states due to the same closed set , i.e. , the limit value we have to determine applying lhospital s rule we find that and thus we obtain ( [ 20 ] ) .this completes the proof . as a generalization we would like to consider the casewhen is a transient state ( ) and is an aperiodic recurrent state due to the closed set .it can be shown that where is the probability that a particle starting from will ultimately reach and stay in the state .in other words , is the _ absorption probability _ that satisfies the following system of equations : clearly , if contains all of the possible states of the particle , then the proof of ( [ 24 ] ) follows immediately from ( [ 22 ] ) .since we obtain the limit relationship ( [ 24 ] ) . finally , we would like to present a brief classification of markov chains . *a markov chain is called _ irreducible _ if and only if all its states form a closed set and there is no other closed set contained in it . * a markov chain is called _ ergodic _ if the probability distributions always converge to a limiting distribution which is independent of the initial distribution , that is , when .all states of a finite , aperiodic irreducible markov chain are ergodic . *the probability distribution is a _ stationary _ distribution of a markov chain if , when we choose it as an initial distribution all the distributions will coincide with .every stationary distribution of a markov chain satisfies the following system of linear equations : and conversely , each solution of this system is a stationary distribution of the markov chain , if it is a probability distribution .it is to mention that some parts of this short summary is based on the small but excellent book by takcs .in this section we are going to deal with markov chains containing _ two absorbing states _ and , and _ transient states_. in this case , the markov chain is _ reducible _ and _ aperiodic_. the set of its states is the union of _ two closed sets _ and , and of the set of transient states the states and can be reached from each state of but the converse does nt hold , no state of can be reached from the states and .the states of are _ non - recurrent _ since the particle leaves the set never to return to it . in contrary , the states of and are ergodic .let us assume that the transition matrix has the following form : where the particle , which starts his walk from one of the states , is captured when it enters the states or . by using the foregoing formulae for and , we can immediately obtain the capture probabilities by the absorbing states and , respectively . 
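the capture probabilities for a chain of this structure can also be computed without generating functions, through the equivalent fundamental-matrix route. the sketch below partitions a 5-state transition matrix with absorbing states 1 and 5 into its transient block Q and transient-to-absorbing block R, and evaluates B = (I - Q)^{-1} R; the numerical entries of W are hypothetical and do not reproduce the text's eq. (52).

```python
import numpy as np

W = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],   # state 1: absorbing
              [0.3, 0.2, 0.5, 0.0, 0.0],   # states 2-4: transient (made-up entries)
              [0.0, 0.4, 0.2, 0.4, 0.0],
              [0.0, 0.0, 0.5, 0.2, 0.3],
              [0.0, 0.0, 0.0, 0.0, 1.0]])  # state 5: absorbing
assert np.allclose(W.sum(axis=1), 1.0)

transient, absorbing = [1, 2, 3], [0, 4]
Q = W[np.ix_(transient, transient)]
R = W[np.ix_(transient, absorbing)]
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
B = N @ R                                       # B[i, j] = capture probability into absorbing state j
print("capture probabilities from states 2,3,4 into (1, 5):")
print(np.round(B, 4))
print("rows sum to one:", np.allclose(B.sum(axis=1), 1.0))
print("mean absorption times:", np.round(N.sum(axis=1), 3))   # expected steps to absorption
```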
in order to have a direct insight into the nature of the process, we derive the backward equations for the probabilities .clearly , and by introducing the generating function we obtain the following system of equations : this can be simplified and rewritten in the form : after elementary algebra , we can determine all the generating functions , nevertheless we are now interested only in those functions which correspond to processes starting from the state .in this case we have where - ( 1 - w_{22 } z ) w_{34 } w_{43 } z^{2}.\ ] ] applying tauber s theorem we obtain that and performing the substitutions we have and it is elementary to show that in order to prove these equations , let us take into account relationship ( [ 15 ] ) and write and since we have comparing ( [ 43 ] ) and ( [ 45 ] ) with ( [ 49 ] ) we see that eqs .( [ 48 ] ) are true .it is convenient to write the absorption probabilities and in the form : where and we see immediately that , as expected .it seems to be worthwhile to study the history of a particle starting its random walk from the state .( 8,12 ) ( 0.5 , 1)(2,1)(1,0)7 ( 0.5 , 3)(2,3)(1,0)7 ( 0.5 , 5)(2,5)(1,0)7 ( 0.5 , 7)(2,7)(1,0)7 ( 0.5 ,9)(2,9)(1,0)7 ( 5.5 , 5)(0,1)2 ( 5.5 , 5)(0,-1)2 ( 3.75 , 3)(0,1)2 ( 3.75 , 3)(0,-1)2 ( 7.25 , 7)(0,1)2 ( 7.25 , 7)(0,-1)2 let us consider a trap containing a special ladder with rungs .each rung corresponds to a given state of the markov chain under investigation .the process starts when a particle enters ( say , ) on the third rung of the ladder , i.e. , in the state .once the particle has entered , it is free to move up and down the rungs randomly .1 illustrates this random walk .if the particle reaches the states either or , it is absorbed .( if the random walk is considered as a game , then the absorption state with probability smaller than is the winning " state . )having chosen the transition matrix [ ht ! ]-0.3 cm we calculated the dependencies of probabilities and on the number of steps .the results of calculation are shown in fig .we see that the probability to find the particle after steps in the transient state is practically zero .the same holds for the transient states and .after steps the particle is absorbed either in with probability or in with probability .it is instructive to determine also the probabilities and . as a reminder ,we note that is the probability that a particle starting from passes through _ at least once ._ by using the transition matrix ( [ 52 ] ) we obtain the following values : and .3 shows the histogram of these probabilities .-0.3 cm it is evident that passing through either or at least once means that the particle is absorbed . as expected in the present case , the probability that the particle starting from returns to at least once , is nearly .it is to mention that the two absorbing states and are recurrent since . in what followswe would like to deal with the _ determination of the absorption time probability_. denote by the number of steps leading to the absorption of a particle starting its random walk from the state . 
by definition , and are the probabilities that the particle starting from the state is absorbed exactly at the step in or in , respectively .hence we can write that it is easy to prove that from ( [ 12 ] ) one obtains and by taking into account that one has it follows immediately from these equations that and this completes the proof .the absorption time probabilities can be determined by the forward " equations : and by using these expressions one can write ,\ ] ] which in the case of defined by ( [ 27 ] ) has the following form : for the sake of completeness , we would like to show that in the case of eq .( [ 53 ] ) we see that and by using the expression ( [ 26 ] ) we find ( [ 57 ] ) . in the case of eq .( [ 55 ] ) \ ; \left[\;w_{\ell 1 } + \;w_{\ell 5}\right ] = \ ] ] \;\left[\;w_{\ell 1 } + \;w_{\ell 5}\right ] = \sum_{\ell=2}^{4 } g_{i \ell}(1)\;\left[\;w_{\ell 1 } + \;w_{\ell 5}\right ] = f_{i1 } + f_{i5 } = 1.\ ] ] [ ht ! ] -0.3 cm using the transition matrix given by ( [ 52 ] ) , we calculated the dependence of the probability on the number of steps .the results are seen in fig .4 . as expected ,if the starting state is , then the probability varies differently with the step number as the probabilities and .it is characteristic the probabilities have a rather long tail .since is the probability that a particle starting from is absorbed exactly in the step , the expectation and the standard deviation of the absorption time are given by and ^{1/2}.\ ] ] for a transition matrix of the form ( [ 52 ] ) these values are presented in the table i. .[t1 ] expectations and the standard deviations of the absorption time [ cols="^,^,^,^",options="header " , ] as it has been shown , is the probability that a particle starting its random walk from the state is finally absorbed in the state . since . ] if , then is called a losing " state , while if , then it is a winning " state .the game is fair " when , i.e. when the equation is fulfilled as it follows from eq .( [ 51 ] ) .astumian proposed two transition matrices , namely resulting in the absorption probability and showed that the arithmetic mean of these two matrices brings about the probability , i.e. , in this case the state becomes winning " state .this property of the transition matrix ( [ 27 ] ) is general _ if the diagonal entries of the matrix are different from zero_. by using a simple example we would like to demonstrate this statement .let us choose the transition matrix in the following form : one obtains immediately that where if or , then the game is fair " , i.e. , the function assumes its minimal value at and this value is introducing the notation one has choosing according to the inequalities i.e. , and one finds that evidently , _there are infinitely many pairs of transition matrices which result in probabilities of losing in the state but the arithmetic means of corresponding pairs bring about probabilities of winning in the state . [ ht ! ] -0.3 cm for the sake of illustration in fig .5 the probability vs. curve is plotted by the values and .the black points and correspond to the probabilities = 8/17,\ ] ] respectively .it seems to be not superfluous to write down the corresponding transition matrices : and by choosing values in the allowed interval , we can construct infinitely many transition matrices with just described properties .cm [ ht ! ]-0.3 cm let us now define a markov chain with transition matrix randomly chosen from and defined by ( [ 63 ] ) . in this case \times \mathbf{w}(n-1),\ ] ] i.e. 
, ^{n}.\ ] ] in fig .6 the dependencies of the absorption probabilities is the first entry of the third row of the matrix . ] on the number of steps are shown when the transition matrices are and , respectively .the last one corresponds to the random selection of the entries from and with probability .obviously , not all values of ] containing the values which result in absorption probabilities smaller than . in the present case we obtained that and has been shown that the random walk of a particle defined by the stochastic transition matrix of a markov chain is equivalent to an astumian type game if the diagonal entries of the matrix are different from zero and the first as well as the last entries are equal to . by using a simple example ,we have proved that there are infinitely many pairs of transition matrices which result in absorption probabilities in the state larger than but the arithmetic means of the corresponding pairs lead to probabilities smaller than .
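the reversal discussed in this section is easy to test numerically for any candidate pair of games. the sketch below defines the absorption ("winning") probability of a walk started in state 3 of a 5-state chain with absorbing states 1 and 5, checks it against the fair symmetric walk (where it equals 1/2), and then prints the value for two games and for their arithmetic mean; the matrices W_A and W_B are placeholders, not the pair of eq. (63), so no claim is made about which side of 1/2 their values fall on.

```python
import numpy as np

def win_probability(W, start=2, win_state=4):
    """P(absorbed in `win_state` | start), via the fundamental matrix (0-indexed states)."""
    transient, absorbing = [1, 2, 3], [0, 4]
    Q = W[np.ix_(transient, transient)]
    R = W[np.ix_(transient, absorbing)]
    B = np.linalg.inv(np.eye(len(transient)) - Q) @ R
    return B[transient.index(start), absorbing.index(win_state)]

def game(u2, u3, u4, stay=(0.0, 0.0, 0.0)):
    """birth-death-type game: up-probabilities u_i, staying probabilities s_i, rest goes down."""
    s2, s3, s4 = stay
    W = np.zeros((5, 5))
    W[0, 0] = W[4, 4] = 1.0                    # absorbing ends
    W[1] = [1 - u2 - s2, s2, u2, 0, 0]
    W[2] = [0, 1 - u3 - s3, s3, u3, 0]
    W[3] = [0, 0, 1 - u4 - s4, s4, u4]
    return W

assert np.isclose(win_probability(game(0.5, 0.5, 0.5)), 0.5)   # gambler's-ruin sanity check

W_A = game(0.1, 0.6, 0.1, stay=(0.8, 0.2, 0.1))   # placeholder "game A"
W_B = game(0.6, 0.1, 0.6, stay=(0.1, 0.2, 0.1))   # placeholder "game B"
for name, W in [("A", W_A), ("B", W_B), ("mean", 0.5 * (W_A + W_B))]:
    print(name, round(float(win_probability(W)), 4))
```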
in 2001 astumian published a very simple game which can be described by a markov chain with absorbing initial and final states. in august 2004 piotrowski and sladowski asserted that astumian's analysis was flawed. however, as was shown by astumian, this statement was wrong. in this comment we investigate the properties of markov chains corresponding to games that are more general than the one studied by astumian.
recent arrival of the information age has created an explosive demand for knowledge and information exchange in our society .this demand has triggered off an enormous expansion in wireless communications in which severe technical challenges , including the need of transmitting speech , data and video at high rates in an environment rich of scattering , have been encountered . a recent development in wireless communication systemsis the multi - input multi - output ( mimo ) wireless link which , due to its potential in meeting these challenges caused by fading channels together with power and bandwidth limitations , has become a very important area of research .the importance of mimo communications lies in the fact that they are able to provide a significant increase in capacity over single - input single - output ( siso ) channels .existing mimo designs employ multiple transmitter antennas and multiple receiver antennas to exploit the high symbol rate provided by the capacity available in the mimo channels .full symbol rate is achieved when , on average , one symbol is transmitted by each of the multiple transmitter antennas per time slot ( often called a channel use " ) . in the case of transmitter antennas , we will have an average of symbols per channel use ( pcu ) at full rate .furthermore , to combat fading and cross - talk , mimo systems provide different replicas of transmitted symbols to the receiver by using multiple receiver antennas with sufficient separation between each so that the fading for the receivers are independent of each other .such diversity can also be achieved at the transmitter by spacing the transmitter antennas sufficiently and introducing a code for the transmitted symbols distributed over transmitter antennas ( space ) and symbol periods ( time ) , i.e. , space - time coding .full diversity is achieved when the total degree of freedom available in the multi - antenna system is utilized . over the past several years, various space - time coding schemes have been developed to take advantage of the mimo communication channel .using a linear processor , orthogonal space - time block codes can provide maximum diversity achievable by a maximum likelihood detector .however , they have a limited transmission rate and thus , do not achieve full mimo channel capacity .linear dispersion codes have been proposed in for which each transmitted codeword is a linear combination of certain weighted matrices maximizing the ergodic capacity of the system .unfortunately , good error probability performance for these codes is not strictly guaranteed . to bridge the gap between multiplexing and diversity ,a linear dispersion code design has been proposed using frame theory that typically performs well both in terms of ergodic capacity and error performance , but full diversity still can not be guaranteed . 
thus far , with the exception of the orthogonal stbc , all existing stbc are designed such that full diversity can only be achieved when the ml detector is employed .recent research based on number theory has shown that employing a ml receiver , it is possible to design linear space - time block codes and dispersion codes which are full rate and full diversity without information loss .the major concern on these designs is that the coding gain vanishes rapidly as the constellation size increases .therefore , designs of full - rate , full - diversity space - time codes with non - vanishing coding gain have drawn much attention since such structured space - time codes could achieve the optimal diversity - vs - multiplexing tradeoff developed by zheng and tse . however , most available stbc possessing these properties are for ml receivers only .in this paper , we consider a coherent communication system equipped with multiple transmitter antennas and a single receiver antenna , i.e. , a miso system .these systems are often employed in mobile communications for which the mobile receiver may not be able to support multiple antennas .the highest transmission rate for a miso system is unity , i.e. , one symbol pcu .for such a miso system with ml receivers , rate-1 and full diversity stbc have been proposed by various authors . in this paper , however , we consider such a miso system equipped with _ linear receivers _ for which we propose a general criterion for the design of a full - diversity stbc . in particular, we introduce the toeplitz stbc as a member of the family of the full diversity stbc .it should be noted that the toeplitz structure has already been successfully employed as a special case of the delay diversity code ( ddc ) applied to mimo systems having outer channel coding and ml detection .here , we extend its application to the construction of stbc in a miso system by having a toeplitz coding matrix cascaded with a beamforming matrix .we show that the toeplitz stbc has several important properties which enable the code , when applied to a miso system with a linear receiver , to asymptotically achieve unit symbol rate , to possess non - vanishing determinants for signal constellations having non - zero distance between nearest neighbours , and to achieve full diversity accomplishing the optimal tradeoff of diversity and multiplexing gains .on the other hand , we also consider the miso system in which the channel has zero mean and fixed covariance known to the transmitter . for such miso systems , sacrificing the transmission rate by repeating the transmitted symbols , and employing maximum ratio combining together with orthogonal space - time coding , an optimal precoder can be designed by minimizing the upper bound of the average symbol error probability ( sep ) . here in this paper, we apply the toeplitz stbc to such a miso system .maintaining rate one and full diversity , we present a design that minimizes the _ exact _ worst case average pair - wise error probability when the ml detector is employed at the receiver .consider a mimo communication system having transmitter antennas and receiver antennas transmitting the symbols which are selected from a given constellation , i.e. , . 
to facilitate the transmission of these symbols through the antennas in the time slots ( channel use ) , each symbolis processed by an coding matrix , and then summed together , resulting in an stbc matrix given by where the ( )th element of represents the coded symbol to be transmitted from the antenna at the time slot .these coded symbols are then transmitted to the receiver antennas through flat - fading path coefficients which form the elements of the channel matrix .the received space - time signal , denoted by the matrix , can be written as where is the additive white space - time noise matrix whose elements are of complex circular gaussian distribution .let us now turn our attention to a miso wireless communication system which is a special case of the mimo system having transmitter antennas and a single receiver antenna .just as in the mimo system , the transmitted symbols , in the miso system are coded by linear stbc matrices which are then summed together so that where is the total number of symbols to be transmitted if , the system is at full - rate ( rate - one ) . at the time slot , the row of the coding matrix feeds the coded symbols to the antennas for transmission .each of these transmitter antennas is linked to the receiver antenna through a channel path coefficient . at the receiver of such a system , for every time slots ( ), we receive an -dimensional signal vector ^t ] is an channel vector assumed to be circularly symmetric complex gaussian distributed with zero - mean and covariance matrix , and is an noise vector assumed to be circularly symmetric complex gaussian with covariance .putting eq .( [ eq : ldc ] ) into eq .( [ eq : model ] ) , writing the symbols to be transmitted as a vector and aligning the code - channel products to form the new channel matrix we can write \end{aligned}\ ] ] the received signal vector can now be written as in this paper , we emphasize on the application of _ linear _receivers for the miso system in eq .( [ eq : equal_model ] ) . in the following, we will derive a condition on the equivalent channel that renders full - diversity when the signals are received by a linear receiver .first , we present the following properties of the equivalent channel matrix : [ pro : general - det ] suppose the equivalent channel in eq .is such that is non - singular for any nonzero .then we have the following inequality : where and are positive constants independent of .+ since is nonzero , we normalize the matrix by dividing each of its elements with , i.e. , , where is the normalized matrix with the element being equal to {ij}=\frac{\mathbf h^h}{\|\mathbf h\| } \mathbf a_i^h\mathbf a_j \frac{\mathbf h}{\|\mathbf h\|}&\qquad i , j=1,2,\cdots , l\end{aligned}\ ] ] the determinant of positive semi - definite ( psd ) matrix is continuous in a closed bounded feasible set where . it has the maximum and minimum values that are denoted by and respectively .now , since is non - singular for any nonzero , its determinant is positive . therefore , and eq. holds . the following example serves to illustrate the above property .* example 1 : * consider the following channel matrix .the determinant of matrix can be written as since , we can define , and , and eq .becomes it is obvious that the function is continuous in a closed bounded set .the minimum and maximum of it can be easily obtained as .both values are constants and are independent of the random channel .thus , the determinant of the channel matrix is bounded by . 
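the equivalent-channel construction used above is worth making concrete. the sketch below stacks the products A_i h into the matrix H, forms a received block, and applies zero-forcing detection; the coding matrices, block sizes and noise level are random placeholders rather than a code from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, L = 4, 6, 3                      # tx antennas, time slots, symbols per block
A = [rng.normal(size=(T, N)) + 1j * rng.normal(size=(T, N)) for _ in range(L)]
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)       # MISO channel
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=L)  # QPSK-like symbols

H = np.column_stack([Ai @ h for Ai in A])        # equivalent T x L channel matrix
noise = 0.05 * (rng.normal(size=T) + 1j * rng.normal(size=T))
r = H @ s + noise                                # received block

s_zf = np.linalg.solve(H.conj().T @ H, H.conj().T @ r)   # zero-forcing estimate
print("symbols :", s)
print("ZF est. :", np.round(s_zf, 2))
```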
[ pro : inverse - bound ] if is non - singular for any nonzero , then the diagonal elements of ^{-1} ] is the snr , with is the error vector , is the rank , and are the non - zero eigenvalues of the matrix . the middle part of eq .( [ eq : chernoff ] ) is the chernoff bound , which at high snr , can be further tightly bounded by the right side . for a given , two factorsdictate the minimization of this bound on the right side of eq .( [ eq : chernoff ] ) : a. the rank of : the exponent of the second term governs the behaviour of the upper bound with respect to snr and is known as the _ diversity gain_. to keep the upper bound as low as possible , we should make the diversity gain as large as possible .full diversity is achieved when , i.e. , is of full column rank .this implies that the diversity gain achieved by an ml detector depends on , which is decided by the type of signalling .b. the determinant of : the first term consists of the product of the non - zero eigenvalues of and is called the _coding gain_. for being full rank , this product is its determinant the minimum value of which ( taken over _ all _ distinct symbol vector pairs ) must be maximized . at high snr , the upper bound in eq .( [ eq : chernoff ] ) is dominated by the exponent of .this leads to a more general definition of diversity gain as being _ the total degrees of freedom offered by a communication system , reflected by the factor involving the negative power of the snr in the expression of the error probability . _full diversity gain is achieved when the total degrees of freedom ( ) offered in the multi - antenna system are utilized .we adopt this latter notion of diversity gain when we consider the stbc for the miso system . since , full diversity for a miso systemis achieved if the exponent of the snr in the expression of the error probability is equal to .let us now consider the condition on for which full - diversity is achieved by a miso system employing a _ linear _ receiver .we need only to consider the use of a linear zero - forcing ( zf ) receiver because the same condition extends to miso systems using linear minimum mean square ( mmse ) receivers or other more sophisticated receivers .since the diversity gain of a communication system relates the probability of error to snr , we first analyze the symbol error probability ( sep ) of detecting different signal constellations by a linear zf equalizer and express these in terms of the snr . here ,we examine three commonly used signalling schemes : 1 ) square qam , 2 ) pam and 3 ) psk constellations respectively .let denote the cardinality .firstly , we summarize the definition of some common parameters which govern the performance of the zf linear detectors under these schemes .we use the index to denote parameters associated with the three signalling schemes as ordered above .let , denote the respective average symbol energy in each of the above schemes , and let be the noise variance at the receiver antenna .therefore , the snr for each symbol at the receiver is given by note that ^{-1}_{\ell\ell} ] . 
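the role played by the diagonal of (H^H H)^{-1} in the error analysis above can be illustrated with the standard zero-forcing post-detection SNR and the textbook square-QAM symbol error probability; the normalization below (gamma_l = E_s / (sigma^2 [(H^H H)^{-1}]_{ll})) and the stand-in equivalent channel are assumptions, since the paper's exact expressions are not reproduced here.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
T, L = 6, 3
H = rng.normal(size=(T, L)) + 1j * rng.normal(size=(T, L))   # stand-in equivalent channel

Es, sigma2, M = 1.0, 0.1, 16                  # symbol energy, noise variance, 16-QAM
G = np.linalg.inv(H.conj().T @ H)
gamma = Es / (sigma2 * np.real(np.diag(G)))   # post-ZF SNR of each of the L symbols

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2))

# textbook symbol error probability of square M-QAM at SNR gamma
sep = [1 - (1 - 2 * (1 - 1 / sqrt(M)) * qfunc(sqrt(3 * g / (M - 1)))) ** 2 for g in gamma]
for l, (g, p) in enumerate(zip(gamma, sep), start=1):
    print(f"symbol {l}: post-ZF SNR = {10 * np.log10(g):5.1f} dB, SEP ~ {p:.2e}")
```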
a toeplitz matrix generated by and a positive integer , denoted by ,is defined as {i j}=\left\{\begin{array}{cc } \alpha_{i - j+1 } , & \textrm{if and } \\ 0 , & \textrm{otherwise}\\ \end{array } \right.\end{aligned}\ ] ] which can be explicitly written as if we replace by , the information symbols to be transmitted , then a _ toeplitz _ stbc matrix is defined as where , for , is a matrix of rank placed in the coding matrix to facilitate the transmitter antennas with beamforming capability . at timeslot , the row of the matrix is fed to the transmitter antennas for transmission .apply the toeplitz space - time block coding matrix to the miso system described in eq . , and we have where , and .thus , can be viewed as the overall channel matrix of the miso system .* example 2*. for , and , the codeword matrix and channel matrix are , respectively , for this code , there are symbols to be transmitted in channel uses .therefore , the symbol transmission rate of this system is symbols per channel use . :a. eq .( [ eq : model2 ] ) is identical in form to that describing a mimo intersymbol interference channel for zero - padding block data transmission ( e.g. ) .it can thus be interpreted that the original miso channel is transformed into a toeplitz virtual mimo channel . in other words ,the space diversity has been exchanged for delay ( time ) diversity .this is realized by transforming the flat fading channel into a frequency selective channel with zero - padding .this technique is parallel to that employed in .b. for such a system , we can utilize the efficient viterbi algorithm to detect the signal if perfect channel knowledge is available at the receiver . on the other hand , when channel coefficients are not known at the receiver, we can make use of the second order statistics of the received signal to blindly identify the channel . c. toeplitz stbc is a _ non - orthogonal _ stbc whose coding matrix possesses non - vanishing determinant for _ any _ signalling scheme . hence according to theorem [ theo : full - diversity ], the code achieves full diversity even with the use of a linear receiver . on the other hand , since full diversity stbc designed for ml receivers ( e.g. , ) maintain non - vanishing determinant only for certain types of signalling , full diversity gain is not guaranteed when a linear receiver is used .d. when , toeplitz stbc becomes a special delay diversity code ( ddc ) with padded zeroes .in general , ddc is applied with the use of outer channel coding and ml detectors to achieve the full diversity gain . however , here we show that the toeplitz stbc possesses special properties which enable full space diversity to be achieved even with the use of the simplest linear receiver and the signals can be of any type .we now examine some important properties of the toeplitz space - time block codes introduced in the previous subsection. 
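as a concrete check of the construction just defined, the sketch below builds the zero-padded toeplitz codeword generated by K symbols for N transmitter antennas, verifies that transmitting it over a MISO channel h yields T(s, N) h = T(h, K) s (a linear convolution, so the equivalent channel seen by the symbols is itself toeplitz), and recovers the symbols by zero forcing in the noiseless case. the beamforming matrix B is taken to be the identity here, which is an assumption.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_code(alpha, n):
    """T(alpha, n): (K + n - 1) x n banded Toeplitz matrix generated by alpha."""
    col = np.concatenate([alpha, np.zeros(n - 1, dtype=complex)])
    row = np.concatenate([[alpha[0]], np.zeros(n - 1, dtype=complex)])
    return toeplitz(col, row)

rng = np.random.default_rng(0)
N, K = 3, 5                                    # tx antennas, symbols per block
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K)   # QPSK symbols
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

X = toeplitz_code(s, N)                        # (K+N-1) x N codeword, rate K/(K+N-1)
Hs = toeplitz_code(h, K)                       # equivalent (K+N-1) x K Toeplitz channel
assert np.allclose(X @ h, Hs @ s)              # both equal the convolution of s and h
assert np.allclose(X @ h, np.convolve(s, h))

s_zf = np.linalg.solve(Hs.conj().T @ Hs, Hs.conj().T @ (X @ h))   # noiseless ZF
assert np.allclose(s_zf, s)
print("rate =", K, "/", K + N - 1, "symbols per channel use")
```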
these properties will be useful in performance analysis and code designs in the ensuing sections .[ pro : full - rate ] the definition of the toeplitz space - time code shows that the symbol transmission rate is symbols per channel use when .therefore , for a fixed , the transmission rate can approach unity if the number of channel uses is sufficiently large .[ pro : toeplitz lower bound ] for any nonzero vector , there exists , and the matrix satisfies the following inequality , _ proof _ : by letting in eq .( [ eq : equal_channel ] ) and choosing where we obtain an equivalent channel of the same structure as .hence , is a special case of .thus , from property [ pro : general - det ] , there exist and for which eq. holds .now , we note that the diagonal entries of the matrix are all the same and are equal to {kk}=\|{\boldsymbol \alpha}\|^2,~ k=1,\cdots , k ] , of the transmission channels .suppose we perform an eigen decomposition such that where is an unitary matrix and with .the following theorem provides us with an optimum design of : [ th : bop ] let be the singular values of and let denote the integral we obtain an optimal by solving the following convex optimization problem : and is separated from the signal vector .this , by the properties of the toeplitz stbc shown , transforms the design of into a convex optimization problem . in , however , the design parameter and the signal vector are all parts of the toeplitz structure resulting in a non - convex design problem that can only be solved by numerical method with no guarantee for global optimality . ] where is a diagonal matrix given by with being the highest integer for which {kk}={\gamma}_{{\rm op}k}>0 ] where .then , we can write where the first step is a result of the structure of on the toeplitz code , and the second step is the result of an inequality for the determinant of a matrix .equality in eq .( [ eq : hadamard ] ) holds if and only if , i.e. , the singular vectors of are the eigenvectors of . substituting the inequality of ( [ eq : hadamard ] ) in eq .( [ eq : pairwise - error ] ) , we have . since , the worst case average pair - wise error probability is lower bounded by if we minimize both sides of eq .( [ eq : low - bound ] ) , we can write where is obtained according to eq .( [ eq : opgamma ] ) .let us now establish an upper bound for the worst case average pair - wise error probability for the specially structured transmission matrix above . for any error vector , we have where the special structure of has been utilized , denotes the diagonal matrix containing the largest positive eigenvalues of and . using eq .( [ eq : gequality ] ) in property [ pro : lb ] , for any nonzero vector and nonzero in the interval ] .now , is monotonically increasing with . by _composition rule _ ( page 84 ) , the integrand in eq . is a convex function implying that is convex . c. the solution of eq .( [ eq : opgamma ] ) yields the values of the diagonal elements . some of these values may not be positive .we choose all the positive ones to form the singular values of .theorem [ th : bop ] provides us with an efficient scheme to obtain the optimal matrix by numerically minimizing .however , if the chernoff bound of the pairwise error probability is employed as the objective function for minimization instead , a closed - form optimal can be obtained .this can be shown by setting in the pairwise error probability of eq . 
, so that we obtain the chernoff bound as seeking to minimize the worst case chernoff bound , and following similar arguments which establish the the optimization problem in eq . , we arrive at the following problem , where is a diagonal matrix with diagonal elements .this problem is a relaxed form of that in eq .( [ eq : opgamma ] ) and its solution is provided by the following corollary : [ corollary : water - filling ] the solution , , for the optimization problem of eq . can be obtained by employing the water - filling strategy .the diagonal elements of are given by {+}},\\ k = 1,\cdots , k \end{gathered}\ ] ] where notation {+} ] in case i ) with being the dimension of .we evaluated the error performances of the systems equipped with different in all three cases and the results are shown in fig .[ example2 ] from which the following observations can be made : + [ example2 ] + [ example3 ] * for the system employing a ml detector , performance of case ii ) and iii ) are superior to that of case i ) , confirming the theoretical analyses in theorem [ th : bop ] and corollary [ corollary : water - filling ] .* for the system employing a ml detector , the ber performance for cases ii ) and iii ) employing and respectively are very close .this shows that chernoff bound is tight for this system .close performance in the two cases is also true for the system using a zf detector . *although and are optimal transmission matrices developed for the ml detector , they are equally effective in providing substantial performance improvement for the same system employing a linear zf detector . * at lower snr , we have , i.e. , only one transmitter antenna is effective .therefore , given a coded system , linear zf and ml detectors provide the same performance .in the second part of the experiment , we put in case i ) and examine its performance .the error performance for such a choice is shown in fig .[ example3 ] . here ,the system using has higher transmission data rate than those in cases ii ) and iii ) . for the sake of comparison , we have re - plotted in fig .[ example3 ] the performance curves from fig .[ example2 ] of case iii ) corresponding to the uses of as a transmission matrix .( since the performance of cases ii ) is almost the same as that of case iii ) , we have omitted here the performance curves corresponding to the use of ) .it should be noted that when the signals are detected by a ml detector , the system coded with has higher diversity gain over the system with .this is due to the fact that water - filling strategy may not employ all the available transmitter antennas for correlated channels . specifically in this example , the effective number of antennas for is .however , the optimal coding gain achieved by with ml detectors ensures a better performance .it is also important to note that the employments of and result in a relatively large difference in performances , revealing that the upper bound on pep given in eq . is not tightthus , even though this bound is quite commonly employed in stbc designs for independent channels , the results here show that this relaxed bound is a poor design criterion for an environment of highly correlated channel coefficients .[ fig : stbc_4t1r ] * example 3 * : in this example , we compare the ber performance of toeplitz stbc with other stbc for independent miso channels . 
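As a brief aside before these comparisons, the water-filling solution of corollary [corollary:water-filling] can be sketched in code. Since the closed form above is garbled in this copy, the sketch assumes the textbook recipe gamma_k = [mu - c/lambda_k]_+ with the water level mu fixed by a total-power constraint on the diagonal of the matrix; the constant c, the constraint, and the function name are illustrative assumptions rather than the paper's exact expressions.

```python
import numpy as np

def water_filling(eigvals, total_power, c=1.0):
    """Textbook water-filling: gamma_k = max(mu - c / lambda_k, 0), with the
    water level mu chosen so that sum(gamma_k) == total_power.

    eigvals : positive eigenvalues lambda_k of the channel correlation matrix.
    Returns the diagonal entries gamma_k; some may be zero, i.e. the
    corresponding antenna directions are switched off."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    for m in range(lam.size, 0, -1):          # try the m strongest modes
        mu = (total_power + c * np.sum(1.0 / lam[:m])) / m
        gamma = mu - c / lam[:m]
        if gamma[-1] > 0:                      # all m levels sit above the floor
            return np.concatenate([gamma, np.zeros(lam.size - m)])
    return np.zeros(lam.size)

# Example: a correlated 4-antenna channel with eigenvalues spread over a decade.
print(water_filling([2.0, 1.0, 0.4, 0.2], total_power=4.0))
```

The square roots of the returned entries would then be placed along the eigenvectors of the channel correlation matrix to assemble the transmission matrix, in the spirit of theorem [th:bop].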
here again , we choose for toeplitz stbc .the experiments are performed for the two cases in which the number of transmitter antennas in the communication system are and respectively : 1 . transmitter antennas and a single receiver antenna : we compare ber performance of toeplitz stbc with other rate one stbc : * quasi - orthogonal stbc .the code for four transmitter antennas was presented in , and the maximization of its coding gain was subsequently shown in .* dense full - diversity stbc * multi - group decodable stbc + for the toeplitz stbc , we choose for which the symbol transmission data rate is symbols pcu . to achieve a fair comparison , the same transmission _ bit _ rateis imposed on all the codes such that signals are selected from 256-qam constellation for toeplitz stbc and from 64-qam for the other full - rate stbc .therefore , the same transmission bit rate , bits pcu , is employed for all the systems . at the receiver, the toeplitz stbc is processed by a linear zf equalizer followed by a symbol - by - symbol detector . for the other full - rate stbc, we examine the two cases in which the signals are processed by a ) a ml detector and b ) a linear zf receiver .the ber curves are plotted in fig .[ fig : stbc_4t1r ] .when a linear zf equalizer and a symbol - by - symbol detector is applied at the receiver , it can be observed that toeplitz stbc outperforms quasi - orthogonal " stbc and dense " stbc , and at higher snr , its performance is superior to multi - group code .it is also interesting to observe that at higher snr , for toeplitz stbc with linear zf receivers , the performance is also superior to that of the multi - group stbc using a ml receiver .in fact , for the range of snr tested , the slope of its ber curve is the same as those of the dense " stbc and the quasi - orthogonal " stbc processed by ml detectors , indicating they have the same diversity gain .+ [ fig : orthogonal ] 2 .we now consider the system having transmitter antennas .for the toeplitz code , we choose and therefore , the symbol transmission data rate is symbols pcu .we compare the bit error rate performance of our toeplitz code with that of the orthogonal stbc having symbol transmission rate of : + i ) symbols pcu and + ii ) symbols pcu ( this the highest symbol rate achievable by the orthogonal stbc applied to an eight transmitter antenna system ) .+ to achieve a fair comparison , the transmitted signals are selected from a -qam constellation for our toeplitz code , a -qam constellation for the rate orthogonal code and a -qam constellation for the rate orthogonal code .hence , all of the codes have the same transmission data rate in bits , i.e. , bits pcu . at the receiver end, the orthogonal stbc is decoded by a linear zf detector for which , because of the orthogonality , the performance is the same as that of a ml detector . for toeplitz stbc , the signals are decoded separately by a linear zf receiver and a zf - dfe receiver .the average bit error rate for these codes are plotted fig .[ fig : orthogonal ] .it can be observed that the performance of the toeplitz code detected with a linear zf receiver is superior to that of the -rate orthogonal stbc when the snr is less than or equal to 25 db .when the toeplitz stbc is received by a zf - dfe receiver , due to the higher coding gain , its performance is significantly better than that of the orthogonal stbc . 
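As a concrete illustration of the ZF processing assumed for the Toeplitz code in these comparisons, the following self-contained sketch runs zero-forcing equalization over the equivalent Toeplitz channel followed by symbol-by-symbol slicing, on an independent Rayleigh MISO channel with an identity transmission matrix. The QPSK constellation, the nominal SNR bookkeeping, and the helper names are illustrative assumptions and do not reproduce the exact simulation setup of this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def toeplitz_matrix(a, K):
    """(len(a)+K-1) x K Toeplitz matrix whose column j is a delayed by j slots."""
    a = np.asarray(a, dtype=complex)
    T = np.zeros((a.size + K - 1, K), dtype=complex)
    for j in range(K):
        T[j:j + a.size, j] = a
    return T

def zf_detect(y, h, L):
    """ZF equalization over the equivalent channel T(h, L), then QPSK slicing."""
    s_hat = np.linalg.pinv(toeplitz_matrix(h, L)) @ y
    return (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)

N, L, snr_db, n_blocks = 4, 16, 14, 3000
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
sigma = np.sqrt(0.5 * N * 10 ** (-snr_db / 10))   # nominal per-sample noise level
errors = 0
for _ in range(n_blocks):
    s = rng.choice(qpsk, L)
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    y = toeplitz_matrix(s, N) @ h                  # identity transmission matrix B = I
    y += sigma * (rng.standard_normal(y.size) + 1j * rng.standard_normal(y.size))
    errors += np.count_nonzero(zf_detect(y, h, L) != s)
print("symbol error rate at %d dB: %.4f" % (snr_db, errors / (n_blocks * L)))
```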
in fig .[ fig : orthogonal ] at , the toeplitz code with a zf - dfe receiver outperforms the orthogonal code by about 4 db .+ it should be noted that for the toeplitz code , both linear zf and zf - dfe receivers can achieve full - diversity .however , from fig . [fig : orthogonal ] , while the slope of ber curve for toeplitz code with zf - dfe receiver is similar to those of the orthogonal codes , the slope of the curve for the toeplitz code with linear zf receiver is not as steep .recall that the diversity gain of a communication system is defined at _ high _ snr and here , the upper end of the snr range is not sufficiently high . to show full diversity for both systems ,we need ber at higher snr , the evaluation of which demands exorbitant computation for the parameters in this example . to circumvent this difficulty, we choose to compare the _ symbol error rate _( ser ) obtained by the use of the toeplitz code with linear zf receiver to that obtained by the use of the orthogonal code .the results are shown in fig .[ fig : ser ] from which it can be observed that the two ser curves have the same slope for snr above , indicating the same diversity gain for both codes .thus , we can see that the toeplitz code with a linear zf ( or more sophisticated ) receiver indeed achieves full diversity .+ [ fig : ser ]in this paper , we have presented a general design criterion for full - diversity linear stbc when the signals are transmitted through a miso communication system and processed by a linear receiver .this is , to our knowledge , the first design criterion for linear receivers to achieve full diversity . specifically , we proposed a linear toeplitz stbc for a miso channel which satisfies the criterion and achieves full - diversity .we have shown that such a code possesses many interesting properties , two of which recapitulated here are of practical importance : 1 .the symbol transmission rate for the code approaches one when the number of channel uses ( ) is large .if the signalling scheme has a constellation for which the distance between the nearest neighbours is nonzero ( such as qam ) , then employing the toeplitz code results in a non - vanishing determinant . when employed in a miso system equipped with a linear receiver ( zf or mmse ), the toeplitz code can provide full diversity .furthermore , when the number of channel uses is large , in an independent miso flat fading environment , the toeplitz code can approach the zheng - tse optimal diversity - multiplexing tradeoff .when employed in a miso system equipped with a ml detector , for both independent and correlated channel coefficients , we can design the transmission matrix inherent in the proposed toeplitz stbc to minimize the exact worst case average pair - wise error probability resulting in full diversity and optimal coding gain being achieved . 
in particular , when the design criterion of the worst case average pair - wise error probability is approximated by the chernoff bound , we obtain a closed - form optimal solution .the use of the toeplitz stbc ( having an identity transmission matrix ) in a miso system fitted with a zf receiver has been shown by simulations to have the same slope of the ber curves to other full rate stbc employing a ml detector , whereas even better performance can be achieved by using receivers ( such as zf - dfe ) more sophisticated than the linear ones to detect the toeplitz code .for correlated channels , employing the optimum transmission matrices in the toeplitz code results in substantial additional improvements in performance to using the identity transmission matrix .this substantial improvement of performance is observed in either case for which a ml or a zf receiver is used .guey , m. p. fitz , m. r. bell , and w .- y .kuo , `` signal design for high data rate wireless communication systems over rayleigh fading channels , '' in _ proc .ieee vehicular technology conf ._ , pp . 136140 , 1996 .liang and x .- g .xia , `` nonexistence of rate one space - time blocks from generalized complex linear processing orthogonal designs for more than two transmit antennas , '' in _ international symposium on inform . theory _, ( washington dc ) , june . 2001 .w. su and x .- g .xia , `` two generalized complex orthogonal space - time block codes of rates 7/11 and 3/5 for 5 and 6 transmit antennas , '' in _ international symposium on inform . theory _, ( washington dc ) , june . 2001 . j .- k .zhang , k. m. wong , and t. n. davidson , `` information lossless full rate full diversity cyclotomic linear dispersion codes , '' in _ int .speech , signal process ._ , ( montreal , canada ) , may 2004 . h. yao and g. w. wornell , `` achieving the full mimo diversity - vs - multiplexing frontier with rotation - based space - time codes , '' in _41th annual allerton conf .control , and comput ._ , ( monticello , il ) , oct . 2003 .j. c. belfiore , g. r. rekaya , and e. viterbo , `` the golden code : a 2 full rate space - time code with non - vanishing determinants , '' in _ proceedings ieee international symposium on information theory _ , ( chicago ) , june 2004 .wang , j .- k .zhang , y. zhang , and k. m. wong , `` space - time code designs with non - vanishing determinants based on cyclic field extension families , '' in _ int .acoust . , speech , signal process ._ , ( philadelphia , usa ) , march 2005 .h. e. gamal , g. caire , and m. o. damen , `` lattice coding and decoding achieve the optimal diversity multiplexing tradeoff of mimo channels , '' _ ieee trans .inform . theory _ , vol .50 , pp . 968985 , june 2004 .j. hiltunen , c. hollanti , and j. lahtonen , `` dense full - diversity matrix lattices for four transmit antenna miso channel , '' in _ proceedings ieee international symposium on information theory _ , ( adelaide , australia ) , pp . 12901294 , sept .2005 .d. n. dao and c. tellambura , `` optimal rotations for quasi - orthogonal stbc with two - dimentional constellations , '' in _ proceedings ieee global communications conference _ , ( st . louis , u.s.a . ) , pp . 12901294 , nov .- dec .2005 .n. seshadri and j. winters , `` two signaling schemes for improving the error performance of fdd transmission systems using transmitter antenna diversity , '' in _ proc .1993 iee vtc _ , pp . 508511 ,may 1993 .v. tarokh , n. seshadri , and a. r. 
calderbank , `` space - time codes for high date rate wireless communication : performance criterion and code construction , '' _ ieee trans .inform . theory _44 , pp .744765 , mar .1998 .j. w. craig , `` a new , simple , and exact result for calculating the probability of error for two - dimensional signal constellations , '' in _ proc .ieee milit ._ , ( mclean , va ) , pp . 571575 , oct .1991 .a. scaglione , g. b. giannakis , and s. barbarossa , `` redundant filterbank precoders and equalizers .ii . blind channel estimation , synchronization , and direct equalization , '' _ ieee trans .signal process ._ , vol .47 , pp . 20072022 , july 1999 .k. lu , s. fu , and x .-xia , `` closed form designs of complex orthogonal space - time block codes of rates for or transmit antennas , '' _ ieee trans .inform . theory _51 , pp .43404347 , dec .
In this paper, a general criterion for space-time block codes (STBC) to achieve full diversity with a linear receiver is proposed for a wireless communication system having multiple transmitter antennas and a single receiver antenna (MISO). In particular, the STBC with Toeplitz structure satisfies this criterion and therefore enables full diversity. Further examination of this Toeplitz STBC reveals the following important properties: a) the symbol transmission rate can be made to approach unity; b) applying the Toeplitz code to any signalling scheme having nonzero distance between the nearest constellation points results in a non-vanishing determinant. In addition, if QAM is used as the signalling scheme, then for independent MISO flat-fading channels the Toeplitz code is proved to approach the optimal diversity-vs-multiplexing tradeoff with a ZF receiver when the number of channel uses is large. This is, so far, the first non-orthogonal STBC shown to achieve the optimal tradeoff for such a receiver. On the other hand, when ML detection is employed in a MISO system, the Toeplitz STBC achieves the maximum coding gain for independent channels. When the channel fading coefficients are correlated, the inherent transmission matrix in the Toeplitz STBC can be designed to minimize the average worst-case pair-wise error probability. Index terms: full diversity, linear receiver, MISO, ML detection, non-vanishing determinant, optimal diversity-vs-multiplexing tradeoff, STBC, Toeplitz.
in the last decade , time domain decomposition has been exploited to accelerate the simulation of systems ruled by time dependent partial differential equations . among others , the parareal algorithm or multi - shooting schemes have shown excellent results . in the framework of optimal control , this approach has been used to control parabolic systems , . in this paper , we introduce a new approach to tackle such problems . the strategy we follow is based on the concept of target trajectory that has been introduced in the case of hyperbolic systems in . because of the irreversibility of parabolic equations , a new definition of this trajectory is considered .it enables us to define at each bound of the time sub - domains relevant initial conditions and intermediate targets , so that the initial problem is split into independent optimization problems .+ we now introduce some notations . given , we consider the optimal control problem associated with a heat equation defined on a compact set and a time interval interval ] , and the optimal control problem : find such that where ;l^{2}(\omega_c))}\ ] ] with the solution of the equation .we have }}\ ] ] * proof : * thanks to the uniqueness of the solution of the optimization problem associated to , it is sufficient to show that }} ] and \times\omega\\ \tilde p(\tau ) & = & y(\tau)-\chi(\tau ) , \end{array } \right.\ ] ] first , note that }}^\star ] with }} ] satisfies . finally , equation is a consequence of .the result follows . given , we decompose the interval ] , .we also introduce the spaces and the corresponding scalar product and norm . in this framework ,given we define as follows with where is associated to through the definition .in this functional , the state is defined by these subproblems have the same structure as the original one and are also strictly convex .note also that their definitions depend on the control through the target trajectory , hence the notation .+ the optimality system associated with these minimization problems are given by equation and [ lem2 ] we keep the previous notations .denote by the target trajectory defined by equation with and and by the solutions of equations ( [ pos1][pos3 ] ) associated with .one has : the proof of this result follows the lines of lemma [ lem1 ] and is left as an exercise to reader .we are now in the position to propose a time parallelized procedure to solve equations .consider an initial control and assume that , at step , a control is known .the computation of is achieved as follows : 1 .compute , and the associated target trajectory according to equations , and respectively .2 . for ,solve the sub - problems in parallel and denote by the corresponding solutions.[step2 ] 3 .define as the concatenation of the sequence .[ step4 ] update the control variable by where the value is chosen to minimizes .we have not detailed step [ step2 ] as we rather aim at presenting a structure for a general approach .however , because of the strict convexity of the problems we consider , a small number of conjugate gradient method steps can be used to achieve the resolution of these steps .in this section , we test the efficiency of our method . we consider a 2d example , where \times [ 0,1] ] .the parameters related to our control problem are , and .the time interval is discretized using a uniform step , and an implict - euler solver is used to approximate the solution of equations ( [ os1][os2 ] ) . 
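Because the exact definitions of the target trajectory and of the sub-problem functionals are garbled above, the following is only a structural sketch of the outer loop (steps 1-4): the PDE is replaced by a small finite-difference heat system y' = Ay + v with the control acting on the whole domain (in place of the finite-element setting used in this section), and each sub-problem solve of step 2 is reduced to enforcing the local optimality condition with the globally computed adjoint frozen, standing in for the intermediate-target sub-problems. All function names, sizes, and parameter values are illustrative assumptions.

```python
import numpy as np

# Structural sketch only (see lead-in): small 1D heat system y' = A y + v,
# tracking cost J(v) = 1/2 |y(T) - y_target|^2 + (alpha/2) * dt * sum |v_k|^2.
nx, nt, T, alpha, n_windows = 20, 100, 0.05, 1e-2, 4
dt, dx = T / nt, 1.0 / (nx + 1)
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2
M = np.linalg.inv(np.eye(nx) - dt * A)                 # implicit Euler propagator
x = np.linspace(dx, 1.0 - dx, nx)
y0, y_target = np.sin(np.pi * x), np.zeros(nx)

def forward(v):
    """Implicit Euler for the controlled state; returns y at all time levels."""
    ys = [y0]
    for k in range(nt):
        ys.append(M @ (ys[-1] + dt * v[k]))
    return np.array(ys)

def adjoint(terminal_mismatch):
    """Backward (adjoint) heat solve; A is symmetric, so the propagator is reused."""
    ps = [terminal_mismatch]
    for _ in range(nt):
        ps.append(M @ ps[-1])
    return np.array(ps[::-1])

def cost(v):
    y_end = forward(v)[-1]
    return 0.5 * np.sum((y_end - y_target) ** 2) + 0.5 * alpha * dt * np.sum(v**2)

w = nt // n_windows
v = np.zeros((nt, nx))
for it in range(20):
    y = forward(v)                                     # step 1: global state ...
    p = adjoint(y[-1] - y_target)                      # ... and adjoint for current v
    v_tilde = np.empty_like(v)
    for n in range(n_windows):                         # step 2: the parallel part
        sl = slice(n * w, (n + 1) * w)
        v_tilde[sl] = -p[sl] / alpha                   # local alpha*v + p = 0, adjoint frozen
    d = v_tilde - v                                    # step 3: concatenation
    theta = min(np.linspace(0.0, 1.0, 11), key=lambda t: cost(v + t * d))
    v = v + theta * d                                  # step 4: line-search update
print("cost after 20 sweeps:", cost(v))
```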
For the space discretization, we use finite elements. Our implementation makes use of the freeware FreeFem, and the parallelization is achieved with the Message Passing Interface (MPI) library. The independent optimization procedures required in step [step2] are simply carried out using one step of an optimal-step gradient method. The results are presented in figure [fig]. In the first plot, we consider the evolution of the cost functional values with respect to the iterations and do not take the parallelization into account. The result reveals that our algorithm significantly accelerates the optimization process. This outcome may indicate that the splitting introduced in our approach acts as a preconditioner during the numerical optimization; this will be the purpose of further investigation, in the same spirit as in . In the second plot, we represent the evolution of the cost functional values with respect to the number of matrix-vector products. Parallel computations that are done in step [step2] are only counted once. When comparing with a standard optimal-step gradient method, we observe a speed-up approximately equal to 3. (Figure [fig]: cost functional values versus the number of iterations, and versus the number of matrix-vector multiplications.)
In this paper, we present a method that enables the Euler-Lagrange system associated with the optimal control of a parabolic equation to be solved in parallel. Our approach is based on an iterative update of a sequence of intermediate targets and gives rise to independent sub-problems that can be solved in parallel. Numerical experiments show the efficiency of our method.
information overload problem is one of the recent hard problems . there is a huge amount of information on the web and lotis being added to it constantly .organizing the information on a page is a challenging task . while designing a web page ( site ) for an organization, the designer will organize the information in the form of pages that are linked from main page .but all the information is not equally important , in other words , all the links are not used frequently or each link will be having different frequency . in this paper , we propose dynamic link pages that have a fixed space on the page to accommodate the links that are frequently used in addition to their present location .note that the frequently used links will also be present at their actual locations .few of the current day retrieval systems are displaying the frequently visited links of the first retrieved site as shown in figure [ 1 ] .this facility is not available with all the sites and links .in contrast , we propose a model where each site will allocate a fixed space to publish the links that are visited frequently .so , instead of retrieval systems maintaining the log of links , the details can be provided in the site itself .the advantage is two fold .the retrieval systems need not do link analysis within a site . the time spent by a user on visiting a siteis minimized as the frequently visited links are readily available in the visible portion of the screen .* motivation * : with the advancement of internet , information overload problem has emerged as one of the challenging problems .user is overwhelmed with information .also , retrieval systems give only ranked list of links .many a times , user has to go through other links in the page to satisfy his information needs .also , the links that appear as part of the user query , take the popularity of the site into consideration instead of individual links in that site .this two factors motivated me to work towards a solution . [ 1 ] * contributions * : 1 . introduced a new methodology of displaying the popular links so as to decrease the time spent by a user after visiting a site .2 . given a framework to improve the quality of retrieval systems .3 . proposed a framework to improve the quality of summaries generated on multiple text documents .based on the usage of links on a page , the importance score can be assigned to the links in the site and improve the performance of retrieval system .this framework is discussed in section [ ir ] .text summarization is one of the solutions that is adopted to overcome the information overload problem . in section [ summ ] , a framework that improvesthe performance of a summarization task is outlined .world wide web has huge amount of information and more is being added to it constantly ( information overload problem ) . with the advent of internet , the availability and accessibility of information has increased . as informationis made available from multiple sources , it is becoming difficult to a user to go through multiple sites for satisfying his information needs . in this scenario , generating a summary from these multiple sites is of great value. * text summarization * : summarization task can be classified based on various criteria. 
it can be abstractive - extractive , generic - query specific , indicative - informative or single document - multiple documents .abstractive summaries deal with generating an abstract by reformulating few sentences whereas extractive summaries involve extracting important sentences from the document(s ) .a summary is generic if it provides overall sense of the document whereas query specific if it is biased towards a topic .a summary is indicative if it indicates the structure or contents of the document(s ) whereas informative if it gives a comprehensive note .extraction based approaches give a score to each sentence in the document and the top few sentences are selected as summary . in most of the text summarizers ,node scores are calculated by following ideas similar to pagerank and hits and edge scores are calculated based on the amount of similarity between nodes .* organization of a web page * : wiki page is an example of a well organized structure .in fact , every web page is designed carefully to fulfill the purpose of the site .there are variety of themes that are adopted in designing a web site .the following are the classifications of them : 1 ) flat structure 2 ) linked structure and 3 ) mixed structure . in _flat structure _, all the content is made available in the first page of the site itself .user has to scroll up and down in the page to get the information from the site .links also can be made available on the page to go to any portion of the page . in _ linked structure _ , site will have links from the main page to other pages .information is segmented and each segment is stored in different page(s ) .each segment will have a link from the main page ._ mixed structure _ is the combination of both flat and linked structures .our model works on all these three structures . * link analysis by retrieval systems * : a page that is part of a popular site will have a different importance when compared to a page that is not part of an important site .one of the main disadvantages of the current scenario is : `` the links that are part of a popular site will be visited more frequently than the links of a site that are not frequently used. if the most relevant information on a topic is present in a link that is not part of an important site and if some information on that topic is present in a link that is part of a popular site then it is highly likely that the link that is part of a popular site will get higher priority . ''retrieval systems rank all the pages on the web by considering both the popularity of the site in which they are present and the number of clicks on them .consider an example : let there be two sites and . is popular than .two new pages , and are added to and respectively .both these new pages have information on the same topic but is has more valuable information than . in this scenario, will get a better rank than ( as is popular than ) , and only the users who are not satisfied with will visit . but the number of clicks that receives will be at least as many as .therefore will always be preferred to by a retrieval system .this is a classic problem with all the retrieval systems .our model overcomes this problem .since is already popular , the number of users visiting it will be more than the visitors who visit . 
therefore , will retain its popularity .we know that all the links in a page will not be visited with the same frequency .in other words , all the links are not equally important .so , we propose a model to analyze the links on a page .for this , each site will have to maintain two types of counters for each link , 1 ) and 2 ) . is the count of number of times link of site is visited from the launch of the site . is the count of number of times link of site is visited in the recent past . in this section, we propose a model to analyse these counters .analysis is followed by placing few of the popular links in the upper left portion of the home page .* model * : for each link , a score is computed as the product of and .top few links are selected based on these scores .selected links are placed in the upper left corner of the home page .the layout of the site will be as shown in figure [ 2 ] .the layout after identifying popular links is shown in figure [ 4 ] .it is assumed that the information of and is made available by each site as a web service .so , the retrieval system can collect this information from all the sites .current retrieval systems rank the pages based on their popularity , that is a page will be ranked based on 1 ) the number of clicks and 2 ) the popularity of the links that it is connected ( both incoming and outgoing ) .these frameworks fail to capture the importance of a page within its site .the performance of a system can further be improved if the importance of a page within its site is captured .historical importance of a link in site , is calculated as given in equation [ importancescore1 ] .current importance of a link in site , is calculated as given in equation [ importancescore2 ] using the above two measures , popularity of a link is recalculated .equations [ importancescore1 ] and [ importancescore2 ] are also made part of the popularity calculation .this methodology can be used to improve the quality of the retrieval system .* model * : a special weightage is to be given to the links that appear in the top left corner of the site . by doing so , the intentions of users for visiting the site is captured . in other words ,the usage of the site ( popular links ) is obtained .based on this , retrieval system can recalculate the popularity score of the links and thus improve the performance .text summarization is one of the useful tasks that grew in popularity recently . 
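Before turning to summarization, the link-ranking model just described can be sketched in a few lines. Since the normalizations in equations [importancescore1] and [importancescore2] are lost above, the sketch reads them as each counter divided by the site-wide total, which is a plausible but assumed interpretation; the counter names, data layout, and top-k choice are likewise illustrative.

```python
def rank_links(all_time_counts, recent_counts, top_k=5):
    """Score each link as the product of its all-time and recent visit counters,
    and return the top-k links to be placed in the top-left corner of the page.

    all_time_counts, recent_counts : dicts mapping link URL -> visit count.
    Also returns per-link historical and current importance, read here as the
    counter normalized by the site-wide total (an assumed reading of the
    garbled equations above)."""
    total_all = sum(all_time_counts.values()) or 1
    total_recent = sum(recent_counts.values()) or 1
    report = {}
    for link in all_time_counts:
        a, r = all_time_counts[link], recent_counts.get(link, 0)
        report[link] = {
            "score": a * r,                      # product of the two counters
            "historical_importance": a / total_all,
            "current_importance": r / total_recent,
        }
    popular = sorted(report, key=lambda l: report[l]["score"], reverse=True)[:top_k]
    return popular, report

# Toy example: three links of one site.
all_time = {"/about": 120, "/downloads": 900, "/news": 450}
recent = {"/about": 5, "/downloads": 60, "/news": 200}
top, stats = rank_links(all_time, recent, top_k=2)
print(top)   # ['/news', '/downloads']
```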
till date, the summary of a site is generated by giving equal priority to all the links in the site .we propose a model where the priority of the links are used while generating the summary .* framework * : generic summary generation : 1 ) single site summarization : summary of a site is generated by considering only the links that appear on the top left portion of the site .2 ) multiple site summarization : summary is generated by considering all the links that appear on the top left portion of the sites that are to be summarized .query specific summary generation : 1 ) single site summarization : summary of a site is generated by considering only the links that appear on the top left portion of the site and have query term(s ) in them .2 ) multiple site summarization : summary is generated by considering all the links that appear on the top left portion of the sites and have query term(s ) in them .this framework will certainly improve the quality of summarization because , the links that appear on the top left corner are popular and it is highly likely that they will contain information that is useful to users .also , as the number of links / pages that are to be summarized are getting pruned due to which efficiency will increase . * a framework to improve the site classification * :a site can be classified based on the popular links in that site i.e. , the links in the top left corner .the content of these links can be processed in order to classify the site .natural language processing ( nlp ) tools are very inefficient and processing few links / pages is efficient when compared to processing all the links in a site .popular links are in some sense representative pages of their site .therefore , classification of site is done efficiently .in this paper , a model that also takes into consideration the usage of links within a site is proposed .this can be understood as giving importance to local ( within its site ) popularity of a link . in all the frameworks that are proposed in this paper , there is one strong correlation i.e. , the models are dynamic . as the priorities of users change within the site , so will the popularity score of the links . in some sense, user s feedback is considered while re - ranking the links .one shortcoming of this paper is that only the frameworks are proposed .kevin knight and daniel marcu .statistics - based summarization - step one : sentence compression . in _ proceedings of the seventeenth national conference on artificial intelligence and twelfth conference on innovative applications of artificial intelligence _ , pages 703710 .aaai press / the mit press , 2000 .l. page , s. brin , r. motwani , and t. winograd . the pagerank citation ranking : bringing order to the web . in _ proceedings of the 7th international world wide web conference _ , pages 161172 , brisbane , australia , 1998 .dragomir r. radev , hongyan jing , and malgorzata budzikowska .centroid - based summarization of multiple documents : sentence extraction , utility - based evaluation , and user studies . in _naacl - anlp 2000 workshop on automatic summarization _, pages 2130 , seattle , washington , 2000 .association for computational linguistics .m sravanthi , c r chowdary , and p sreenivasa kumar .quests : a query specific text summarization system . in_ proceedings of the 21st international flairs conference _ ,pages 219224 , florida , usa , may 2008 .aaai press .ramakrishna varadarajan and vagelis hristidis .a system for query - specific document summarization . 
in _cikm 06 : proceedings of the 15th acm international conference on information and knowledge management _ , pages 622631 , arlington , virginia , usa , 2006 .acm press .michael j. witbrock and vibhu o. mittal .ultra - summarization ( poster abstract ) : a statistical approach to generating highly condensed non - extractive summaries . in _sigir 99 : proceedings of the 22nd annual international acm sigir conference on research and development in information retrieval _ , pages 315316 , berkeley , california , united states , 1999
In this paper we introduce the concept of dynamic link pages. A web site or page contains a number of links to other pages, and these links are not all equally important: some are visited frequently and others rarely. In this scenario, identifying the frequently used links and placing them in the top left corner of the page increases user satisfaction and reduces the time a visitor spends on the page, since the popular links are then presented in the visible part of the screen itself. A site can also be indexed based on its popular links, which increases the efficiency of the retrieval system. We present a model to display the popular links, and also propose a method to improve the quality of retrieval systems.
baseball has a rich tradition of misjudged pop - ups .for example , in april , 1961 , roy sievers of the chicago white sox hit a towering pop - up above kansas city athletics third baseman andy carey who fell backward in trying to make the catch .the ball landed several feet from third base , far out of the reach of carey .it rolled into the outfield , and sievers wound up on second with a double .a few other well - known misplays of pop - ups include : new york giants first baseman fred merkle s failure to catch a foul pop - up in the final game of the 1912 world series , costing the giants the series against the boston red sox ; st .louis first baseman jack clark s botched foul pop - up in the sixth game of the 1985 world series against kansas city ; and white sox third baseman bill melton s broken nose suffered in an attempt to catch a `` routine '' pop - up in 1970 .as seen by these examples , even experienced major league baseball players can find it difficult to position themselves to catch pop - ups hit very high over the infield .players describe these batted balls as `` tricky '' or `` deceptive , '' and at times they will be seen lunging for the ball in the last instant of the ball s descent .`` pop - ups look easy to anyone who hasnt tried to catch one - like a routine fly ball that you do nt have to run for , '' clete boyer said , `` but they are difficult to judge and can really make you look like an idiot . ''boyer , a veteran of sixteen years in the major leagues , was considered one of the best defensive infielders in baseball .several factors can exacerbate the infielder s problem of positioning himself for a pop - up .wind currents high above the infield can change the trajectory of the pop - up radically .also , during day games the sky might provide little contrast as a background for the ball a condition called a `` high sky '' by players . then , there are obstacles on the field bases , the pitcher s mound , and teammates that can hinder the infielder trying to make a catch .but even on a calm night with no obstacles nearby , players might stagger in their efforts to get to the ball .the frequency of pop - ups in the major leagues an average of nearly five pop - ups per game is great enough that teams provide considerable pop - up practice for infielders and catchers . yet, this practice appears to be severely limited in increasing the skill of these players .infielders seem unable to reach the level of competency in catching `` sky - high '' pop - ups that outfielders attain in catching high fly balls , for example .this suggests that the technique commonly used to catch pop - ups might be the factor limiting improvement .almost all baseball players learn to catch low , `` humpback '' pop - ups and fly balls before they have any experience in catching lofty pop - ups . in youth leaguesnearly all pop - ups have low velocities and few exceed a height of fifty feet ; therefore , they have trajectories that are nearly parabolic .fly balls , too , have near - parabolic trajectories .young players develop techniques for tracking low pop - ups and fly balls .if 120-foot pop - ups do not follow similar trajectories , however , major league infielders might find pop - ups are hard to catch because the tracking and navigation method they have learned in their early years is unreliable for high , major league pop - ups . 
in the consideration of this hypothesis, we first describe trajectories of a set of prototypical batted balls , using models of the bat - ball collision and ball flight aerodynamics .we then develop models of three specific kinds of typical non - parabolic pop - up trajectories .these `` paradoxical '' trajectories exhibit unexpected behavior around their apices , including cusps and loops .several of these paradoxical trajectories are fitted with an optical control model that has been used successfully to describe how players track and navigate to fly balls .for each fit , a prediction of the behavior of infielders attempting to position themselves to catch high pop - ups is compared with the observed behavior of players during games .as every student in an introductory physics course learns , the trajectory of a fly ball in a vacuum is a smooth symmetric parabola since the only force acting on it is the downward pull of gravity .however , in the atmosphere the ball is subject to additional forces , shown schematically in fig .[ fig : forces ] : the retarding force of drag ( ) and the magnus force ( ) .the magnus force was first mentioned in the scientific literature by none other than a young isaac newton in his treatise on the theory of light, where he included a brief description on the curved trajectory of a spinning tennis ball . whereas the drag force always acts opposite to the instantaneous direction of motion ,the magnus force is normal to both the velocity and spin vectors . for a typical fly ball to the outfield, the drag force causes the trajectory to be somewhat asymmetric , with the falling angle steeper than the rising angle, although the trajectory is still smooth .if the ball has backspin , as expected for such fly balls , the magnus force is primarily in the upward direction , resulting in a higher but still quite smooth trajectory .however , as we will show the situation is qualitatively very different for a pop - up , since a ball - bat collision resulting in a pop - up will have a considerable backspin , resulting in a significantly larger magnus force than for a fly ball .moreover , the direction of the force is primarily horizontal with a sign that is opposite on the upward and downward paths .these conditions will result in unusual trajectories sometimes with cusps , sometimes with loops that we label as `` paradoxical . '' with this brief introduction ,we next discuss our simulations of baseball trajectories in which a model for the ball - bat collision ( sec .[ sec : collision ] ) is combined with a model for the drag and magnus forces ( sec .[ sec : aero ] ) to produce the batted - ball trajectories .we discuss the paradoxical nature of these trajectories in sec [ sec : trajs ] in light of the interplay among the various forces acting on the ball .the collision model is identical to that used both by sawicki and by cross and nathan. the geometry of the collision is shown in fig .[ fig : geom ] . 
a standard baseball ( =1.43 inch, mass=5.1 oz ) approaches the bat with an initial speed =85 mph , initial backspin =126 rad / s ( 1200 rpm ) , and at a downward angle of 8.6 ( not shown in the figure ) .the bat has an initial velocity =55 mph at the point of impact and an initial upward angle of 8.6 , identical to the downward angle of the ball .the bat was a 34-inch long , 32-oz wood bat with an r161 profile , with radius =1.26 inch at the impact point .if lines passing through the center of the ball and bat are drawn parallel to the initial velocity vectors , then those lines are offset by the distance .simply stated , is the amount by which the bat undercuts ( ) or overcuts ( ) the ball . in the absence of initial spin on the baseball , a head - on collision ( )results in the ball leaving the bat at an upward angle of 8.6 and with no spin ; undercutting the ball produces backspin and a larger upward angle ; overcutting the ball produces topspin and a smaller upward or even a downward angle .the ball - bat collision is characterized by two constants , the normal and tangential coefficients of restitution and , respectively with the additional assumption that angular momentum is conserved about the initial contact point between the ball and bat. for , we use the parameterization e_n = 0.54 - ( v_n-60)/895 , where is the normal component of the relative ball - bat velocity in units of mph. we further assume , which is equivalent to assuming that the tangential component of the relative ball - bat surface velocity , initially equal to , is identically zero as the ball leaves the bat , implying that the ball leaves the bat in a rolling motion . the loss of tangential velocity occurs as a result of sliding friction , and it was verified by direct calculation that the assumed coefficient of friction of 0.55 is sufficient to bring the tangential motion to a halt prior to the end of the collision for all values of inches . given the initial velocities and our assumptions about and , the outgoing velocity , angle , and backspin of the baseball can be calculated as a function of the offset .these parameters , which are shown in fig .[ fig : d ] , along with the initial height of 3 ft ., serve as input into the calculation of the batted - ball trajectory .note particularly that both and are strong functions of , whereas only weakly depends on .the trajectory of the batted baseball is calculated by numerically solving the differential equations of motion using a fourth - order runge - kutta technique , given the initial conditions and the forces .conventionally , drag and magnus forces are written as & = & -c_dav^2 + & = & c_lav^2 ( ) , [ eq : drag ] where is the air density ( 0.077 lb / ft ) , is the cross sectional area of the ball ( 6.45 inch ) , is the velocity , is the angular velocity , and and are phenomenological drag and lift coefficients , respectively .note that the direction of the drag is opposite to the direction of motion whereas the direction of the magnus force is determined by a right - hand rule .we utilize the parametrizations of sawicki et al. in which is a function of the speed and is a bilinear function of spin parameter , implying that is proportional to .since the velocity of the ball does not remain constant during the trajectory , it is necessary to recompute and at each point in the numerical integration .the resulting trajectories are shown in fig .[ fig : trajs ] for values of in the range 0 - 1.7 inches , where an initial height of 3 ft was assumed . 
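The collision and aerodynamic parametrizations above fix the launch conditions and the speed- and spin-dependent coefficients; the following self-contained sketch reproduces only the flight part, with a fixed-step RK4, constant representative drag and lift coefficients, and the standard 1/2*rho*C*A*v^2 force convention. These are simplifying assumptions relative to the model used for fig. [fig:trajs], which recomputes the coefficients at every step and obtains the launch speed, angle, and backspin from the bat-ball collision model rather than taking them as inputs.

```python
import numpy as np

RHO, AREA, MASS, G = 1.23, 0.00416, 0.145, 9.81   # SI units for a baseball
CD, CL = 0.35, 0.25                                # representative constants

def acceleration(v, spin_axis):
    """Drag opposes the velocity; the Magnus force points along spin x velocity.
    CL here stands in for the spin-dependent lift coefficient of the text."""
    speed = np.linalg.norm(v)
    drag = -0.5 * RHO * CD * AREA * speed * v
    magnus = 0.5 * RHO * CL * AREA * speed * np.cross(spin_axis, v)
    return (drag + magnus) / MASS + np.array([0.0, 0.0, -G])

def fly(v0_mph, launch_deg, backspin=True, dt=0.01, h0=0.9):
    """RK4 integration until the ball returns to the ground (start height ~3 ft)."""
    v0, th = v0_mph * 0.447, np.radians(launch_deg)
    r = np.array([0.0, 0.0, h0])
    v = np.array([v0 * np.cos(th), 0.0, v0 * np.sin(th)])
    axis = np.array([0.0, -1.0, 0.0]) if backspin else np.array([0.0, 1.0, 0.0])
    out, t = [(0.0, r[0], r[2])], 0.0
    while r[2] > 0.0:
        k1v = acceleration(v, axis);              k1r = v
        k2v = acceleration(v + 0.5*dt*k1v, axis); k2r = v + 0.5*dt*k1v
        k3v = acceleration(v + 0.5*dt*k2v, axis); k3r = v + 0.5*dt*k2v
        k4v = acceleration(v + dt*k3v, axis);     k4r = v + dt*k3v
        r = r + dt/6.0 * (k1r + 2*k2r + 2*k3r + k4r)
        v = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        out.append((t, r[0], r[2]))
    return np.array(out)

# A fly-ball-like launch versus a steep pop-up: compare hang time and range.
for angle in (30.0, 80.0):
    traj = fly(v0_mph=90.0, launch_deg=angle)
    print(f"launch {angle:>4.0f} deg: hang time {traj[-1, 0]:5.2f} s, "
          f"range {traj[-1, 1] * 3.28:6.1f} ft")
```

With a steep launch and a sufficiently large lift coefficient, the horizontal Magnus component dominates drag and the computed paths develop the cusp- and loop-like shapes discussed next.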
the striking feature of fig .[ fig : trajs ] is the qualitatively different character of the trajectories as a function of , or equivalently as a function of the takeoff angle .these trajectories range from line drives at small , to fly balls at intermediate , to pop - ups at large . particularly noteworthyis the rich and complex behavior of the pop - ups , including cusps and loops .the goal of this section is to understand these trajectories in the context of the interplay among the forces acting on the ball . to our knowledge, there has been no previous discussion of such unusual trajectories in the literature .we focus on two particular characteristics that may have implications for the algorithm used by a fielder to catch the ball : the symmetry / asymmetry about the apex and the curvature . before proceeding , however , we remark that the general features of the trajectories shown in fig .[ fig : trajs ] are universal and do not depend on the particular model used for either the ball - bat collision or for the drag and lift .for example , using collision and aerodynamics models significantly different from those used here , adair finds similar trajectories with both cusp - like and loop - like behavior, which we verify with our own calculations using his model .models based on equations in watts and bahill result in similar trajectories .we first examine the symmetry , or lack thereof , of the trajectory about the apex . without the drag and magnus forces ,all trajectories would be symmetric parabolas ; the actual situation is more complicated . as seen in fig .[ fig : trajs ] , baseballs hit at low and intermediate ( line drives and fly balls ) have an asymmetric trajectory , with the ball covering less horizontal distance on the way down than it did on the way up .this feature is known intuitively to experienced outfielders . for larger asymmetry is smaller , and pop - ups hit at a very steep angle are nearly symmetric . how do the forces conspire to produce these results ?we address this question by referring to figs .[ fig : forcet1 ] and [ fig : forcet2 ] , in which the time dependence of the horizontal components of the velocity and the forces are plotted for a fly ball ( , ) and a pop - up ( , ) . the initial decrease of the drag force for early times is due to the particular model used for the drag coefficient , which experiences a sharp drop near 75 mph .the asymmetry of the trajectory depends on the interplay between the horizontal components of drag and magnus , and , respectively . for forward - going trajectories ( ), always acts in the -x direction , whereas acts in the -x or + x direction on the rising or falling part of the trajectory , respectively .the relative magnitudes of and depend strongly on both and .for fly balls , and are small enough ( see fig .[ fig : d ] ) that the magnitude of is generally larger than the magnitude of , as shown in fig .[ fig : forcet1 ]. therefore is negative throughout the trajectory . under such conditions, there is a smooth continuous decrease in , leading to an asymmetric trajectory , since the horizontal distance covered prior to the apex is greater than that covered after the apex .the situation is qualitatively and quantitatively different for pop - ups , since both and are significantly larger than for a fly ball . as a result ,the magnitude of is much greater than the magnitude of .indeed , fig .[ fig : forcet2 ] shows that , so that acts in the -x direction before the apex and in the + x direction after the apex . 
therefore , the loss of while rising is largely compensated by a gain in while falling , resulting in near symmetry about the apex .moreover , for this particular trajectory the impulse provided by while rising is nearly sufficient to bring to zero at the apex , resulting in the cusp - like behavior . for even larger values of , is so large that changes sign prior to the apex , then reverses sign again on the way down , resulting in the loop - the - loop pattern .we next address the curvature of the trajectory , , which is determined principally by the interplay between the magnus force and the component of gravity normal to the trajectory .it is straightforward to show that is directly proportional to the instantaneous value of and in particular that the sign of is identical to the sign of . in the absence of a magnus force , the curvature is always negative , even if drag is present .an excellent example is provided by the inverted parabolic trajectories expected in the absence of aerodynamic forces .the trajectories shown in fig . [ fig : trajs ] fall into distinct categories , depending on the initial angle . for small enough , is negative throughout the trajectory . indeed ,if c is initially negative , then it is always negative , since is never larger and is never smaller than it is at t=0 . for our particular collision and aerodynamic model ,the initial curvature is negative for less than about . for intermediate , is positive at the start and end of the trajectory but experiences two sign changes , one before and one after the apex .the separation between the two sign changes decreases as increases , until the two values coalesce at the apex , producing a cusp . for larger values of , is positive throughout the trajectory , resulting in loop - like behavior such as the trajectory , where the sign of is initially positive , then changes to negative before the apex , and finally changes to back positive after the apex .all the simulations reported thus far assume that the spin remains constant throughout the trajectory .since the spin plays such a major role in determining the character of the trajectory , it is essential to examine the validity of that assumption . to our knowledge, there have been no experimental studies on the spin decay of baseballs , but there have been two such studies for golf , one by smits and smith and one by tavares et al. tavares et al .propose a theoretical model for the spin decay of a golf ball in which the torque responsible for the decay is expressed as , where is the radius of the ball and is the coefficient of moment " which is given by . by equating the torque to , where is the moment of inertia ,the spin decay constant can be expressed as = .[ eq : tau]using their measurements of , tavares et al .determine , corresponding to sec for =100 mph .the measurements of smits and smith can be similarly interpreted with =0.009 , corresponding to sec at 100 mph . 
to estimate the spin decay constant for a baseball , we assume eq .[ eq : tau ] applies , with scaled appropriately for a baseball and with all other factors the same .using = 2.31 and 2.49 oz / inch for a golf ball and baseball , respectively , the decay time for a baseball is about 8% longer than for a golf ball , or 22 - 27 sec at 100 mph and longer for smaller .a similar time constant for baseball was estimated by sawicki et al., quite possibly using the same arguments as we use here .since the trajectories examined herein are in the air 7 sec or less , we conclude that our results are not affected by the spin decay .adair has suggested a much smaller decay time , of order 5 sec, which does not seem to be based on any experimental data .a direct check of our calculations shows that the qualitative effects depicted in fig .[ fig : trajs ] persist even with a decay time as short as 5 sec .in a seminal article seville chapman proposed an optical control model for catching fly balls , today known as optical acceleration cancellation ( oac ) .chapman examined the geometry of catching from the perspective of a moving fielder observing an approaching ballistic target that is traveling along a parabola .he showed that in this case , the fielder can be guided to the destination simply by selecting a running path that keeps the image of the ball rising at a constant rate in a vertical image plane .mathematically , the tangent of the vertical optical angle to the ball increases at a constant rate .when balls are headed to the side , other optical control strategies become available. however , in the current paper we examine cases of balls hit directly toward the fielder , so we will emphasize predictions of the oac control mechanism .chapman assumed parabolic trajectories because of his ( incorrect ) belief that the drag and magnus forces have a negligible effect on the trajectory .of course we now know that the effects of these forces can be considerable , as discussed in sec . [ sec : trajs ] . yet despite this initial oversight , numerous perception - action catching studies confirm that fielders actually do appear to utilize chapman s type of optical control mechanism to guide them to interception , and in particular oac is the only mechanism that has been supported for balls headed in the sagittal plane directly toward fielders. further support for oac has been found with dogs catching frisbees as well as functioning mobile robots. extensive research on the navigational behavior of baseball players supports that perceptual judgment mechanisms used during fly ball catching can generally be divided into two phases. during the first phase , while the ball is still relatively distant , ball location information is largely limited to the optical trajectory ( i.e. the observed trajectory path of the image of the ball ) . during the second or final phase ,other cues such as the increase in optical size of the ball , and the stereo angle between the two eyes also become available and provide additional information for final corrections in fielder positioning and timing .the control parameters in models like oac are optical angles from the fielder s perspective , which help direct fielder position relative to the ongoing ball position .considerable work exploring and examining the final phase of catching has been done by perception scientists and some recent speculation has been done by physicists. 
researchers generally agree that the majority of fielder movement while catching balls takes place during the first phase in which fielders approach the destination region where the ball is headed . in the current work ,we focus on control models like oac that guide fielder position during the initial phase of catching .thus for example , we would consider the famous play in which jose canseco allowed a ball to bounce off of his head for a home run to be a catch , in that he was guided to the correct location to intercept the ball .an example of how a fielder utilizes the oac control strategy to intercept a routine fly ball to the outfield is given in fig .[ fig : oac ] .this figure illustrates the side view of a moving fielder using oac control strategy to intercept two realistic outfield trajectories determined by our aerodynamics model described in sec .[ sec : trajs ] . as specified by oac , the fielder simply runs up or back as needed to keep the tangent of the vertical optical angle to the ball increasing at a constant rate .since the trajectory deviates from a parabola , the fielder compensates by altering running speed somewhat .geometrically the oac solution can be described as the fielder keeping the image of the ball rising at a constant rate along a vertical projection plane that moves forward or backwards to remain equidistant to the fielder . for fly balls of this length ,the geometric solution is roughly equivalent to the fielder moving in space to keep the image of the ball aligned with an imaginary elevator that starts at home plate and is tilted forward or backward by the amount corresponding to the distance that the fielder runs . as can be seen in the figure , these outfield trajectories are notably asymmetric , principally due to air resistance shortening , yet oac still guides the fielder along a smooth , monotonic running path to the desired destination .this simple , relatively direct navigational behavior has been observed in virtually all previous perception - action catching studies with humans and animals. most previous models of interceptive perception - action assume that real - world fly ball trajectories remain similar enough to parabolic for robust optical control strategies like oac to generally produce simple , monotonic running path solutions .supporting tests have confirmed simple behavior consistent with oac in relatively extreme interception conditions including catching curving frisbees , towering outfield blasts and short infield pop - ups. the apparent robustness of these optical control mechanisms implies the commonly observed vacillating and lurching of fielders pursuing high pop - ups must be due to some inexplicable cause .it appears that the infielder is an unfortunate victim of odd wind conditions , if not perhaps a bit too much chew tobacco or a nip of something the inning before . 
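As a concrete illustration of how OAC can be operationalized, the sketch below drives a fielder with a clipped proportional feedback on the measured optical acceleration, i.e. the discrete second difference of tan(alpha), where tan(alpha) is the ball height divided by the horizontal distance from fielder to ball. The gain, the kinematic limits, and the drag-free parabolic test trajectory are illustrative assumptions rather than the control model fitted in the studies cited above; feeding in the Magnus-dominated pop-up trajectories of the aerodynamic model is what produces the vacillating running paths examined next.

```python
import numpy as np

G_FT, DT = 32.17, 0.05                 # ft/s^2 and time step in s
GAIN = 300.0                           # fielder accel per unit optical acceleration
MAX_ACC, MAX_SPEED = 25.0, 30.0        # rough human limits, ft/s^2 and ft/s

def ball_parabola(v0, launch_deg, t):
    """Drag-free test trajectory launched from a 3 ft height (x and z in feet)."""
    th = np.radians(launch_deg)
    return v0 * np.cos(th) * t, 3.0 + v0 * np.sin(th) * t - 0.5 * G_FT * t**2

def run_fielder(v0=110.0, launch_deg=80.0, x_fielder=100.0):
    """OAC feedback: accelerate in proportion to the measured optical acceleration,
    driving it toward zero.  Returns (landing x, final fielder x, fielder path)."""
    t, vx_f, path, tan_hist = 0.0, 0.0, [x_fielder], []
    while True:
        xb, zb = ball_parabola(v0, launch_deg, t)
        if zb <= 0.0 and t > 0.0:
            return xb, x_fielder, np.array(path)
        tan_hist.append(zb / max(x_fielder - xb, 1e-3))
        if len(tan_hist) >= 3:                    # discrete optical acceleration
            opt_acc = (tan_hist[-1] - 2 * tan_hist[-2] + tan_hist[-3]) / DT**2
            acc = np.clip(GAIN * opt_acc, -MAX_ACC, MAX_ACC)
            vx_f = np.clip(vx_f + acc * DT, -MAX_SPEED, MAX_SPEED)
        x_fielder += vx_f * DT
        path.append(x_fielder)
        t += DT

landing, final_pos, path = run_fielder()
print(f"ball lands at {landing:.1f} ft, fielder ends at {final_pos:.1f} ft")
```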
in the current work ,we have provided evidence that there is a class of high infield pop - ups that we refer to as paradoxical .next we show that these deviate from normal parabolic shape in ways dramatic enough to lead fielders using oac to systematically head off in the wrong direction or bob forward and back .below we illustrate how a fielder guided by oac will behave with each of the three paradoxical pop fly trajectories that we determined in sec .[ sec : simulations ] of this paper .we first examine perhaps the most extreme paradoxical trajectory of the group , the case of , shown in fig .[ fig : pop17 ] .this trajectory actually does a full loop - the - loop between the catcher and pitcher , finally curving back out on its descent and landing about 30 feet from home plate .given the extreme directional changes of this trajectory , we might expect an infielder beginning 100 feet from home plate to experience difficulty achieving graceful interception . yet , as can be seen in the figure , this case actually results in a relatively smooth running path solution .when the fielder maintains oac throughout his approach , he initially runs quickly forward , then slightly overshoots the destination , and finally lurches back . in practice , near the interception point , the fielder is so close to the approaching ball that it seems likely the eventual availability of other depth cues like stereo disparity and rate of change in optical size of the ball will mitigate any final lurch , and result in a fairly smooth overall running path to the destination .second we examine the case of a pop fly resulting from a bat - ball offset in fig . [ fig : pop16 ] . herethe horizontal velocity decreases in the beginning and approaches zero velocity near the apex .then after the apex , the magnus force increases the horizontal velocity . yet , of greater impact to the fielder is that this trajectory s destination is near where the fielder begins .thus from the fielder s perspective , before the discontinuity takes place the trajectory slows in the depth direction such as to guide the fielder to run up too far and then later to reverse course and backtrack to where the ball is now accelerating forward . herethe normally reliable oac strategy leads the fielder to systematically run up too far and in the final second lurch backwards .third , we examine the case of a pop fly that lands just beyond the fielder , the condition , in fig .[ fig : pop15 ] . in this caseoac leads the fielder to initially head back to very near where the ball is headed , but then soon after change direction and run forward , only to have to run back again at the end . certainly , when a fielder vacillates or dances around " this much , it does not appear that he is being guided well to the ball destination . yet , this seemingly misguided movement is precisely specified by the oac control mechanism .thus , the assumption that fielders use oac leads to the bold prediction that even experienced , professional infielders are likely to vacillate and make a final lurch backward when navigating to catch some high , hard - hit pop - ups , and indeed this is a commonly witnessed phenomenon .former major league infielders have affirmed to us that pop - ups landing at the edge of the outfield grass ( 100 to 130 ft . from home plate ) usually are the most difficult to catch .it is notable that in each of the cases depicted in figs .[ fig : pop17]-[fig : pop15 ] , the final movement by the fielder prior to catching the ball is backwards . 
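to get a feel for how a strong magnus force produces the loops and cusps described above, here is a minimal point-mass integrator with quadratic drag and a magnus term. this is not the aerodynamics model of sec. [sec:trajs]; the drag and lift coefficients, the spin and the launch state are rough guesses, chosen so that the magnus force strongly opposes the horizontal motion on the way up and pushes it forward on the way down.

```python
import numpy as np

# rough baseball parameters (illustrative only)
m, r, rho, g = 0.145, 0.0366, 1.2, 9.81
A = np.pi * r**2
Cd, Cl = 0.35, 0.25                      # guessed drag / lift coefficients
spin = np.array([0.0, -300.0, 0.0])      # rad/s: backspin for motion in the +x direction

def accel(v):
    speed = np.linalg.norm(v)
    drag = -0.5 * rho * Cd * A * speed * v / m
    magnus = 0.5 * rho * Cl * A * speed * np.cross(spin, v) / (m * np.linalg.norm(spin))
    return drag + magnus + np.array([0.0, 0.0, -g])

# very steep, hard-hit launch: mostly vertical with a little forward speed
v = np.array([6.0, 0.0, 45.0])
x = np.zeros(3)
dt, path = 1e-3, [x.copy()]
while x[2] >= 0.0:
    v = v + dt * accel(v)                # simple semi-implicit Euler step
    x = x + dt * v
    path.append(x.copy())
path = np.array(path)
print(path[:, 0].min(), path[:, 0].max(), path[-1, 0])   # horizontal excursion and landing point
```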
this feature can be directly attributed to the curvature of the trajectory , as discussed in sec .[ sec : trajs ] . for a typical fly ball ,the curvature is small and negative , so the ball breaks slightly towards home plate as it nears the end of its trajectory . for pop - ups ,the curvature is large and positive , so the ball breaks away from home plate , forcing the fielder to move backward just prior to catching the ball .why are very high pop - ups so hard to catch ? using models of the bat - ball collision and ball flight aerodynamics , we have shown that the trajectories of these pop - ups have unexpected features , such as loops and cusps .we then examined the running paths that occur with these dramatically non - parabolic trajectories when a fielder utilizes oac , a control strategy that has been shown effective for tracking near - parabolic trajectories .the predicted behavior is very similar to observed behavior of infielders attempting to catch high pop - ups .they often vacillate forward and backward in trying to position themselves properly to make the catch , and frequently these changes in direction can lead to confusion and positioning error .former major league infielders confirm that our model agrees with their experiences .we are grateful to former major league players clete boyer , jim french , norm gigon , bill heath , dave hirtz , and wayne terwilliger for their valuable comments and advice .also , we thank david w. smith and stephen d. boren for information they provided about pop - ups in the major leagues .finally , we thank bob adair for sharing his own unpublished work with us on judging fly balls and for the insight regarding the final backward movement . a. j. smits and d. r. smith , `` a new aerodynamic model of a golf ball in flight , '' science and golf ii , proceedings of the 1994 world scientific congress on golf , edited by a. j. cochran and m. r. farraly(e&fn spon ., london , 1994 ) , pp .340 - 347 .g. tavares , k. shannon , and t. melvin , `` golf ball spin decay model based on radar measurements , '' science and golf iii , proceedings of the 1998 world scientific congress on golf , edited by m. r. farraly and a. j. cochran(human kinetics , champaign il , 1999 ) , pp .464 - 472 .t. g. babler , t. g. and j. l. dannemiller , `` role of image acceleration in judging landing location of free - falling projectiles , '' j. expt .psychology : human perception and performance * 19 * , 1531 ( 1993 ) d. m. shaffer and m. k. mcbeath , `` naive beliefs in baseball : systematic distortion in perceived time of apex for fly balls , '' journal of experimental psychology : learning memory and cognition , * 31 * 14921501 ( 2005 ) .
even professional baseball players occasionally find it difficult to gracefully approach seemingly routine pop - ups . this paper describes a set of towering pop - ups with trajectories that exhibit cusps and loops near the apex . for a normal fly ball , the horizontal velocity is continuously decreasing due to drag caused by air resistance . but for pop - ups , the magnus force ( the force due to the ball spinning in a moving airflow ) is larger than the drag force . in these cases the horizontal velocity decreases in the beginning , like a normal fly ball , but after the apex , the magnus force accelerates the horizontal motion . we refer to this class of pop - ups as paradoxical because they appear to misinform the typically robust optical control strategies used by fielders and lead to systematic vacillation in running paths , especially when a trajectory terminates near the fielder . in short , some of the dancing around when infielders pursue pop - ups can be well explained as a combination of bizarre trajectories and misguidance by the normally reliable optical control strategy , rather than apparent fielder error . former major league infielders confirm that our model agrees with their experiences .
the first problem to demonstrate a superpolynomial separation between random and quantum polynomial time was the recursive fourier sampling problem .exponential separations were subsequently discovered by simon , who gave an oracle problem , and by shor , who found polynomial time quantum algorithms for factoring and discrete log .we now understand that the natural generalization of simon s problem and the factoring and discrete log problems is the hidden subgroup problem ( hsp ) , and that when the underlying group is abelian and finitely generated , we can solve the hsp efficiently on a quantum computer .while recent results have continued to study important generalizations of the hsp ( for example , ) , only the recursive fourier sampling problem remains outside the hsp framework . in this paper ,we give quantum algorithms for several hidden shift problems . in a hidden shift problemwe are given two functions , such that there is a shift for which for all .the problem is then to find .we show how to solve this problem for several classes of functions , but perhaps the most interesting example is the shifted legendre symbol problem , where is the legendre symbol is defined to be 0 if divides , 1 if is a quadratic residue mod and if is not a quadratic residue mod . ] with respect to a prime size finite field , and the problem is then : `` given the function as an oracle , find '' .the oracle problem our algorithms solve can be viewed as the problem of predicting a pseudo - random function .such tasks play an important role in cryptography and have been studied extensively under various assumptions about how one is allowed to query the function ( nonadaptive versus adaptive , deterministic versus randomized , et cetera ) . in this paperwe consider the case where the function is queried in a quantum mechanical superposition of different values .we show that if is an -shifted multiplicative character , then a polynomial - time quantum algorithm making such queries can determine the hidden shift , breaking the pseudo - randomness of .we conjecture that classically the shifted legendre symbol is a pseudo - random function , that is , it is impossible to efficiently predict the value of the function after a polynomial number of queries if one is only allowed a classical algorithm with oracle access to .partial evidence for this conjecture has been given by damgrd who proposed the related task : `` given a part of the legendre sequence , where is , predict the next value '' , as a hard problem with applications in cryptography .using the quantum algorithms presented in this paper , we can break certain algebraically homomorphic cryptosystems by a reduction to the shifted legendre symbol problem .the best known classical algorithm for breaking these cryptosystems is subexponential and is based on a smoothness assumption .these cryptosystems can also be broken by shor s algorithm for period finding , but the two attacks on the cryptosystems appear to use completely different ideas . 
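for concreteness, the shifted legendre symbol oracle is easy to write down classically; euler's criterion gives the symbol as one modular exponentiation. the prime and the hidden shift below are small illustrative values.

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion: a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p, s = 103, 42                      # illustrative prime and hidden shift
f = lambda x: legendre(x + s, p)    # the oracle handed to the algorithm
print([f(x) for x in range(10)])
```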
while current quantum algorithms solve problems based on an underlying group and the fourier transform over that group , we initiate the study of problems where there is an underlying ring or field .the fourier transform over the additive group of the ring is defined using the characters of the additive group , the additive characters of the ring .similarly , the multiplicative group of units induces multiplicative characters of the ring .the interplay between additive and multiplicative characters is well understood , and we show that this connection can be exploited in quantum algorithms . in particular , we put a multiplicative character into the phase of the registers and compute the fourier transform over the additive group .the resulting phases are the inner products between the multiplicative character and each of the additive characters , a gauss sum .we hope the new tools presented here will lead to other quantum algorithms .we give algorithms for three types of hidden shift problems : in the first problem , is a multiplicative character of a finite field . given , a shifted version of , the shift is uniquely determined from and .an example of a multiplicative character of is the legendre symbol .our algorithm uses the fourier transform over the additive group of a finite field . in the second problem, is a multiplicative character of the ring .this problem has the feature that the shift is not uniquely determined by and and our algorithm identifies all possible shifts .an example of a multiplicative character of is the jacobi symbol is defined so that it satisfies the relation and reduces to the legendre symbol when the lower parameter is prime . ] . in the third problemwe have the same setup as in the second problem with the additional twist that is unknown .we also define the _ hidden coset problem _ , which is a generalization of the hidden shift problem and the hidden subgroup problem .this definition provides a unified way of viewing the quantum fourier transform s ability to capture subgroup and shift structure .some of our hidden shift problems can be reduced to the hsp , although efficient algorithms for these hsp instances are not known . assuming conjecture 2.1 from , the shifted legendre symbol problem over can be reduced to an instance of the hsp over the dihedral group in the following way .let and , where is unknown and .then the hidden subgroup is .this conjecture is necessary to ensure that will be distinct on distinct cosets of . forthe general shifted multiplicative character problem , the analogous reduction to the hsp may fail because may not be distinct on distinct cosets .however , we can efficiently generate random coset states , that is , superpositions of the form , although it is unknown how to use these to efficiently find .the issue of nondistinctness on cosets in the hsp has been studied for some groups .the existence of a time efficient quantum algorithm for the shifted legendre symbol problem was posed as an open question in .the fourier transform over the additive group of a finite field was independently proposed for the solution of a different problem in .the current paper subsumes and . building on the ideas in this paper , a quantum algorithm for estimating gauss sumsis described in .this paper is organized as follows .section [ sect : background ] contains some definitions and facts . in section [sect : idea ] , we give some intuition for the ideas behind the algorithms . 
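the inner products referred to above are gauss sums, and for a nontrivial character over a prime field every nonzero fourier component has the same magnitude, the square root of the field size. a quick numerical check with the legendre symbol (small prime chosen arbitrarily):

```python
import numpy as np

p = 103
chi = np.array([0] + [pow(x, (p - 1) // 2, p) for x in range(1, p)], dtype=float)
chi[chi == p - 1] = -1.0             # map p-1 -> -1 so chi is the Legendre symbol

# additive-group Fourier transform over Z_p; bin y holds the Gauss sum pairing chi with e^(2 pi i x y / p)
G = np.fft.fft(chi)
print(np.abs(G[1:]))                 # all equal sqrt(p) up to rounding
print(np.sqrt(p))
```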
in section [sect : finitefields ] , we present an algorithm for the shifted multiplicative problem over finite fields , of which the shifted legendre symbol problem is a special case , and show how we can use this algorithm to break certain algebraically homomorphic cryptosystems .in section [ sect : rings ] , we extend our algorithm to the shifted multiplicative problem over rings .this has the feature that unlike in the case of the finite field , the possible shifts may not be unique .we then show that this algorithm can be extended to the situation where is unknown . in section [sect : hcp ] , we show that all these problems lie within the general framework of the hidden coset problem . we give an efficient algorithm for the hidden coset problem provided satisfies certain conditions .we also show how our algorithm can be interpreted as solving a deconvolution problem using fourier transforms .we use the following notation : is the root of unity , and denotes the fourier transform of the function .an algorithm computing in , or runs in polynomial time if it runs in time polynomial in , or . in a ring or a field ,additive characters ( or ) are characters of the additive group , that is , , and multiplicative characters ( or ) are characters of the multiplicative group of units , that is , for all and .we extend the definition of a multiplicative character to the entire ring or field by assigning the value zero to elements outside the unit group .all nonzero values have unit norm and so .we ignore the normalization term in front of a superposition unless we need to explicitly calculate the probability of measuring a particular value .we will need to compute the superposition where is in the _amplitude_. [ lemma : superposition ]let be a complex - valued function defined on the set such that has unit magnitude whenever is nonzero .then there is an efficient algorithm for creating the superposition with success probability equal to the fraction of such that is nonzero and that uses only two queries to the function .start with the superposition over all , .compute into the second register and measure to see whether is nonzero .this succeeds with probability equal to the fraction of such that is nonzero .then we are left with a superposition over all such that is nonzero .compute the phase of into the phase of .this phase computation can be approximated arbitrarily closely by approximating the phase of to the nearest root of unity for sufficiently large .use a second query to to reversibly uncompute the from the second register .it is not known how to efficiently compute the quantum fourier transform over exactly .however , efficient approximations are known .we can even compute an efficient approximation to the distribution induced when is unknown as long as we have an upper bound on .we will need to approximately fourier sample to solve the unknown case of the shifted character problem in section [ sect : unknown ] .to fourier sample a state , we form the state that is the result of repeating many times .we then fourier sample from and use continued fractions to reduce the expanded range of values .this expansion into allows us to perform the fourier sampling step over a length from which we _ can _ exactly fourier sample .more formally , let be an arbitrary superposition , and be the distribution induced by fourier sampling over .let the superposition be repeated until some arbitrary integer , not necessarily a multiple of .let be the distribution induced by fourier sampling over rather 
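the continued-fraction step described above, returning the closest fraction to a sample with denominator at most a given bound, is available directly in python's standard library; the numbers below are arbitrary.

```python
from fractions import Fraction

Q, N = 2**16, 200                 # expanded sampling range and denominator bound (illustrative)
y = 23130                         # a hypothetical measured value in {0, ..., Q-1}

approx = Fraction(y, Q).limit_denominator(N)
print(approx)                     # best rational approximation k/r with r <= N
```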
than ( where and if ) .notice that is a distribution on and is a distribution on .we can now define the two distributions we will compare . let be the distribution induced on the reduced fractions of , that is , if is a sample from , we return the fraction in lowest terms . in particular ,define if .let be the distribution induced on fractions from sampling to obtain , and then using continued fractions to compute the closest approximation to with denominator at most .if and , then . the elements of a finite field ( where for some prime ) can be represented as polynomials in ] .in this representation , addition , subtraction , multiplication and division can all be performed in time .we will need to compute the fourier transform over the additive group of a finite field , which is isomorphic to .the additive characters are of the form , where is the trace of the finite field , and .we can efficiently compute the fourier transform over the additive group of a finite field .[ thm : tft ] the fourier transform can be approximated to within error in time polynomial in and .see .( independently , the efficiency of this transform was also shown in . ) for clarity of exposition we assume throughout the rest of the paper that this fourier transform can be performed exactly , as we can make the errors due to the approximation exponentially small with only polynomial overhead .the multiplicative group of a finite field is cyclic .let be a generator of .then the multiplicative characters of are of the form for all where the different multiplicative characters are indexed by .the trivial character is the character with .we can extend the definition of to by defining . on a quantum computer we can efficiently compute because the value is determined by the discrete logarithm , which can be computed efficiently using shor s algorithm .the fourier transform of a multiplicative character of the finite field is given by .let be the prime factorization of .then by the chinese remainder theorem , .every multiplicative character of can be written as the product , where is a multiplicative character of and .we say is _ completely nontrivial _ if each of the is nontrivial .we extend the definition of to all of by defining if .the character is aperiodic on if and only if all its factors are aperiodic over their respective domains .we call a _ primitive character _if it is completely nontrivial and aperiodic .hence , is primitive if and only if all its terms are primitive .it is well known that the fourier transform of a primitive is .if is completely nontrivial but periodic with period , then its fourier transform obeys , where is the primitive character obtained by restricting to .see the book by tolimieri et al . for details .we give some intuition for the ideas behind our algorithms for the hidden shift problem .we use the shifted legendre symbol problem as our running example , but the approach works more generally . 
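the circulant structure invoked in this intuition can be verified numerically: take the matrix whose rows are cyclic shifts of the legendre sequence (whether the shift is written as x+s or x-s only reverses the row order, so we use the direction that gives a standard circulant), conjugate it by the dft matrix, and check that the result is diagonal with the legendre symbol, times a constant, on the diagonal.

```python
import numpy as np

p = 31
chi = np.zeros(p)
for x in range(1, p):
    chi[x] = 1.0 if pow(x, (p - 1) // 2, p) == 1 else -1.0

# matrix whose s-th row is the Legendre sequence cyclically shifted by s
G = np.array([np.roll(chi, s) for s in range(p)])

F = np.fft.fft(np.eye(p)) / np.sqrt(p)        # unitary DFT over Z_p
D = F @ G @ F.conj().T
print(np.max(np.abs(D - np.diag(np.diag(D)))))                  # round-off level: G is diagonalized by the DFT
print(np.allclose((np.diag(D)[1:] / D[1, 1]).real, chi[1:]))    # diagonal = Legendre symbol times a constant
```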
in the shifted legendre symbol problem we are given a function such that , and are asked to find .the legendre symbol is the quadratic multiplicative character of defined : is if is a square modulo , if it is not a square , and if .the algorithm starts by putting the function value in the phase to get .assume the functions are mutually ( near ) orthogonal for different , so that the inner product approximates the delta function value .using this assumption , define the ( near ) unitary matrix , where the row is .our quantum state is one of the rows , hence .the problem then reduces to : how do we efficiently implement ? by definition , is a circulant matrix ( ) .since the fourier transform matrix diagonalizes a circulant matrix , we can write , where is diagonal . thus we can implement if we can implement .the vector on the diagonal of is the vector , the inverse fourier transform of the legendre symbol .the legendre symbol is an eigenvector of the fourier transform , so the diagonal matrix contains the values of the legendre symbol times a global constant that can be ignored .because the legendre symbol can be computed efficiently classically , it can be computed into the phase , so can be implemented efficiently . in summary , to implement for the hidden shift problem for the legendre symbol , compute the fourier transform , compute into the phase at , and then compute the fourier transform again ( it is not important whether we use or ) .figure [ fig : shift ] shows a circuit diagram outlining the algorithm for the hidden shift problem in general .contrast this with the circuit for the hidden subgroup problem shown in figure [ fig : hsp ] .[ cl] [ cc] [ cc] [ cc] [ cl]measure [ cc] [ cl] [ cc] [ cl]measurein this section we show how to solve the hidden shift problem for any nontrivial multiplicative character of a finite field .the fourier transform we use is the fourier transform over the additive group of the finite field .( shifted multiplicative character problem over finite fields ) given a nontrivial multiplicative character of a finite field ( where for some prime ) , and a function for which there is an such that for all .find .( shifted multiplicative character problem over finite fields ) [ alg : finitefield ] 1 .create .[ alg : finitefield : superposition ] 2 .compute the fourier transform to obtain .[ alg : finitefield:1stft ] 3 . for all , compute into the phase to obtain .[ alg : finitefield : conjugate ] 4 .compute the inverse fourier transform and measure the outcome .[ alg : finitefield:2ndft ] [ thm : ff ] for any finite field and any nontrivial multiplicative character , algorithm [ alg : finitefield ] solves the shifted multiplicative character problem over finite fields with probability . 1 .since only at , by lemma [ lemma : superposition ] we can create the superposition with probability .2 . by lemma [ thm : tft ]we can compute the fourier transform efficiently .the fourier transform moves the shift into the phase as described .3 . because for every nonzero , the phase change establishes the required transformation .the amplitude of is , so the probability of measuring is .the legendre symbol is a quadratic multiplicative character of defined : is if is a square modulo , if it is not a square , and if . 
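although the whole point is that a quantum computer carries these steps out on amplitudes, the linear algebra of algorithm [alg:finitefield] can be emulated classically for a small prime as a sanity check. with numpy's fft sign convention the final peak appears at the shift or at its negative modulo p, and nearly all of the weight concentrates on that single outcome.

```python
import numpy as np

p, s = 103, 42
chi = np.zeros(p)
for x in range(1, p):
    chi[x] = 1.0 if pow(x, (p - 1) // 2, p) == 1 else -1.0

# step 1: amplitudes f(x) = chi(x + s), the oracle values
psi = np.array([chi[(x + s) % p] for x in range(p)], dtype=complex)

# step 2: Fourier transform over the additive group Z_p
Psi = np.fft.fft(psi) / np.sqrt(p)

# step 3: multiply bin y by chi(y); chi(0) = 0, but that bin is already (numerically) zero
Psi *= chi

# step 4: inverse Fourier transform and "measure" the dominant outcome
out = np.fft.ifft(Psi) * np.sqrt(p)
peak = int(np.argmax(np.abs(out)))
print(peak, (p - peak) % p)                              # one of these equals the hidden shift s
print(np.abs(out[peak])**2 / np.sum(np.abs(out)**2))     # fraction of the weight on the peak, close to 1
```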
the quantum algorithm of the previous section showed us how we can determine the shift given the function .we now show how this algorithm enables us to break schemes for ` algebraically homomorphic encryption ' .a cryptosystem is _ algebraically homomorphic _ if given the encryption of two plaintexts , with , an untrusted party can construct the encryption of the plaintexts and in polynomial - time . more formally , we have the secret encryption and decryption functions and , in combination with the public add and multiplication transformations and such that and for all .we assume that the functions , , and are deterministic .the decryption function may be many - to - one . as a result the encryption of a given number can vary depending on how the number is constructed .for example , may not be equal to .in addition to the public and functions , we also assume the existence of a zero - tester , with if , and otherwise .an algebraically homomorphic cryptosystem is a cryptographic primitive that enables two players to perform noninteractive secure function evaluation .it is an open problem whether or not such a cryptosystem can be constructed .we say we can break such a cryptosystem if , given , we can recover in time polylog( ) with the help of the public functions and .the best known classical attack , due to boneh and lipton , has expected running time for the field and is based on a smoothness assumption .suppose we are given the ciphertext .test using the function .if is not zero , create the encryption via the identity , which holds for all nonzero . in particular, using and the function , we can use repeated squaring and compute in steps . clearly , from and the function we can construct for every . then , given such an , we can compute in the following way . add and , yielding , and then compute the encrypted power . ] of , giving .next , add , or and test if it is an encryption of zero , and return , or accordingly . applying this method on a superposition of states , we can create ( after reversibly uncomputing the garbage of the algorithm ) the state .we can then recover by using algorithm [ alg : finitefield ] .given an efficient test to decide if a value is an encryption of zero , algorithm [ alg : finitefield ] can be used to break any algebraically homomorphic encryption system .we can also break algebraically homomorphic cryptosystems using shor s discrete log algorithm as follows .suppose is a generator for and that we are given the unknown ciphertext .create the superposition and then append the state to the superposition in by the procedure described above .next , uncompute the value , which gives . rewriting this as and observing that the are almost orthogonal, we see that we can apply the methods used in shor s discrete log algorithm to recover and thus .in this section we show how to solve the shifted multiplicative character problem for for any completely nontrivial multiplicative character of the ring and extend this to the case when is unknown . 
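the only nontrivial arithmetic in this reduction is the exponentiation by repeated squaring; in the attack itself every multiplication below would be a call to the public mult function on ciphertexts and the final comparison would use the zero-tester, but the chain of operations is the same. the sketch uses plaintext modular arithmetic as a stand-in.

```python
def power_by_squaring(x, e, mult):
    """Compute x^e (e >= 1) using only the supplied multiplication; O(log e) mult calls."""
    result, base = None, x
    while e:
        if e & 1:
            result = base if result is None else mult(result, base)
        base = mult(base, base)
        e >>= 1
    return result

p = 103
mult = lambda a, b: (a * b) % p                     # stand-in for the homomorphic Mult on ciphertexts
x = 5
print(power_by_squaring(x, (p - 1) // 2, mult))     # 1 or p-1: Euler's criterion, i.e. the Legendre symbol
```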
unlike in the case for finite fields ,the characters may be periodic .thus the shift may not be unique .the fourier transform is now the familiar fourier transform over the additive group .( shifted multiplicative character problem over ) given , a completely nontrivial multiplicative character of , and a function for which there is an such that for all .find all satisfying for all .multiplicative characters of may be periodic , so to solve the shifted multiplicative character problem we first find the period and then we find the shift .if the period is then the possible shifts will be .( shifted multiplicative character problem over ) [ alg : ring ] 1 .find the period of .let be restricted to . [ alg :ring : period ] 1 .create .2 . compute the fourier transform over to obtain .3 . measure .compute .2 . find using the period and : [ alg : ring : shift ] 1 .create .[ alg : ring : shift : superposition ] 2 .compute the fourier transform over to obtain .3 . for all coprime to , into the phase to obtain .4 . compute the inverse fourier transform and measure .[ alg : ring : shift : measure ] algorithm [ alg : ring ] solves the shifted multiplicative character problem over for completely nontrivial multiplicative characters of in polynomial time with probability at least .note : because is completely nontrivial , is a primitive character of . 1 . 1. is nonzero exactly when so by lemma [ lemma : superposition ] we can create the superposition with probability .2 . since has period , the fourier transform is nonzero only on multiples of .3 . since , and is nonzero precisely when , when we measure we have .2 . 1 . similar to the argument above, we can create the superposition with probability .2 . the fourier transform moves the shift into the phase .3 . as in the case for the finite field, this can be done by computing the phase of into the phase of .4 . let . so .then the amplitude of after the fourier transform is so the probability of measuring is .thus the algorithm succeeds with probability , which in turn is lower bounded by .we now consider the case when is unknown .( shifted multiplicative character problem over with unknown ) + given a completely nontrivial multiplicative character , for some unknown , there is an such that for all .find all satisfying for all . given a lower bound on the size of the period of , we can efficiently solve the shifted multiplicative character problem over for unknown on a quantum computer .let be the period of and be restricted to . using the fourier sampling algorithm described in section [ sect : background : fouriersampling ], we can approximately fourier sample over . because is nonzero precisely when , this fourier sampling algorithm returns with high probability , where is coprime to .thus we can find with high probability .next , apply algorithm [ alg : ring ] to find .in this section we define the hidden coset problem and give an algorithm for solving the problem for abelian groups under certain conditions .the algorithm consists of two parts , identifying the hidden subgroup and finding a coset representative . finding a coset representative can be interpreted as solving a deconvolution problem .the algorithms for hidden shift problems and hidden subgroup problems can be viewed as exploiting different facets of the power of the quantum fourier transform . after computing a fourier transform ,the subgroup structure is captured in the magnitude whereas the shift structure is captured in the phase . 
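the jacobi symbol used as the example character here can be computed classically with the standard reciprocity-based algorithm; the modulus below is an arbitrary odd composite.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0; reduces to the Legendre symbol for prime n."""
    assert n > 0 and n % 2 == 1
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of two, using the value of (2/n)
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a                  # quadratic reciprocity for the Jacobi symbol
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

n = 77                               # illustrative odd modulus (7 * 11)
print([jacobi(x, n) for x in range(12)])
```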
in the hidden subgroup problem we measure after computing the fouriertransform and so discard information about shifts .our algorithms for hidden shift problems do additional processing to take advantage of the information encoded in the phase .thus the solution to the hidden coset problem requires fully utilizing the abilities of the fourier transform .( hidden coset problem ) given functions and defined on a group such that for some , for all in , find the set of all satisfying for all in . is given as an oracle , and is known but not necessarily efficiently computable .the answer to the hidden coset problem is a coset of some subgroup of , and is constant on cosets of .let be the set of all solutions and let be the largest subgroup of such that is constant on cosets of .clearly this is well defined ( note may be the trivial subgroup as in the shifted legendre symbol problem ) .suppose are in .then we have for all in , so is in .this shows is a contained in a coset of .since is in we must have that is contained in .conversely , suppose is in ( where is in ) .then for all in , hence is in .it follows that .while this proof was written with additive notation , it carries through if the group is nonabelian .we start by finding the subgroup .we give two different algorithms for determining , the `` standard '' algorithm for the hidden subgroup problem , and the algorithm we used in section [ sect : rings ] . in the standard algorithm forthe hidden subgroup problem we form a superposition over all inputs , compute into a register , measure the function value , compute the fourier transform and then sample .the standard algorithm may fail when is not distinct on different cosets of .in such cases , we need other restrictions on to be able to find the hidden subgroup using the standard algorithm .boneh and lipton , mosca and ekert , and hales and hallgren have all given criteria under which the standard hidden subgroup algorithm outputs even when is not distinct on different cosets of .in section [ sect : rings ] we used a different algorithm to determine because the function we were considering did not satisfy the conditions mentioned above . in this algorithmwe compute the value of into the _ amplitude _ , fourier transform and then sample , whereas in the standard hidden subgroup algorithm we compute the value of into a _register_. in general , this algorithm works when the fraction of values for which is zero is sufficiently small and the nonzero values of have constant magnitude .once we have identified , we can find a coset representative by solving the associated hidden coset problem for and where and are defined on the quotient group and are consistent in the natural way with and . for notational conveniencewe assume that and are defined on and that is trivial , that is , the shift is uniquely defined .the hidden shift problem may be interpreted as a _ deconvolution _ problem . in a deconvolution problem , we are given functions and ( the convolution of with some unknown function ) and asked to find this .let be the delta function centered at . in the hidden shift problem , is the convolution of and , that is , .finding , or equivalently finding , given and , is therefore a deconvolution problem . 
recall that under the fourier transform convolution becomes pointwise multiplication .thus , taking fourier transforms , we have and hence provided is everywhere nonzero .for the multiplication by to be performed efficiently on a quantum computer would require to have constant magnitude and be everywhere nonzero .however , even if only a fraction of the values of are zero we can still approximate division of by only dividing when is nonzero and doing nothing otherwise . the zeros of to loss of information about .[ alg : hcp : algorithm ] 1 .[ alg : hcp : superposition ] create .[ alg : hcp:1stft ] compute the fourier transform to obtain , where are the characters of the group .3 . [ alg : hcp : invertg ] for all for which is nonzero compute into the phase to obtain .[ alg : hcp:2ndft ] compute the inverse fourier transform and measure to obtain . [ algorithm ] suppose and are efficiently computable , the magnitude of is constant for all values of in for which is nonzero , and the magnitude of is constant for all values of in for which is nonzero .let be the fraction of in for which is nonzero and be the fraction of in for which is nonzero .then the above algorithm outputs with probability . 1 . by lemma [ lemma : superposition ] we can create the superposition with probability .2 . the fourier transform moves the shift into the phase .3 . because has constant magnitude , for values where is nonzero , for some constant .so we can perform this step by computing the phase of into the phase . for the values where is zero we can just leave the phase unchanged as those terms are not present in the superposition .4 . let .then the amplitude of is so we measure with probability .thus the algorithm succeeds in identifying with probability and only requires one query of and one query of .we show how the hidden shift problems we considered earlier fit into the framework of the hidden coset problem . in the shifted multiplicative character problem over finite fields, is the additive group of , and is trivial since the shift is unique for nontrivial . in the shifted multiplicative character problem over , is the additive group of , and is the subgroup , where ( which is a factor of ) is the period of . in the shifted periodmultiplicative character problem over for unknown , is the additive group of , and is the infinite subgroup .we would like to thank the anonymous referee who pointed out the application of shifted legendre symbol problem to algebraically homomorphic cryptosystems and umesh vazirani , whose many suggestions greatly improved this paper .we also thank dylan thurston and an anonymous referee for pointing out that algebraically homomorphic cryptosystems can be broken using shor s algorithm for discrete log .thanks to lisa hales for helpful last minute suggestions .
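the deconvolution reading is easy to reproduce numerically: divide the fourier transform of the shifted signal by that of the reference wherever the latter is nonzero, and the inverse transform of the resulting pure phase is a delta function sitting at the shift (up to the usual sign convention of the transform). the signal below is again the legendre symbol with an arbitrary shift.

```python
import numpy as np

p, s = 103, 17
f = np.zeros(p)
for x in range(1, p):
    f[x] = 1.0 if pow(x, (p - 1) // 2, p) == 1 else -1.0
h = np.roll(f, s)                            # h(x) = f(x - s); the opposite sign convention just mirrors the peak

F, H = np.fft.fft(f), np.fft.fft(h)
ratio = np.zeros(p, dtype=complex)
mask = np.abs(F) > 1e-9                      # divide only where the reference spectrum is nonzero
ratio[mask] = H[mask] / F[mask]

recovered = int(np.argmax(np.abs(np.fft.ifft(ratio))))
print(recovered, s)                          # the peak sits at the hidden shift
```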
almost all of the most successful quantum algorithms discovered to date exploit the ability of the fourier transform to recover subgroup structure of functions , especially periodicity . the fact that fourier transforms can also be used to capture shift structure has received far less attention in the context of quantum computation . in this paper , we present three examples of `` unknown shift '' problems that can be solved efficiently on a quantum computer using the quantum fourier transform . we also define the _ hidden coset problem , _ which generalizes the hidden shift problem and the hidden subgroup problem . this framework provides a unified way of viewing the ability of the fourier transform to capture subgroup and shift structure .
sandpile models have played an important role in developing our understanding of self - organized criticality .one important notion is that of universality , the idea that quantities such as critical exponents and scaling functions are independent of microscopic details of the model .this has been studied in the context of individual models , but few have determined general conditions for models to belong to a particular universality class . in the following , we present details of the solution of a general directed one - dimensional sandpile model introduced in which is a generalisation of a model studied in .we use a central limit theorem for dependent random variables to determine the precise microscopic conditions for scaling of the moments of the avalanche - size probability .we also argue that there is an -dependent crossover length , such that for systems with size branching process behaviour is observed .the avalanche size statistics are calculated by mapping the model to the problem of finding the area under a brownian curve with an absorbing boundary at the origin , that is , if is the trajectory of a brownian curve such that if for some then . in the large limit ,the avalanche size statistics are identical to those for the area under the brownian curve after a `` time '' equal to ; . this motivated us to calculate the moment generating function for this area , which is an interesting problem in its own right as there have been some recent interest in physical applications of the statistics of the area under brownian curves .the model we study is on a one - dimensional lattice of length where each lattice site , , may be in one of states .the state of site is denoted , which may take values and this is interpreted as the number of particles on site . at the beginning of each timestep a particle is added to site : .this site may topple a number of times , each toppling redistributing one particle from site to site : , .when site receives a particle it may undergo topplings , redistributing particles to site , which in turn may topple , and so on until either a site does not topple , or site topples where the redistributed particles leave the system and the time step ends .the avalanche size , is the total number of topplings which occur during a single time step .the toppling rules are therefore defined through choosing the probability that a site with particles will topple so many times upon receiving a particle .the only restrictions on the topplings are : ( i ) must remain in the range ] for and equal to zero otherwise , that is , where the sum is over all which satisfy both and and is the probability that a site with particles topples times on receiving a particle .note that particle conservation requires this section we find the steady state , , which is the eigenvector of with eigenvalue 1 .consider the single site operator , we begin by finding the eigenvectors and eigenvalues defined by where takes values from to . from the properties of the and the normalisation condition we find satisfies andso is the left eigenvector of with eigenvalue .the corresponding right eigenvector therefore satisfies which determines the precise form of the eigenvector . 
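to make the dynamics concrete, here is a monte carlo sketch of a two-state directed model with a toppling rule invented purely for illustration (it is not one of the realizations studied here, though it respects the same structure: every toppling passes exactly one particle to the right, a site holding the maximal number of particles must topple at least once, and the avalanche can be processed site by site because the model is directed).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.5, 0.5            # illustrative toppling probabilities, not the paper's values

def topplings(z, incoming):
    """Number of times a two-state site with height z topples after receiving `incoming` grains."""
    m = 0
    for _ in range(incoming):
        if z == 0:
            if rng.random() < alpha:   # pass the grain straight on (topple once) ...
                m += 1
            else:                      # ... or keep it
                z = 1
        else:                          # full site: must topple at least once
            if rng.random() < beta:    # relax completely: topple twice, pass two grains
                m += 2
                z = 0
            else:                      # topple once, stay full
                m += 1
    return m, z

def avalanche(z, L):
    """Drive the first site with one grain; return the avalanche size (total topplings)."""
    s, grains = 0, 1
    for i in range(L):
        if grains == 0:
            break
        m, z[i] = topplings(z[i], grains)
        s += m
        grains = m                     # every toppling sends one grain to the next site
    return s

L, n_avalanches = 256, 20000
z = np.zeros(L, dtype=int)
for _ in range(2000):                  # discard a transient so statistics are taken in the steady state
    avalanche(z, L)
sizes = np.array([avalanche(z, L) for _ in range(n_avalanches)])
print(sizes.mean(), (sizes**2).mean())
```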
as the eigenvectors must be normalised , , we may write where is the probability that a site contains particles in the stationary state and .we can not , however , determine any more precisely without details of the and these will have to be calculated separately in each case .if the matrix is a regular markov matrix , that is , there exists an integer such that {ij } > 0 ] and the equation for the width , for , for some , we require which follows from normalisation of and the fact that has no negative terms .this implies that , for all for which , ^{n m^ * } { \vert e_z \rangle}_1 = { \vert e_z \rangle}_1\label{eq : cont}\ ] ] where is an integer .however , since is regular , there exists an integer such that there is only one vector , , satisfying ^{n } { \vert 0 \rangle}_1 = { \vert 0 \rangle}_1\ ] ] for any . if there are more than one values of for which this contradicts , and so is never zero . if , however , we have a single value , such that , then and does not lead to a contradiction .however , in this case the dynamics are trivial as the steady state has all sites with exactly particles and any particle added to the system will pass through immediately with exactly topplings .we now come to the main result we need in order to determine the avalanche statistics for the directed sandpile , which is that it may be mapped exactly onto a random walker on with an absorbing boundary at the origin . after addinga particle at the beginning of a time step , site will topple times with probability .these particles are redistributed to site , which will topple times with probability .the probability of site toppling times , independent of , which we denote , is which follows from . defining as the probability that site topples times independent of previous topplings , we have this is a random walker on the interval with the probability of hopping from to equal to .there is an absorbing boundary at since any non - toppling site stops the avalanche .if we denote the trajectory , , then the avalanche size is with , which is the area under the trajectory .note that the random walker described by has jumps which are correlated since the probability of hopping from to depends explicitly on and , and not simply the difference .this means we must be careful if we wish to use the results for the uncorrelated random walker , or its continuum limit .however , in ref . , the author remarks that for martingales with a fixed maximum jump size exhibiting stationarity and ergodicity , there is a quantity , such that \ ! = \ !! \int^x_{-\infty}\!\ !e^{-\frac{1}{2}y^2}\ \rmd y\label{eq : normal}\ ] ] where is the variance of the step in the process . 
to apply this result, we extend the random walker described by to the full space , for we may add in the effect of the boundaries later by use of mirror charges .as we have assumed the existence of a unique stationary state and have proven that and , all that is left to prove is ergodicity .this is equivalent to showing that the set of recurrent states of the random walker are irreducible , that is , the probability of reaching any recurrent state from any other recurrent state is non - zero .two states , and which have this property are said to intercommunicate , denoted .we consider the fact that is assumed to be regular , in which case there exists an such that ^m { \vert e_{z ' } \rangle } > 0 ] and .hence , all states intercommunicate since for all .we also note that and which follow respectively because the avalanche should always be able to finish in an infinite system and arbitrarily large avalanches can be initiated from a single added particle .when we consider states , we note that there can only be a finite number of these which do not intercommunicate with state 1 .since there is a unique stationary state which includes all states , these non - intercommunicating states must be transient and ergodicity follows .hence , we have now proven that for a toppling rule obeying the restrictions ( i)-(iv ) with a unique stationary state , ( i.e is regular ) , the distribution of the random walker on will approach the normal distribution given by .this means that , for long times , such a random walker with dependent jump sizes will have the statistics of ordinary diffusion with diffusion constant .hence , by adding mirror charges to remove paths that cross , we are able to calculate the large statistics of avalanches directly from the area under the brownian curve , which is our justification for calculating moments in the continuum limit in the next section .of course , we could have simply gone ahead and carried out the calculations in the continuum without the above analysis and demonstrated that they correctly modelled the numerics .however , had we done so we would not have had a precise idea of how trustworthy these calculations were and where we expect them to break down .having proven the correspondence between avalanches and a random walk of independent identically distributed step sizes , we proceed to calculate the moment generating function for the area under the brownian curve .the authors are aware of only one study which investigates the finite - size effects due to stopping the curve after some time , which corresponds to the finite size of the sandpile and since our analysis goes further than that in ref . , we present it here in some detail . the following calculation will be carried out using notation and language suitable for the random walker description of the problem .hence , the brownian curve will be described by a trajectory where is interpreted as `` space '' and is `` time '' with the diffusion constant having units .we do this because the path integral approach we are about to employ is more intuitive in this language . as existing on the entire interval and we measure the area up to the point , .hence , what is a boundary in the sandpile picture ( the open boundary at site ) is not considered a boundary in the brownian curve picture . 
]the connection to the sandpile is made by noting that the number of topplings of site is equal to and the system size , , is equal to time at which we stop the curve .[ fig : brownian ] we begin with the generating function where is the probability that a random walker starting at has the area under its trajectory equal to after time .if we denote the trajectory of the walker , then the curves contributing to are all those which satisfy note that we have an absorbing boundary at , such that if for any then .hence , there are two contributions to : that due to trajectories which do not cross the absorbing boundary , and those which cross the absorbing boundary at some time , see figure [ fig : brownian ]. we shall treat these separately , writing where is the probability that a trajectory beginning at passes through with area and is the probability that a trajectory beginning at first touches the absorbing boundary at time , with area . using standard path integral methods , we may write down \rmd t\right)\ ] ] where . taking the integral over we find that the first term on the right hand side of is \rmd t\right).\ ] ] following the lines of ref . , we note that this is simply the path integral for a brownian particle with a linear potential for and an infinite potential at .hence , we write this term as where .the resulting equation of motion , is easily solved using airy functions which can be used to form an orthonormal basis on , where are the zeros of the airy function , , etc . in a similar way , for the second term on the right hand side of , we have since this is the current of diffusing particles with area under the curve equal to a , leaving the system at time .hence in order to proceed , we use the fact that the leading order dependence for each moment come from terms linear in . in ref . it was shown that if the moment generating function is written , then may be determined recursively where is the propagator for the diffusion equation with appropriate boundaries , if we define the `` current '' then if is non - zero , is proportional to to lowest order . in this case ^{3/2 } } e^{-\frac{x'^2}{4d(l - t)}}\]]b and so for since the integrand is always positive definite .hence , all moments are proportional to to lowest order .the fact that the terms linear in will also be the highest order in follows from dimensional analysis .if we write down the expansion of a moment in powers of then each term must have the same dimension . by considering the dimensions of the available quantities ,such an expansion must take the form where are simply more coefficients with no , or dependence .hence , the term of lowest order in will have the highest order dependence .taylor expanding and to first order about , note , however , that this approximation is not valid for the zeroth moment , since it is not proportional to . and are now in similar forms to equations appearing in ref .they calculate the quantity where the have been calculated in ref .we simply quote the first few values , apart from a few multiplicative prefactors , differs from only by the fact that the former uses .we therefore have to reinsert the diffusion constant , , which they assumed equal to , but this is easily done by considering the dimensions of the results .we note that is a dimensionless function , and so where is chosen such that is dimensionless .it is then easy to show that and where we may carry out an identical procedure for . the equivalent quantity in ref . is where , again , have been calculated in ref . 
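the growth of the moments with system size can be checked by brute force: start a discrete walker a little above the absorbing origin, accumulate the area for at most L steps, and compare moments across different L. on our reading of the dimensional argument above, the n-th moment should grow like L to the power (3n-1)/2, so the first and second moments are divided by L and by L^(5/2) below; the step distribution, the starting height (standing in for the first site's handful of topplings) and the sample size are arbitrary, and the check is only qualitative.

```python
import numpy as np

rng = np.random.default_rng(1)

def area_under_walk(L, x0=1):
    """Area under a +/-1 random walk started at x0, absorbed at 0, run for at most L steps."""
    x, area = x0, 0
    for _ in range(L):
        if x == 0:                       # absorbing boundary: the avalanche has stopped
            break
        area += x
        x += 1 if rng.random() < 0.5 else -1
    return area

n = 4000                                 # crude; increase for cleaner numbers
for L in (64, 256, 1024):
    a = np.array([area_under_walk(L) for _ in range(n)], dtype=float)
    # if <A^k> ~ L^((3k-1)/2), both ratios should be roughly independent of L
    print(L, a.mean() / L, (a**2).mean() / L**2.5)
```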
, the first few values being following the same steps as above we find where hence we have where and the first few values are the first two values are in perfect agreement with those derived in ref . , and the authors are unaware of any previous calculations of for .thus we may immediately identify the exponents and and the amplitudes allow us to compute universal amplitude ratios , which we will use later to compare the numerics against theory .the convergence of to the normal distribution occurs only as , and hence the results above are only valid for . in using the brownian curve instead of the exact curve described by have taken a hydrodynamic limit and therefore thrown away any information about the statistics of the process for small .it is natural , therefore , to ask how we expect the results to differ in this regime .we propose the existence of an dependent crossover length , , such that the above scaling analysis is valid for .we argue that for smaller systems , , we expect to see scaling corresponding to the branching process . consider adding a particle to the first site . if the probability that the site has particles , , has support for all $ ] , then for it is likely that . in this regimewe may assume that the number of times the site topples due to this added particle , which we denote , is largely independent of .site therefore receives particles , each of which may cause it to topple times , with the total number of topplings of site 2 , . while remains far from and , the will be largely uncorrelated and by continuing this argument to more sites , we see that while remain small each site will topple nearly independently .however , as we continue through the system to higher the will begin to see large fluctuations and the avalanches will become correlated , assuming the scaling of the previous section .hence , we argue that for systems with , the avalanches will resemble those of the uncorrelated branching process with exponents and . for larger systems , ,temporal correlations emerge in the avalanches and , .the fact that the above argument relies on realisations where has support for a large range of indicates that the crossover length depends on the details of the toppling rules and as such can not be thought to have any `` universal '' qualities .indeed , we have not specified how the toppling rules in a realisation should be altered as is increased , and so it is impossible to say anything _ a priori _ about the behaviour of .we now support our claims with numerics by demonstrating that the correct scaling ( with crossovers - see previous section ) occurs for a particular realisation of this directed sandpile model . in order to study the scaling we choose a realisation such that it is clear how to generalise to higher .the only remaining difficulty is to find the correct variance to put into the equations when we come to compare with numerics . 
in all that follows ,we use as we find that it fits the data very well .we compare the scaling predicted above with numerics from a realisation with the following toppling rules : a site , which receives a particle will topple 1,2 or 3 times with probability or will not topple with probability .a site with will topple once with probability , a site with will topple once with probability and twice with probability and a site with will topple once with probability , and 2 or 3 times each with probability .a site with has to topple at least once in accordance with restriction ( i ) .we expect to scale with the system size where is a correlation length with some ( as yet unknown ) -dependence .these results have been confirmed and are shown in figure [ fig : numscaling ] .we also analyse the moment ratios defined by it is a straightforward calculation to show that , for an avalanche probability given by approach universal values for .these values are simply ratios of the amplitudes calculated in [ sec : moments ] , this agrees with the numerics , as illustrated in figure [ fig : ratios ] for which appears to converge to a universal value of , in excellent agreement with the theoretical prediction as well as numerics for a different realisation published elsewhere .this supports our claim that the limits of the are indeed universal .note also that has a notably different dependence in this model than in the one presented in . in this case , saturates to a constant value for large , meaning that for large the crossover occurs at the same value of .this is because the support of is finite for . .the errors for both graphs were calculated using efron s jackknife .( a ) the rescaled second moment vs. system size . for large systemsthis is a constant for all values of .( b ) the moment ratio . for large this approaches the constant value for all values of . ] .the errors for both graphs were calculated using efron s jackknife and are approximately the same size as the symbols .( a ) the moment ratio ( a ) and ( b ) . for large approach the constant values and respectively for all values of .the dashed lines indicate the exact values and , in excellent agreement with the numerics . ]we have found the stationary state avalanche - size distribution for a general -state directed sandpile model .the avalanches can be mapped onto a random walk of dependent random variables and , using an applicable central limit theorem , we have shown that under a broad set of conditions the moments scale with and .we also note that this value of agrees precisely with that obtained in , which calculates the probability distribution in the infinite system size limit .we have also calculated the moment generating function for the area under a random walker with an absorbing boundary , and found a relation for the moment amplitudes in terms of those already known for other brownian processes .in this section we calculate the steady state properties of an model , which is a generalisation of the model studied in ref . , and compare predictions to numerical simulation . 
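the error bars quoted here come from efron's jackknife; for a nonlinear statistic such as a moment ratio the delete-one recipe is a few lines. the ratio and the synthetic heavy-tailed sample below are only placeholders for the measured avalanche-size data.

```python
import numpy as np

def jackknife_error(data, statistic):
    """Delete-one jackknife standard error of `statistic` evaluated on `data`."""
    n = len(data)
    thetas = np.array([statistic(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((thetas - thetas.mean())**2))

rng = np.random.default_rng(2)
sizes = rng.pareto(2.5, size=500) + 1.0          # made-up heavy-tailed sample standing in for avalanche sizes
ratio = lambda s: np.mean(s**2) / np.mean(s)**2  # an example moment-ratio statistic
print(ratio(sizes), '+/-', jackknife_error(sizes, ratio))
```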
for , the most general model we can write down , which obeys the rules ( i)-(iv ) , is where is the probability that a site with does not topple on receiving a particle and is the probability that a site with topples twice on receiving a particle .hence , the single site toppling matrix is \alpha & x - \beta x \end{array}\right).\ ] ] we now proceed to calculate the steady state properties of this model , following the prescription given in [ sec : stationary ] . has eigenvalues , and eigenvectors hence , the eigenvector for the stationary state is valid for . from these results it follows immediately andhence , from and using , in perfect agreement with numerics , see figure [ fig : q_1_inf ] and figure [ fig : manna ] .however , it should be noted that for and both approaching 1 the random walker it describes will spend more and more time on either only odd or only even sites .hence , it will take longer times ( larger system sizes ) for the statistics to reach the asymptotic values and so we expect very strong corrections to scaling for .when , we no longer have a unique stationary state and so scaling is not observed . for as a function of with data ( squares ) compared against the values predicted by ( solid line ) .the data was obtained by measuring for large and estimating the asymptotic value .comparison is made across the whole range of and the inset shows data in the vicinity .note that the agreement is excellent right up to .typical error bars for the numerical data are the size of the squares . ] for ( a ) as a function of for ( ) , 1024 ( ) , 4096 ( ) , 16385 ( ) , 65536 ( ) and 131072 ( ) .( b ) rescaled second moment for vs inverse system size . the dashed line is the theoretical value .the measurements appear to converge towards the theoretical line large , supporting our claim that the deviation is a finite size effect . ]the amplitudes and appearing in [ sec : moments ] can be calculated using the methods outlined in refs .for we define where are constructed through the following recursion relations similarly , for the putting these together and rearranging slightly , we find that the amplitudes are given by we tabulate the first 10 values of , along with the universal amplitude ratios in table b1 . n & & + 1 & 1 & 1 + 2 & & 1 + 3 & & + 4 & & + 5 & & + 6 & & + 7 & & + 8 & & + 9 & & + 10 & & + [ tab : values ]99 g. pruessner and h. j. jensen , phys . rev . lett . * 91 * , 244303 ( 2003 ) .g. pruessner , j. phys .a * 37 * , 7455 ( 2004 ) .p. bak , c. tang , and k. wiesenfeld , phys .rev . lett . * 59 * , 381 ( 1987 ) ; phys . rev .a * 38 * , 364 ( 1988 ) .d. dhar and r. ramaswamy , phys .lett . * 63 * , 1659 ( 1989 ) .d. dhar , phys .lett . * 64 * , 1613 ( 1990 ) ; cond - mat/9909009 ( 1999 ) ; physica a * 263 * , 4 ( 1999 ) . v. b. priezzhev , e. v. ivashkevich , a. m. povolotsky , and chin - kun hu , phys .lett . * 87 * , 084301 ( 2001 ) .mohanty and d. dhar , phys .lett . * 89 * , 104303 ( 2002 ) .m. paczuski and s. boettcher , phys .* 77 * , 111 ( 1996 ) .k. christensen , a. corral , v. frette , jens feder and torstein jssang , phys .lett . * 77 * , 107 ( 1996 ) ; m. stapleton and k. christensen , phys .e * 72 * 066103 ( 2005 ) b. m. brown , ann .stat . * 42 * , 59 ( 1971 ) .s. n. majumdar and a. comtet , j. stat . phys . *199 * 777 ( 2005 ) m. j. kearney and s. m. majumdar j. phys .a * 38 * 4097 ( 2005 ) m. j. kearney j. phys .a * 37 * 8421 ( 2004 ) j. rudnick and g. gaspari , _ elements of the random walk _( cambridge , 2004 ) . o. valle and m. 
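whatever entries end up in the single-site matrix, the stationary state is just its eigenvector with eigenvalue 1 and can be read off numerically. the sketch below does this for the illustrative two-state rule of our earlier simulation snippet; the matrix entries are ours, not the parameters of this appendix.

```python
import numpy as np

alpha, beta = 0.5, 0.5                 # same illustrative probabilities as in the earlier sketch

# column-stochastic single-site matrix T[z', z]: distribution of the height after one more received grain
T = np.array([[alpha,     beta],
              [1 - alpha, 1 - beta]])

vals, vecs = np.linalg.eig(T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
v /= v.sum()
print(v)                               # stationary single-site occupation probabilities (p_0, p_1)
```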
soares , _ airy functions and applications to physics _ ( imperial college press , 2004 ) . m. perman and j. a. wellner , ann . appl . probab . * 6 * , 1091 ( 1996 ) . t. e. harris , _ the theory of branching processes _ ( dover , 1963 ) . b. efron , _ the jackknife , the bootstrap and other resampling plans _ ( siam , 1982 ) .
we derive the steady state properties of a general directed `` sandpile '' model in one dimension . using a central limit theorem for dependent random variables we find the precise conditions for the model to belong to the universality class of the totally asymmetric oslo model , thereby identifying a large universality class of directed sandpiles . we map the avalanche size to the area under a brownian curve with an absorbing boundary at the origin , motivating us to solve this brownian curve problem . thus , we are able to determine the moment generating function for the avalanche - size probability in this universality class , explicitly calculating amplitudes of the leading order terms .
with the advancement of computer technologies and 3d acquisition techniques , 3d objects nowadays are usually captured and modeled by triangular meshes for further usages . a large number of applications of triangular meshes can be found in computer graphics and computer - aided design .however , working on general meshes is a difficult task because of their complicated geometry .the complicated geometry hinders applications such as surface registration , morphing and texture mapping .to overcome this problem , one common approach is to parameterize the surfaces onto a simple parameter domain so as to simplify the computations .for instance , textures can be designed on the simple domain and then mapped back onto the original surfaces .another example that usually makes use of parameterization is surface registration . instead of directly computing the registration between two convoluted surfaces , one can perform the registration on the simple parameter domain , which is much easier .it is also common to perform surface remeshing with the aid of parameterizations . with the development of the computer industry , the problem of finding a good parameterization method is becoming increasingly important . to make a parameterization useful and applicable, one should seek for a method that minimizes certain types of distortions .in particular , it is desirable to minimize the angular distortions of the 3d meshes ._ angle preserving parameterizations _ , also known as _ conformal parameterizations _, effectively preserve the local geometry of the surfaces . hence , in this paper , our goal is to develop an efficient conformal parameterization algorithm .the choice of the parameter domain is also a key factor in deciding the parameterization scheme . for simply - connected open surfaces ,one popular choice of the parameter domain is the unit disk . using the unit disk as a parameter domain is advantageous in the following two aspects .firstly , the existence of the conformal parameterization is theoretically guaranteed . by the uniformization theorem ,every simply - connected open surface is conformally equivalent to the open unit disk .secondly , unlike free and irregular shapes on the plane , a consistent circular boundary of the parameter domain facilitates the comparisons and mappings between different surfaces . in real applications , besides the quality of the parameterization result , it is also important to consider the computational efficiency of the parameterization algorithm . in particular , a fast algorithm is desired so that the computation can be completed within a short time . in this paper , we develop an efficient algorithm for the disk conformal parameterization of simply - connected open surfaces . to achieve the efficiency , we first transform a topologically disk - like surface to a genus-0 closed surface by a double covering technique .then we can apply a fast parameterization algorithm for genus-0 closed surfaces to compute a spherical parameterization of the double covered surface . note that although the size of the problem is doubled by double covering , the computational efficiency is preserved because of the symmetry of the double covered surface .the spherical parameterization , together with a suitable mbius transformation and the stereographic projection , provides us with an almost circular planar parameterization for the original surface . 
a normalization technique followed by a composition of quasi - conformal mapsare then used for obtaining a bijective disk conformal parameterization .the bijectivity of the parameterization is supported by quasi - conformal theories .the entire algorithm only involves solving sparse linear systems and hence the computation of the disk conformal parameterization is greatly accelerated .specifically , our proposed method speeds up the computation of disk conformal parameterizations by over 60% while attaining accuracy comparable to the state - of - the - art approaches .in addition , our proposed method demonstrates robustness to highly irregular triangulations .the rest of the paper is organized as follows . in section[ previous ] , we review the previous works on surface parameterizations . in section [ contributions ] , we highlight the contribution of our work . our proposed algorithm is then explained in details in section [ main ] .the numerical implementation of the algorithm is introduced in section [ implementation ] . in section [ experiments ] , we present numerical experiments to demonstrate the effectiveness of our proposed method .the paper is concluded in section [ conclusion ] .|c45mm|c|c|c|c| methods & boundary & bijective ? & iterative ? + shape - preserving & fixed & yes & no + mips & free & yes & yes + abf / abf++ & free & local ( no flips ) & yes + lscm / dncp & free & no & no + holomorphic 1-form & fixed & no & no + mean - value & fixed & yes & no + yamabe riemann map & fixed & yes & yes + circle patterns & free & local ( no flips ) & yes + genus-0 surface conformal map & free & no & yes + discrete ricci flow & fixed & yes & yes + spectral conformal & free & no & no + generalized ricci flow & fixed & yes & yes + two - step iteration & fixed & yes & yes + with a large variety of real applications , surface parameterization has been extensively studied by different research groups .readers are referred to for surveys of mesh parameterization methods . in this section, we give an overview of the works on conformal parameterization .a practical parameterization scheme should retain the original geometric information of a surface as far as possible .ideally , the isometric parameterization , which preserves geometric distances , is the best parameterization in the sense of geometry preserving .however , isometric planar parameterizations only exist for surfaces with zero gaussian curvature . hence, it is impossible to achieve isometric parameterizations for general surfaces . a similaryet far more practical substitute is the conformal parameterization .conformal parameterizations are angle preserving , and hence the infinitesimal shape is well retained .for this reason , numerous studies have been devoted to surface conformal parameterizations .the existing algorithms for the conformal parameterizations of disk - type surfaces can be divided into two groups , namely , the free - boundary methods and the fixed - boundary methods . for the free - boundary methods , the planar conformal parameterization results are with irregular shapes . in ,hormann and greiner proposed the mips algorithm for conformal parameterizations of topologically disk - like surfaces .the boundary develops naturally with the algorithm . 
in , sheffer andde sturler proposed the angle based flattening ( abf ) method to compute conformal maps , based on minimizing the angular distortion in each face to the necessary and sufficient condition of a valid 2d mesh ._ extended the abf method to abf++ , a more efficient and robust algorithm for planar conformal parameterizations . a new numerical solution technique , a new reconstruction scheme and a hierarchical technique are used to improve the performance . proposed the least - square conformal maps ( lscm ) to compute a conformal parameterization by approximating the cauchy - riemann equation using the least square method . in ,desbrun _ et al ._ proposed the discrete , natural conformal parameterization ( dncp ) by computing the discrete dirichlet energy . in ,kharevych _ et al ._ constructed a conformal parameterization based on circle patterns , which are arrangements of circles on every face with prescribed intersection angles . applied a double covering technique and an iterative scheme for genus-0 surface conformal mapping in to obtain a planar conformal parameterization . in , mullen __ reported a spectral approach to discrete conformal parameterizations , which involves solving a sparse symmetric generalized eigenvalue problem .when compared with the free - boundary approaches , the fixed - boundary approaches are advantageous in guaranteeing a more regular and visually appealing silhouette .in particular , it is common to use the unit circle as the boundary for the conformal parameterizations of disk - type surfaces .numerous researchers have proposed brilliant algorithms for disk conformal parameterizations .floater proposed the shape - preserving parameterization method for surface triangulations by solving linear systems based on convex combinations . in ,floater improved the parameterization method using a generalization of barycentric coordinates . in ,gu and yau constructed a basis of holomorphic 1-forms to compute conformal parameterizations . by integrating the holomorphic 1-forms on a mesh, a globally conformal parameterization can be obtained . in ,luo proposed the combinatorial yamabe flow on the space of all piecewise flat metrics associated with a triangulated surface for the parameterization . in , jin _et al . _ suggested the discrete ricci flow method for conformal parameterizations , based on a variational framework and circle packing ._ generalized the discrete ricci flow and improved the computation by allowing two circles to intersect or separate from each other , unlike the conventional circle packing - based method . in ,choi and lui presented a two - step iterative scheme to correct the conformality distortions at different regions of the unit disk .table [ previouswork ] compares several previous works on the conformal parameterizations of disk - type surfaces .our proposed algorithm involves a step of spherical parameterization .various spherical parameterization algorithms have been developed in the recent few decades , such as . among the existing algorithms ,we apply the fast spherical conformal parameterization algorithm proposed in .more details will be explained in section [ main ] .in this paper , we introduce a linear formulation for the disk conformal parameterization of simply - connected open surfaces . 
unlike the conventional approaches , we first find an initial map via a parameterization algorithm for genus-0 closed surfaces , with the aid of a double covering technique .the symmetry of the double covered surface helps retaining the low computational cost of the problem .after that , we normalize the boundary and apply quasi - conformal theories to ensure the bijectivity and conformality . our proposed algorithm is advantageous in the following aspects : 1 .our proposed method only involves solving a few sparse symmetric positive definite linear systems of equations .it further accelerates the computation of disk conformal parameterizations by over 60% when compared with the fastest state - of - the - art approach .2 . with the significant improvement of the computational time , our proposed method possesses comparable accuracy as of the other state - of - the - art approaches .the bijectivity of the parameterization is supported by quasi - conformal theories .no foldings or overlaps exist in the parameterization results .our proposed method is highly robust to irregular triangulations .it can handle meshes with very sharp and irregular triangular faces .in this section , we present our proposed method for disk conformal parameterizations of simply - connected open surfaces in details . a map between two riemann surfaces is called a _ conformal map _ if it preserves the inner product between vectors in parameter space and their images in the tangent plane of the surface , up to a scaling factor .more specifically , there exists a positive scalar function such that .in other words , conformal maps are angle preserving .the following theorem guarantees the existence of several special types of conformal maps .[ uniformization theorem ] every simply connected riemann surface is conformally equivalent to exactly one of the following three domains : a. the riemann sphere , b. the complex plane , c. the open unit disk .see . with this theoretical guarantee , our goal is to efficiently and accurately compute a conformal map from a topologically disk - like surface to the open unit disk .|c30mm|c40mm|c40mm| features & two - step iterative approach & our proposed method + type of input surfaces & simply - connected open surfaces & simply - connected open surfaces + initial map & disk harmonic map & double covering followed by a fast spherical conformal map + enforcement of boundary when computing the initial map & yes & no + method for correcting the conformality distortion & step 1 : use the cayley transform and work on the upper half plane step 2 : iterative reflections along the unit circle until convergence & one - step normalization and composition of quasi - conformal maps + output & unit disk & unit disk + bijectivity & yes & yes + iterations required ?& yes & no + before explaining our proposed algorithm in details , we point out the major differences between our proposed method and the two - step iterative approach .table [ difference ] highlights the main features of our proposed method and the two - step iterative approach for disk conformal parameterizations .the two - step iterative approach makes use of the disk harmonic map as an initial map , with the arc - length parameterized circular boundary constraint .this introduces large conformality distortions in the initial map . 
to correct the conformality distortion ,two further steps are required in .first , the cayley transform is applied to map the initial disk onto the upper half plane for correcting the distortion at the inner region of the disk .then , iterative reflections along the unit circle are applied for correcting the distortion near the boundary of the disk until convergence .in contrast , our proposed fast method primarily consists of only two stages . in the first stage , we find an initial planar parameterization via double covering followed by a spherical conformal map .since there is no enforcement of the boundary in computing the initial planar parameterization , the conformality distortion of our initial map is much lower than that by the disk harmonic map . in the second stage, we enforce the circular boundary , and then alleviate the conformality distortion as well as achieving the bijectivity using quasi - conformal theories .the absence of iterations in our proposed algorithm attributes to the significant enhancement in the computational time when compared with the two - step iterative approach . in the following ,we explain the two stages of our proposed algorithm in details . instead of directly computing the map from a simply - connected open surface to the unit disk , we tackle the problem by using a simple double covering technique .the double covering technique was also suggested in to compute conformal gradient field of surfaces with boundaries . in the following ,we discuss the construction in the continuous setting .specifically , we construct a genus-0 closed surface by the following method .first , we duplicate and change its orientation . denote the new copy by .then we identify the boundaries of the two surfaces : by the above identification , the two surfaces and are glued along the two boundaries .note that here we do not identify the interior of the two surfaces and . as a result, a closed surface is formed .denote the new surface by .it can be easily noted that since and are simply - connected open surfaces , the new surface is a genus-0 closed surface .more explicitly , denote by and the gaussian curvature and geodesic curvature .assume that we slightly edit the boundary parts of and so that is smooth .then by the gauss - bonnet theorem , we have and hence , we have therefore , the new surface has euler characteristic , which implies that it is a genus-0 closed surface . as a remark , in the discrete case , the unsmooth part caused by the double covering does not cause any difficulty in our algorithm since we are only considering the angle structure of the glued mesh .the details of the combinatorial argument are explained in section [ implementation ] . after obtaining by the abovementioned double covering technique, we look for a conformal map that maps to the unit sphere . by the uniformization theorem ,every genus-0 closed surface is conformally equivalent to the unit sphere .hence , the existence of such a conformal map is theoretically guaranteed .in , choi _ et al ._ proposed a fast algorithm for computing a conformal map between genus-0 closed surfaces and the unit sphere .the algorithm consists of two steps , and in each step one sparse symmetric positive definite linear system is to be solved . 
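a minimal sketch of the double covering construction described above , assuming the mesh is given as vertex coordinates , triangular faces and a list of boundary vertex indices ( hypothetical input format ; any mesh data structure would do ) : interior vertices are duplicated , boundary vertices are identified , and the copied faces have their vertex order reversed so that the glued mesh is a consistently oriented genus-0 closed surface .

```python
import numpy as np

def double_cover(vertices, faces, boundary):
    """Glue a reversed-orientation copy of a disk-type mesh to itself along its
    boundary, producing a genus-0 closed mesh."""
    n = len(vertices)
    boundary = set(int(b) for b in boundary)
    new_index, dup = {}, []
    for v in range(n):
        if v in boundary:
            new_index[v] = v                    # boundary vertices are identified
        else:
            new_index[v] = n + len(dup)         # interior vertices get a duplicate
            dup.append(vertices[v])
    glued_vertices = np.vstack([vertices, np.asarray(dup).reshape(-1, 3)])
    # Reversing the vertex order of each copied face flips its orientation, so
    # the glued surface is closed and consistently oriented.
    mirrored = np.array([[new_index[c], new_index[b], new_index[a]]
                         for a, b, c in faces])
    return glued_vertices, np.vstack([faces, mirrored])

# Sanity check on a tiny disk-type mesh (one triangle, all vertices on the boundary):
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
F = np.array([[0, 1, 2]])
gv, gf = double_cover(V, F, boundary=[0, 1, 2])
print(len(gv) - (3 * len(gf)) // 2 + len(gf))   # Euler characteristic V - E + F = 2
```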
in the following ,we briefly describe the mentioned spherical conformal parameterization algorithm .the _ harmonic energy functional _ of a map from a genus-0 closed surface to the unit sphere is defined as in the space of mappings , the critical points of are called _harmonic mappings_. for genus-0 closed surfaces , conformal maps are equivalent to harmonic maps .therefore , to find a spherical conformal map , we can consider solving the following laplace equation subject to the spherical constraint , where is the tangential component of on the tangent plane of .note that this problem is nonlinear . in ,the authors linearize this problem by solving the equation on the complex plane : given three point boundary correspondences , where for .note that since the target domain is now .now the problem ( [ eqt : laplace ] ) becomes linear since is linear and the nonlinear constraint in the original problem ( [ eqt : original ] ) is removed . after solving the problem ( [ eqt : laplace ] ) ,the inverse stereographic projection is applied for obtaining a spherical parameterization .note that in the discrete case , the conformality of the inner region on the complex plane is negligible but that of the outer region on the complex plane is quite large .correspondingly , the conformality distortion near the north pole of the sphere is quite large. therefore , to correct the conformality distortion near the north pole , the authors in propose to apply the south - pole stereographic projection to project the sphere onto the complex plane . unlike the result obtained by solving equation ( [ eqt : laplace ] ) , the part with high conformality distortion is now at the inner region on the plane . by fixing the outermost region and composing the map with a suitable quasi - conformal map, the distortion of the inner region can be corrected . finally , by the inverse south - polestereographic projection , a bijective spherical conformal parameterization with negligibly low distortions can be obtained .readers are referred to and for more details of the harmonic map theory and the abovementioned algorithm respectively .the combination of the double covering technique and the fast spherical conformal parameterization algorithm in is particularly advantageous .it should be noted that because of the symmetry of the double covered surface , half of the entries in the coefficient matrix of the discretization of the laplace equation ( [ eqt : laplace ] ) are duplicated .therefore , even we have doubled the size of the problem under the double covering technique , we can save half of the computational cost of the coefficient matrix by only computing half of the entries .moreover , the spherical conformal parameterization algorithm in involves solving only two sparse symmetric positive definite systems of equations .therefore , the computation is still highly efficient . after finding a spherical conformal map for the glued surface using the parameterization algorithm ,note that by symmetry , we can separate the unit sphere into two parts , each of which exactly corresponds to one of and . since our goal is to find a disk conformal map , we put our focus on only one of the two parts .now , we apply a mbius transformation on so that the two parts become the northern and southern hemispheres of . after that , by applying the stereographic projection defined by the southern hemisphere is mapped onto the open unit disk . 
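the step just described can be sketched in a few lines : assemble the cotangent - weight system , solve the linearized laplace equation on the complex plane with a few pinned vertices playing the role of the boundary correspondence , and move between the plane and the sphere with the stereographic projections . the pinned target points , the 1/2 factor in the weights and the tetrahedron used in the usage example are illustrative choices , not the ones used in the original algorithm .

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cotangent_weights(vertices, faces):
    """Sparse matrix of cotangent weights k_uv (constant factors vary between
    references; (cot a + cot b)/2 is used here)."""
    n = len(vertices)
    I, J, W = [], [], []
    for (a, b, c) in faces:
        for i, j, k in ((a, b, c), (b, c, a), (c, a, b)):
            u, v = vertices[i] - vertices[k], vertices[j] - vertices[k]
            w = 0.5 * np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            I += [i, j]; J += [j, i]; W += [w, w]
    return sp.csr_matrix((W, (I, J)), shape=(n, n))

def solve_planar_laplace(vertices, faces, pinned, pinned_pos):
    """Discrete harmonic map to the complex plane: minimize sum k_uv |z_u - z_v|^2
    with a few pinned vertices (illustrative target points)."""
    K = cotangent_weights(vertices, faces)
    L = (sp.diags(np.asarray(K.sum(axis=1)).ravel()) - K).tocsr()
    n = L.shape[0]
    free = np.setdiff1d(np.arange(n), pinned)
    z = np.zeros(n, dtype=complex)
    z[pinned] = pinned_pos
    A = L[free][:, free].tocsc().astype(complex)
    rhs = -(L[free][:, pinned].astype(complex) @ z[pinned])
    z[free] = spla.spsolve(A, rhs)
    return z

def inverse_stereographic(z):
    """Lift the complex plane to the unit sphere (image avoids the north pole)."""
    s = np.abs(z) ** 2
    return np.column_stack([2 * z.real, 2 * z.imag, s - 1.0]) / (s + 1.0)[:, None]

def stereographic_to_disk(p):
    """Projection from the north pole; the southern hemisphere (z < 0) lands
    inside the open unit disk."""
    return (p[:, 0] + 1j * p[:, 1]) / (1.0 - p[:, 2])

# Tiny usage on a tetrahedron (a stand-in for the glued, genus-0 mesh):
V = np.array([[1.0, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
F = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
z = solve_planar_laplace(V, F, pinned=np.array([1, 2, 3]),
                         pinned_pos=np.array([0.0, 1.0, 1j]))
sphere = inverse_stereographic(z)
disk = stereographic_to_disk(sphere[sphere[:, 2] < 0])
print(np.round(np.abs(disk), 3))   # strictly inside the unit disk
```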
since the mbius transformation and the stereographic projection are both conformal mappings , the combination of the above steps provides a conformal map .theoretically , by the symmetry of the double covered surface , the boundary of the planar region obtained by the above stereographic projection should be a perfect unit circle .however , in the discrete case , due to irregular triangulations of the meshes and the conformality distortions of the map , the boundary is usually different from a perfect circle , as suggested in the experimental results in . in other words ,the planar region we obtained after applying the stereographic projection may not be a unit disk .an illustration is given in figure [ fig : foot_disk_unnormalized ] .to solve this issue , we need one further step to enforce the circular boundary , at the same time maintaining low conformality distortions and preserving the bijectivity of the parameterization . .] to control the conformality distortion and the bijectivity , our idea is to normalize the boundary and then compose the map with a _ quasi - conformal map_. quasi - conformal maps are a generalization of conformal maps , which are orientation preserving homeomorphisms between riemann surfaces with bounded conformality distortions .intuitively , a conformal mapping maps infinitesimal circles to infinitesimal circles , while a quasi - conformal mapping maps infinitesimal circles to infinitesimal ellipses with bounded eccentricity .mathematically , a _ quasi - conformal map _ satisfies the beltrami equation for some complex - valued functions with . is called the _ beltrami coefficient _ of .the beltrami coefficient captures the important information of the mapping .for instance , the angles and the magnitudes of both the maximal magnification and the maximal shrinkage can be easily determined by the beltrami coefficient ( see figure [ fig : qc ] ) . specifically , the angle of the maximal magnification is with the magnifying factor , and the angle of the maximal shrinkage is the orthogonal angle with the shrinking factor .the maximal dilation of is given by : it is also noteworthy that is conformal around a small neighborhood of if and only if .hence , is a good indicator of the angular distortions of a mapping .in fact , the norm of the beltrami coefficient is not only related to the conformality distortion but also the bijectivity of the associated quasi - conformal mapping ,as explained by the following theorem : [ bijectivity ] if is a map satisfying , then is bijective .see . this theorem can be explained with the aid of the jacobian of .the jacobian of is given by suppose , then we have and .therefore , is positive everywhere .since is simply - connected and is proper , we can conclude that is a diffeomorphism .in fact , is a universal covering map of degree 1 .therefore , must be bijective. one important consequence of theorem [ bijectivity ] is that we can easily achieve the bijectivity of a quasi - conformal map by enforcing its associated beltrami coefficient to be with supremum norm less than 1 .moreover , it is possible for us to reconstruct a mapping by a given beltrami coefficient , as explained by the following theorem : [ correspondence ] let and be two simply - connected open surfaces . given 2-point correspondences , every beltrami coefficient with associated with a unique quasi - conformal homeomorphism .see . for the aspect of numerical computations , lui _ et al . 
_ proposed the linear beltrami solver ( * lbs * ) , a fast algorithm for reconstructing a quasi - conformal map on a rectangular domain from a given beltrami coefficient .the key idea of * lbs * is as follows . by expanding the beltrami equation ( [ beltramieqt ] ) ,we have suppose .then , and can be expressed as linear combinations of and : where similarly , we can express and as linear combinations of and : hence , to solve for a quasi - conformal map , it remains to solve where in the discrete case , the above elliptic pdes ( [ eqt : beltramipde ] ) can be discretized into sparse linear systems . for details, please refer to . in the following discussion, we denote the quasi - conformal map associated with the beltrami coefficient obtained by * lbs * by .another important property of quasi - conformal mappings is about their composition mappings .in fact , the beltrami coefficient of a composition mapping can be explicitly expressed in terms of the beltrami coefficients of the original mappings .[ composition ] let and be two quasi - conformal mappings . then the beltrami coefficient of is given by in particular , if , then since , we have hence is conformal .see .in other words , by composing two quasi - conformal maps whose beltrami coefficients satisfy the above condition , one can immediately obtain a conformal map .this observation motivates the following step . to enforce the circular boundary of the parameterization , we first normalize the boundary of the region to the unit circle : for all .denote the normalized region by . since the vertices near the boundary of the region may be very dense , a direct normalization of the boundary may cause overlaps of the triangulations as well as geometric distortions on the unit disk . to eliminate the overlaps and the distortions of , we apply the linear beltrami solver to construct another quasi - conformal map with the normalized boundary constraints .then by the composition property , the composition map becomes a conformal map .more specifically , denote the beltrami coefficient of the mapping from the normalized planar region to the original surface by .we reconstruct a quasi - conformal map with beltrami coefficient on the unit disk by extending the linear beltrami solver , so that it is applicable not only on rectangular domains but also circular domains .we compute a map by applying the linear beltrami solver : with the circular boundary constraint .note that by the composition property stated in theorem [ composition ] , is a conformal map from the original surface to the unit disk . finally , the bijectivity of the composition map is supported by theorem [ bijectivity ] , since the beltrami coefficient of the composition map is with supremum norm less than 1 .this completes the task of finding a bijective disk conformal parameterization .the numerical implementation of our proposed method is explained in section [ implementation ] .in this section , we describe the numerical implementation of our proposed algorithm in details . 
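before turning to the triangular - mesh details , the planar sketch below illustrates the normalization - and - correction idea just described : the boundary vertices are pushed onto the unit circle by the radial projection p -> p/|p| ( an assumed form of the normalization ) , and the face - wise beltrami coefficient mu = f_zbar / f_z of a piecewise - linear map is evaluated , which is the quantity fed to the linear beltrami solver .

```python
import numpy as np

def normalize_boundary(uv, boundary_idx):
    """Push boundary vertices of a planar parameterization onto the unit circle
    by the radial projection p -> p / |p| (an assumed form of the normalization)."""
    uv = uv.copy()
    b = uv[boundary_idx]
    uv[boundary_idx] = b / np.linalg.norm(b, axis=1, keepdims=True)
    return uv

def beltrami_per_face(faces, z, w):
    """Beltrami coefficient mu = f_zbar / f_z of the piecewise-linear map taking
    planar points z (complex) to planar points w (complex), one value per face."""
    mu = np.empty(len(faces), dtype=complex)
    for t, (i, j, k) in enumerate(faces):
        dz = np.array([[(z[j] - z[i]).real, (z[j] - z[i]).imag],
                       [(z[k] - z[i]).real, (z[k] - z[i]).imag]])
        dw = np.array([w[j] - w[i], w[k] - w[i]])
        fx, fy = np.linalg.solve(dz, dw)          # df/dx, df/dy on this face
        mu[t] = (fx + 1j * fy) / (fx - 1j * fy)   # f_zbar / f_z
    return mu

# A nearly conformal map (w = z**2 away from the origin) gives small |mu|,
# while an orientation-reversing map gives |mu| > 1.
z = np.array([1 + 0j, 1.1 + 0j, 1 + 0.1j, 1.1 + 0.1j])
F = [(0, 1, 2), (1, 3, 2)]
print(np.abs(beltrami_per_face(F, z, z**2)))                  # ~0.03
print(np.abs(beltrami_per_face(F, z, np.conj(z) + 0.2 * z)))  # 5.0
```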
in the discrete case , 3d surfacesare commonly represented by triangular meshes .discrete analogs of the theories on the smooth surfaces are developed on the triangulations .we first briefly describe the discrete version of the mentioned double covering technique for obtaining a genus-0 closed mesh .this discretization was also applied in to compute conformal gradient fields of surfaces with boundaries .a triangulation of a smooth simply - connected open surface consists of the vertex set , the edge set and the triangular face set .each face can be represented as an ordered triple ] to ] to ] . the _ discrete harmonic energy _of is given by \in \widetilde{k } } k_{uv } ||\psi(u)-\psi(v)||^2,\ ] ] where with being the angles opposite to the edge ] on the triangulation and a triangle ] .note that the above linear system is sparse and symmetric positive definite .therefore , it can be efficiently solved .it is noteworthy that due to the symmetry of the double covered surface , for every edge ] in the duplicated triangulation such that where are the angles opposite to the edge ] .therefore , only half of the vertices and faces are needed for computing the whole coefficient matrix to solve the laplace equation ( [ eqt : laplace ] ) .more explicitly , equation ( [ eqt : laplace_linear ] ) can be expressed as the following form : here , and are respectively the coordinates of the non - boundary vertices of and , are the coordinates of the glued vertices , and is a sparse symmetric positive definite matrix .it follows that we can save half of the computational cost in finding all the cotangent weights in .hence , the computation of the spherical conformal map is efficient even the number of vertices and faces is doubled under the double covering step .another important mathematical tool in our proposed algorithm is the quasi - conformal mapping .quasi - conformal mappings are closely related to the beltrami coefficients .it is important to establish algorithms for computing the beltrami coefficient associated with a given quasi - conformal map , as well as for computing the quasi - conformal map associated with a given beltrami coefficient .we first focus on the computation of the beltrami coefficients . in the discrete case ,suppose are two triangular meshes with the same number of vertices , faces and edges , and is an orientation preserving piecewise linear homeomorphism .it is common to discretize the beltrami coefficient on the triangular faces . to compute the beltrami coefficient associated with , we compute the partial derivatives on every face on .suppose on corresponds to a triangular face on under the mapping .the approximation of on can be computed using the coordinates of the six vertices of and .since the triangulations are piecewise linear , we can place and on using suitable rotations and translations to simplify the computations .hence , without loss of generality , we can assume that and are on . 
specifically ,suppose ] , where , and .recall that and .hence , to discretize the beltrami coefficient , we only need to compute and on every triangular face .it is natural to use the differences between the vertex coordinates for the approximation .we define then , we can compute the beltrami coefficient on by this approximation is easy to compute .hence , it is convenient to obtain the beltrami coefficient associated with a given quasi - conformal map in the discrete case .a relatively complicated task is to compute the quasi - conformal map associated with a given beltrami coefficient . to achieve this, we apply the * lbs * to reconstruct a quasi - conformal map from a given beltrami coefficient , with the boundary vertices of the disk fixed .we now briefly explain the discretization of the * lbs*. recall that the quasi - conformal map associated with a given beltrami coefficient can be obtained by solving equation ( [ eqt : beltramipde ] ) .the key idea of * lbs * is to discretize equation ( [ eqt : beltramipde ] ) into sparse spd linear systems of equations so that the solution can be efficiently computed . for each vertex ,let be the collection of the neighboring faces attached to .let ] on the complex plane and the inverse stereographic projection is applied . then the south - pole step aims to correct the conformality distortion near the north pole of the sphere caused by the discretization and approximation errors .in fact , since we are only interested in half of the glued surface , the south - pole step may be skipped as we can take the southern hemisphere obtained by the first step as our result .it may already be with acceptable conformality .the conformality distortion in the north - pole step in is primarily caused by the choice of the boundary triangle ] in equation ( [ eqt : laplace ] ) , half of the computational cost in computing the spherical conformal mapping can be further reduced .table [ enhancement ] shows the performance of the current version of our proposed method and the possible improved version of it without the south - pole step in , under a suitable choice of the boundary triangle ] for , while the current version of our method is fully automatic .hence , the current version of our proposed method is probably more suitable for practical applications until an automatic algorithm for searching for the most suitable boundary triangle $ ] is developed .in this paper , we have proposed a linear formulation for the disk conformal parameterizations of simply - connected open surfaces .we begin the algorithm by obtaining an initial planar parameterization via double covering and spherical conformal mapping .note that even the size of the surface is doubled by double covering , the combination of the double covering technique and the spherical conformal mapping results in an efficient computation because of the symmetry .after that , we normalize the boundary and compose the map with a quasi - conformal map so as to correct the conformality distortion and achieve the bijectivity . our proposed formulation is entirely linear , and hence the computation is significantly accelerated by over 60% when compared with the fastest state - of - the - art approaches . 
at the same time ,our parameterization results are of comparable quality to those produced by the other state - of - the - art approaches in terms of the conformality distortions , the bijectivity and the robustness .therefore , our proposed algorithm is highly practical in real applications , especially for the problems for which the computational complexity is the main concern . in the future ,we plan to explore more applications , such as remeshing and registration of simply - connected open surfaces , based on the proposed parameterization scheme .choi , k.c .lam , and l.m .lui , _ flash : fast landmark aligned spherical harmonic parameterization for genus-0 closed brain surfaces _ , siam journal on imaging sciences , volume 8 , issue 1 , pp . 6794 , 2015 .x. gu , y. wang , t. f. chan , p. m. thompson , and s .- t .yau , _ genus zero surface conformal mapping and its application to brain surface mapping _, ieee transactions on medical imaging , volume 23 , pp .949958 , 2004 .s. haker , s. angenent , a. tannenbaum , r. kikinis , and g. sapiro , _ conformal surface parameterization for texture mapping _ ,ieee transactions on visualization and computer graphics , volume 6 , issue 2 , pp .181189 , 2000 .b. lvy , s. petitjean , n. ray , and j. maillot , _ least squares conformal maps for automatic texture atlas generation _ ,acm transactions on graphics ( proceedings of acm siggraph 2002 ) , pp . 362371 , 2002 .l. m. lui , s. thiruvenkadam , y. wang , p. thompson , and t. f. chan , _ optimized conformal surface registration with shape - based landmark matching _, siam journal on imaging sciences , volume 3 , issue 1 , pp . 5278 , 2010 .j - f remacle , c. geuzaine , g. compre , and e. marchandise , _ high quality surface remeshing using harmonic maps _ , international journal for numerical methods in engineering , volume 83 , issue 4 , pp .403425 , 2010 .
surface parameterization is widely used in computer graphics and geometry processing . it simplifies challenging tasks such as surface registrations , morphing , remeshing and texture mapping . in this paper , we present an efficient algorithm for computing the disk conformal parameterization of simply - connected open surfaces . a double covering technique is used to turn a simply - connected open surface into a genus-0 closed surface , and then a fast algorithm for parameterization of genus-0 closed surfaces can be applied . the symmetry of the double covered surface preserves the efficiency of the computation . a planar parameterization can then be obtained with the aid of a mbius transformation and the stereographic projection . after that , a normalization step is applied to guarantee the circular boundary . finally , we achieve a bijective disk conformal parameterization by a composition of quasi - conformal mappings . experimental results demonstrate a significant improvement in the computational time by over 60% . at the same time , our proposed method retains comparable accuracy , bijectivity and robustness when compared with the state - of - the - art approaches . applications to texture mapping are presented for illustrating the effectiveness of our proposed algorithm .
the migration of eukaryotic cells in complex environments plays a significant role in many biological processes , such as embryonic morphogenesis , immune defense , and tumor invasion .one widely encountered biomechanical environment for migrating eukaryotic cells _ in vivo _ is the three - dimensional ( 3d ) extracellular matrix ( ecm ) , composed of a dense network of biopolymers such as collagen and fibrin . to make their ways through ecm, cells apply a variety of different strategies , involving mechanisms of cytoskeleton force generation .protease production , and cell adhesions .whatever strategy cells use , cell - generated forces acting on the ecm are valuable clues to infer what is happening within a migrating 3d cell . recently ,experimental advances have been made in quantifying the ecm s response to migrating cells . in these experimental setups ,cells are often cultured in artificially synthesized extracellular matrix ( ecm ) , such as type i collagen gel , which efficiently mimics the environment in living tissues . as the cells migrate , they deform the surrounding environment ; this deformation is trackable by for example placing marker beads in the gel. however , from a theoretical perspective , a gap still exists between knowing the deformation of the ecm and determining what forces cells have exerted on that ecm . the inversion from the former tothe latter remains elusive , because the ecm display very complex properties such as strain - stiffening , non - affine deformations . in a recent work ,steinwachs _ et al _ attempted a reconstruction scheme based on a continuous elasticity model , which phenomenologically captures the strain - stiffening property of collagen gels .this effort goes beyond previous approaches which used linear elastic assumptions and hence represents a step forward .however it remains unclear how accurately this method would capture the mechanics of a real biopolymer network . in particular real networks are expected to exhibit micromechanical fluctuations in its properties on the scale of the network elements which are not very different than the scale of the embedded cell .therefore , a more detailed understanding of ecm networks is essential for a quantitatively successful reconstruction of cellular forces in 3d ecm . here, we use a lattice - based mechanical model to study this force reconstruction problem .in particular , we make use of recent progress in the soft - matter physics community towards the understanding of the ecm systems . it has been shown that these systems can be modeled as a disordered network of semiflexible polymers with an interplay between bond stretching and bending . on the basis of this idea , computational models have beed built to capture the critical properties such as strain stiffening , negative normal stress , and non - affine deformations .these models roughly fall into two categories : lattice - based models and off - lattice models ( mikado network) .the former places straight fibers on a regular lattice ; these fibers are determined by straight segments of bonds on a diluted network .the other approach consists of placing stochastically positioned fibers , intersecting with each other and forming crosslinks ; this is usually referred to as a mikado model . 
as in both of these cases the mechanicsis controlled by critical behavior around the maxwell point ( at which bending becomes the dominant response mechanism at small strain ) , the results from these different approaches are extremely consistent with each other . in this paper, we study a reconstruction scheme based on a two - dimensional ( 2d ) diluted lattice model .we chose the lattice model for its computational efficiency , in addition , the fact , that lattice - based model in both 2d and 3d exhibit nearly identical nonlinear elastic response , has enabled us to qualitatively explore the feasibility of our scheme without a to carryout a full 3d simulation .our approach will enable us to study the feasibility of doing this reconstruction even if we do not know the exact microstructure of the material .we model the ecm as a diluted triangular lattice ( fig 1 a ) . in this lattice, each bond , with stretching stiffness k and bending stiffness , exists stochastically with a probability p. the hamiltonian is : in which is the natural length of each bond . when the bond between node i and j is present and when the bond is removed .the first term refers to the stretching energy : sums over all neighboring lattice sites and is the length of bond in deformed state .the second term represents the bending energy : sums over all groups of three co - linear consecutive lattice sites in the reference state and is the change of angle in the deformed state .in such a lattice , satisfies , is the coordination number , which is 6 for a triangular lattice , and is the average connectivity of biopolymer networks . sinceexperiments have shown , we set in our model .we insert a round cell into the network by cutting a circular hole in the middle ( fig 1 b ) .the intersections between the cell boundary and network bonds are the attached nodes , which connect the cell with its ecm environment. each one of these nodes can be located by its angular position in the circular cell boundary .then we stretch the cell in the normal directions ( fig 1 c ) ; for the i th attached node , its displacement towards cell center satisfies : in which is the position of the i th node , are the determining parameters of cell stretching and the length of is the number of degrees of freedom , the above formula accounts for the fact that spatially close attached nodes should undergo similar displacements to preserve the smoothness of cell membrane .hence , we limit ourselves to relatively few large wavelength modes in setting the cell boundary deformation . once the cell is deformed , we relax the lattice into its energy minimum state by the conjugate gradient method .each parameter setting leads us to a particular cell stretching pattern : which results in ecm deformation . in our scheme, we assume that ecm deformation is measured through the displacements a set of m marker beads , as following : we stochastically set the parameters and generate the corresponding `` observed '' displacements of marker beads . by minimizing we can approximate the with , thus realizing the reconstruction . 
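the model just described can be sketched as follows . the prefactors and normalizations are illustrative , but the structure follows the description above : a harmonic stretching term over the bonds that are present , a harmonic bending term over initially co - linear bond pairs , and a truncated fourier series for the smooth inward displacement of the attached nodes .

```python
import numpy as np

def network_energy(pos, bonds, present, triples, k_stretch, k_bend, a=1.0):
    """Deformation energy of the diluted fiber network (illustrative prefactors):
    pos      (N, 2) node positions in the deformed state
    bonds    (B, 2) node indices of the lattice bonds
    present  (B,)   0/1 dilution flags g_ij
    triples  (T, 3) indices (i, j, k) of initially co-linear consecutive sites
    """
    d = pos[bonds[:, 1]] - pos[bonds[:, 0]]
    stretch = 0.5 * k_stretch * np.sum(present * (np.linalg.norm(d, axis=1) - a) ** 2)
    bend = 0.0
    for i, j, k in triples:                      # angle change at the middle site j
        u, v = pos[i] - pos[j], pos[k] - pos[j]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        theta = np.arccos(np.clip(c, -1.0, 1.0))
        bend += 0.5 * k_bend * (theta - np.pi) ** 2   # straight in the reference state
    # (In the full model the bending term is active only when both bonds of the
    #  triple are present; that bookkeeping is omitted here for brevity.)
    return stretch + bend

def boundary_displacement(phi, coeffs, c0):
    """Inward displacement of the attached nodes as a truncated fourier series in
    the angular position phi (an assumed form of the smooth stretching pattern)."""
    u = np.full_like(np.asarray(phi, dtype=float), c0)
    for m, (am, bm) in enumerate(coeffs, start=1):
        u += am * np.cos(m * phi) + bm * np.sin(m * phi)
    return u
```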
to minimize the error function , we apply a combination of particle swarm optimization ( pso ) and the downhill simplex algorithm : we first run the pso search several times and pick the best value reached , and then we run the downhill simplex algorithm to further minimize the error function . we carry out most of our simulations on a 60 x 60 lattice , with p = 0.57 ; to eliminate possible boundary effects from the lattice , we verified that a 100 x 100 lattice gives similar results . we set the lattice segment length to be and our round cell has an initial radius . physically , a corresponds to the persistence length of collagen fibers ( ) . as already mentioned , we maintain the smoothness of the cell membrane by limiting the deformation to the longest wavelength modes when stretching the cell : we fix and , and is a stochastic variable uniformly distributed between 0 and 1 . a constant ( usually around 2a to 3a in our simulations ) is added to make sure that the cell mostly contracts , as most cells studied in such ecm systems are contractile . to track the ecm deformation , we initially let all lattice sites be inhabited by marker beads . we solve the inverse problem of `` what stretching pattern leads to the observed bead displacements ? '' with particle swarm optimization and the downhill simplex algorithm . since the particle swarm optimization algorithm applies a stochastic search strategy , instead of giving a `` yes or no '' answer , we measure how likely the reconstruction is to be successful by running the same simulation dozens of times with different random seeds . here , by `` a successful reconstruction '' , we mean that the predictions for all attached nodes on the cell membrane deviate by no more than 5% from the input data . in addition , the chance of success for a pso procedure always increases with the number of search rounds , and ideally one could always reach the right answer with an infinite amount of computer time . due to limited resources , and also for the sake of comparison , we assign at most 3 rounds of pso search with a maximum of 100 steps in each simulation , followed by the downhill simplex method . we run groups of 20 simulations for a variety of values . the results indicate the existence of a limit of resolution in our scheme for spatial frequencies . our scheme works well if the morphology change of the cell is restricted to the 4 or 5 longest wavelength modes , but the algorithm experiences a sharp drop in performance when stretching at higher frequencies is present ( fig 2 ) . that is to say , if a cell changes its shape in a highly discontinuous way , the reconstruction of cellular forces becomes extremely hard .
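a minimal sketch of the reconstruction loop described above . the forward model is left as a user - supplied callable that relaxes the network and returns the predicted bead displacements ; simple random restarts stand in for the particle swarm stage , and scipy s nelder - mead implementation plays the role of the downhill simplex refinement . the toy linear forward model at the end is purely for demonstration .

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct(observed, forward, n_params, n_rounds=3, seed=0):
    """Recover cell-stretching parameters from observed bead displacements by a
    crude global search followed by a downhill-simplex refinement."""
    rng = np.random.default_rng(seed)

    def error(p):
        d = forward(p) - observed
        return float(np.sum(d * d))

    # global stage: keep the best of a few batches of random candidate parameters
    candidates = rng.uniform(0.0, 1.0, size=(100 * n_rounds, n_params))
    best = min(candidates, key=error)

    # local stage: downhill simplex (Nelder-Mead) refinement
    res = minimize(error, best, method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-9})
    return res.x, res.fun

# Toy usage with a linear stand-in forward model (not the lattice relaxation):
A = np.random.default_rng(1).normal(size=(40, 5))
true_p = np.array([0.2, 0.5, 0.1, 0.7, 0.3])
p_hat, err = reconstruct(A @ true_p, lambda p: A @ p, n_params=5)
print(np.round(p_hat, 3), err)
```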
given the continuous nature of the cell membrane , such highly discontinuous shape changes are unlikely to occur . errors are unavoidable in all kinds of experimental measurements . we therefore test the robustness of our scheme in the face of errors in the determination of the ecm deformation . we stochastically disturb the positions of all marker beads by some percentage of the bond length after the network has relaxed to its energy - minimized state , and then solve the inverse problem with the disturbed bead displacement data . we find the reconstruction to be significantly robust , with accurate forces recovered even with noise as large as 30% ( see fig 3a ) . a second possible source of error is a mistaken value of the parameters of the system . we therefore carry out the forward and inverse problems with different values of the stretching and bending stiffnesses . as can be seen in fig 3b , our scheme gives robust answers for stretching and bending stiffness changes of roughly 10% . these results are encouraging , as we plan in future work to carry out the inversion for actual ( rather than simulated ) experimental data . [ figure 3 : the green dots are values of the stiffnesses used in the inverse algorithm leading to correct reconstructions , while red x signs are the values leading to failed reconstructions . ] to study the relation between the reconstruction efficiency and the distribution of beads , we start by `` turning off '' the marker beads far away from the cell center ( ) . noise - free simulations of all lead to perfect performance , indicating that sampling the whole network with marker beads is not necessary for a successful reconstruction . to further investigate the effects of the distribution of marker beads near the cell , we keep only the marker beads within a range of to from the center of the cell . as seen in table 1 , elimination of beads within a range of 10 to 15 leads to a sharp drop in reconstruction performance , which means that marker beads near the cell are not all equally important in our scheme : the closer , the more important . in addition , inversion attempts with both randomly and regularly diluted ( up to 50% ) distributions of beads in the neighborhood of the cell suffer no decrease in performance , showing the insensitivity of our scheme to bead density . [ table 1 : relation between bead positions and reconstruction performance ( with 5% noise ) . ] as we have discussed above , the response of the ecm to cell - induced deformation is non - linear and non - affine . this is ultimately due to the fact that the network is below the maxwell point and hence exhibits a transition from bending - dominated to stretching - dominated behavior as the strain increases and the fibers become more aligned . a second feature is that the ecm , both in our model and in experiment , is highly heterogeneous on the cellular scale . the first point argues against being able to accurately invert the bead displacements by using linear elastic models . to figure out how much this matters in practice , we compare our results with calculations on a full triangular lattice , with all bonds present except for the ones attached to the cell membrane , representing an approximately affine reference model , and perform our comparison in two ways : one forward and one backward .
in the forward comparison ,we contract our cell exactly the same way in a diluted lattice and a full lattice and compare the displacements of marker beads , in a range between 10 and 15 from the center of the cell . as seen in fig 4(a - c ) , large deviations in network responses are observed in these areas close to the cell . in the backward perspective , we relax our system on a diluted lattice and then attempt to recover the cell deformation based on a full lattice ( fig 5(a , b ) ) ; this would be analogous to doing the inverse problem with a linear elastic model . in all of our 20 attempts, reconstruction fails with very large deviations , up to 175% ( fig 5 ( c ) ) .our conclusion is that naive reconstructions will be highly inaccurate .we next turn to the second issue , that of heterogeneity .it is obvious that the details of the actual microstructure will vary from realization to realization , even for networks that are statistically the same .hence it is important to ask whether one needs to know a detailed model of the specific ecm or whether having a model in the same class is sufficient .this question is connected as well to whether a nonlinear continuum model which gets the correct macroscopic properties of the network can suffice for the inversion problem . to answer this question , we apply a similar strategy to our study of non - affine effects .we compare the responses to a same deforming cell of two ecm networks with the same dilution but different network connectivity . as seen in fig 5(a ), deformations may differ with a magnitude about twice the bond length in these two cases .in addition , we relax the cellularized network with one lattice , while reconstructing cell contraction based on a differently connected network , keeping the bulk properties invariant .similarly to the above affinity case , all simulations , without exception , lead to considerably wrong predictions .( fig 5(b ) ) .therefore , merely knowing the bulk properties will not lead one to correct force reconstruction , unless micromechanical information of the ecm is estimated to some extent .so far , all our studies have focused on the normal contraction of round cells . in realityhowever , cells often display long protrusive structures , pulling on the ecm networks . as a simple extension of our model, we have also studied elliptical cells .we allow an elliptical cell to move , while shrinking in its long axis . for this case , our simulations can quickly converge to the correct reconstruction .( fig 6(a , b ) ) .noticing that slight tangential movements are also possible , we also verify that our scheme works for a uniformly rotated round cell ( fig 6(c , d ) ) .for motion on two - dimensional surfaces , the method of traction force microscopy has given unprecedented insight into the way cells use their active cytoskeletal machinery to interact with the surface ( and , in the case of collective motion , with each other ) .it is therefore only natural to try to extend this methodology to cells moving within three - dimensional matrix materials .there are of course microscopy challenges in measuring the deformation of the matrix , but here for the sake of argument we have assumed that we can gather this information . instead we ask a different question , namely how does the known mechanical complexity of the ecm material limit our ability to invert deformations to find forces ? 
we have therefore explored the feasibility of reconstructing cell - generated forces from ecm deformation based on a mechanical ecm model . our model is based on a diluted triangular lattice , which has been shown to mimic both macroscopic properties of the lattice rheology , such as strain - stiffening and alignment , and local heterogeneities on the scale of individual cells . the results have shown both good and bad news for moving forward with this proposed inversion . on the one hand , we show the effectiveness and robustness of our inversion scheme . as long as we do not demand too high a spatial resolution , we can show that marker deformations do indeed determine cellular contractions . this limitation is not too surprising . we can imagine a pair of close neighboring adhesion sites on the cell membrane which undergo very different stretching ; if they exchange their stretching , almost the same displacements of the marker beads will be observed , thus making it too hard to distinguish the different configurations in the reconstruction . that explains why a relatively smoother stretching configuration , where close neighboring bonds stretch almost the same , will suffer much less from this degeneracy . fortunately , the fact that the cell membrane is relatively smooth should keep us from worrying too much about these troublesome degeneracies . finally , though we limit our investigation to 2d , we believe a 3d version will exhibit similar physics . any experiment - based reconstruction must be robust with respect to expected uncertainties in measurement and in assumed material properties . our scheme does well in filtering out the fluctuations of bead displacements . as for errors in the assumed values of the stretching and bending stiffnesses , the performance shows a relatively higher sensitivity to stretching than to bending . this is presumably because the strain induced by the cell is high enough to put the nearby lattice in a stretching - dominated regime , and it is these nearby points which are most important for the inversion . this is also consistent with our study of the effects of limitations on the available marker measurements . now for the bad news . to bridge the gap between our scheme and experiments , several more questions need to be addressed more carefully . we have shown that one needs to do a decent job on the actual microstructure of the lattice , not just on its average properties . we showed this by using a different realization of the network geometry to do the inversion and noted that there were rather severe inaccuracies . this of course depends on the details of the ecm geometry , and specifically on the size of the cell versus the scale of the lattice microstructure . our results , obtained for parameters which do a good job of reproducing the micro - mechanical variation of typical type i collagen gels , show however that this can be a real limitation .
one approach would be to measure the detailed micromechanics and use this information to guide the lattice construction . to our knowledge , the microstructure of collagen networks can be imaged through confocal reflection microscopy , and optical trap methods could help extract micromechanical information from the ecm . clearly , we do not know at present how these data can be `` written '' into a model for force reconstruction . in summary , we have developed a computational scheme to reconstruct cellular forces in a cell - ecm system . the guiding principle of our scheme is to model the ecm response to cell - generated forces with a simple lattice - based model . our scheme demonstrates robustness against noise in the marker bead measurements and against systematic errors in the material characterization . the results of our 2d exploration elucidate how non - affine effects affect the reconstruction accuracy . we also argue that the micromechanical properties of the ecm may be crucial for a precise reconstruction .
how cells move through 3d extracellular matrix ( ecm ) is of increasing interest in attempts to understand important biological processes such as cancer metastasis . just as for motion on 2d surfaces , it is expected that experimental measurements of cell - generated forces will provide valuable information for uncovering the mechanisms of cell migration . here , we use a lattice - based mechanical model of the ecm to study the cellular force reconstruction problem . we propose an efficient computational scheme to reconstruct cellular forces from the deformation and explore the performance of our scheme in the presence of noise , varying marker bead distributions , varying bond stiffnesses and changing cell morphology . our results show that micromechanical information , rather than merely the bulk rheology of the biopolymer networks , is essential for a precise recovery of cellular forces .
our knowledge of the physical world has reached a most remarkable state . we have established economical `` standard models '' both for cosmology and for fundamental physics , which provide the conceptual foundation for describing a vast variety of phenomena in terms of a small number of input parameters . no existing observations are in conflict with these standard models . this achievement allows us to pose , and have genuine prospects to answer , new questions of great depth and ambition . indeed , the standard models themselves , through their weirdness and esthetic failings , suggest several such questions . a good way to get a critical perspective on these standard models is to review the external inputs they require to make them go . ( there is a nice word for such input parameters : `` exogenous '' , born on the outside . `` endogenous '' parameters , by contrast , are explained from within . ) this exercise also exposes interesting things about the nature of the models . what is usually called the standard model of particle physics is actually a rather arbitrary truncation of our knowledge of fundamental physics , and a hybrid to boot . it is more accurate and more informative to speak of two beautiful theories and one rather ramshackle working model : the gauge theory , the gravity theory , and the flavor / higgs model . the gauge theory is depicted in figure 1 . it is based on the local ( yang - mills ) symmetry su(3 ) su(2 ) u(1 ) . the fermions in each family fall into 5 separate irreducible representations of this symmetry . they are assigned , on phenomenological grounds , the funny hypercharges displayed there as subscripts . of course , notoriously , the whole family structure is triplicated . one also needs a single su(3 ) singlet , su(2 ) doublet scalar `` higgs '' field with the appropriate hypercharge . taken together , gauge symmetry and renormalizability greatly restrict the number of independent couplings that are allowed . putting aside for a moment the yukawa couplings of the fermions and the higgs field , there are just three continuous parameters , namely the universal interaction strengths of the different gauge fields . ( there is a subtlety here regarding the charges . since implementing gauge symmetry does not automatically quantize charge , the hypercharge assignments might appear to involve many continuous parameters , which on phenomenological grounds we must choose to have extremely special values . and that is true , classically . but consistency of the quantum theory requires cancellation of anomalies . this requirement greatly constrains the possible hypercharge assignments , bringing us down essentially to the ones we adopt . ) i should emphasize that gauge symmetry and renormalizability are deeply tied up with the consistency and existence of quantum field theories involving vector mesons . taking a little poetic license , we could say that they are not independent assumptions at all , but rather consequences of special relativity and quantum mechanics . general relativity manifestly provides a beautiful , conceptually driven theory of gravity . it has scored many triumphs , both qualitative ( big bang cosmology , black hole physics ) and quantitative ( precession of mercury , binary pulsar ) . the low - energy effective theory of gravity and the other interactions is defined algorithmically by the minimal coupling prescription , or equivalently by restricting to low - dimension operators . in this context , `` low '' means small compared to the planck energy scale , so this effective theory is very effective indeed .
as in the gauge sector , symmetry here , general covariance greatly constrains the possible couplings , bringing us down to just two relevant parameters .almost all the observed phenomena of gravity are described using only one of these parameters , namely newton s gravitational constant .we are just now coming to accept that the other parameter , the value of the cosmological term , plays an important role in describing late - time cosmology .this impressive effective field theory of gravity is perfectly quantum - mechanical .it supports , for example , the existence of gravitons as the particulate form of gravity waves .there are major unsolved problems in gravity , to be sure , a few of which i ll discuss below , but they should nt be overblown or made to seem mystical .the third component of the standard model consists , one might say , of the potential energy terms .they are the terms that do nt arise from gauge or space - time covariant derivatives .( note that field strengths and curvatures are commutators of covariant derivatives . )all these terms involve the higgs field , in one way or another .they include the higgs field mass and its self - coupling , and the yukawa couplings .we know of no deep principle , comparable to gauge symmetry or general covariance , which constrains the values of these couplings tightly .for that reason , it is in this sector where continuous parameters proliferate , into the dozens .basically , we introduce each observed mass and weak mixing angle as an independent input , which must be determined empirically .the phenomenology is not entirely out of control : the general framework ( local relativistic quantum field theory , gauge symmetry , and renormalizability ) has significant consequences , and even this part of the standard model makes many non - trivial predictions and is highly over - constrained . in particular , the cabibbo - kobayashi - maskawa ( ckm ) parameterization of weak currents and cp violation has , so far , survived close new scrutiny at the b - factories intact .neutrino masses and mixings can be accommodated along similar lines , if we expand the framework slightly .the simplest possibility is to allow for minimally non - renormalizable ( mass dimension 5 ) ultra - yukawa " terms .these terms involve two powers of the scalar higgs field . to accommodate the observed neutrino masses and mixings, they must occur with very small coefficients .the emerging standard model of cosmology " is also something of a hybrid .one part of it is simply a concrete parameterization of the equation of state to insert into the framework of general relativistic models of a spatially uniform expanding universe ( friedmann - robertson - walker model ) ; the other is a very specific hypothesis about the primordial fluctuations from uniformity .corresponding to the first part , one set of exogenous parameters in the standard model of cosmology specifies a few average properties of matter , taken over large spatial volumes .these are the densities of ordinary matter ( i.e. , of baryons ) , of dark matter , and of dark energy . 
we know quite a lot about ordinary matter , of course , and we can detect it at great distances by several methods .it contributes about 3% of the total density .concerning dark ( actually , transparent ) matter we know much less .it has been seen " only indirectly , through the influence of its gravity on the motion of visible matter .we observe that dark matter exerts very little pressure , and that it contributes about 30% of the total density .finally dark ( actually , transparent ) energy contributes about 67% of the total density .it has a large _ negative _ pressure . from the point of view of fundamental physicsthis dark energy is quite mysterious and disturbing , as i ll elaborate shortly below .given the constraint of spatial flatness , these three densities are not independent .they must add up to a critical density that depends only the strength of gravity and the rate of expansion of the universe .fortunately , our near - total ignorance concerning the nature of most of the mass of the universe does not bar us from modeling its evolution .that s because the dominant interaction on large scales is gravity , and according to general relativity gravity does not care about details . according to general relativity , only total energy - momentum counts or equivalently , for uniform matter , total density and pressure . assuming these values for the relative densities , andthat the geometry of space is flat and still assuming uniformity we can use the equations of general relativity to extrapolate the present expansion of the universe back to earlier times .this procedure defines the standard big bang scenario .it successfully predicts several things that would otherwise be very difficult to understand , including the red shift of distant galaxies , the existence of the microwave background radiation , and the relative abundance of light nuclear isotopes .it is also internally consistent , and even self - validating , in that the microwave background is observed to be uniform to high accuracy , namely to a few parts in .the other exogenous parameter in the standard model of cosmology concerns the small departures from uniformity in the early universe .the seeds grow by gravitational instability , with over - dense regions attracting more matter , thus increasing their density contrast with time .this process plausibly could , starting from very small seeds , eventually trigger the formation of galaxies , stars , and other structures we observe today . _a priori _ one might consider all kinds of assumptions about the initial fluctuations , and over the years many hypotheses have been proposed . butrecent observations , especially the recent , gorgeous wmap measurements of microwave background anisotropies , favor what in many ways is the simplest possible guess , the so - called harrison - zeldovich spectrum . in this set - up the fluctuations are assumed to be strongly random uncorrelated and gaussian with a scale invariant spectrum at horizon entry , to be precise and to affect both ordinary and dark matter equally ( adiabatic fluctuations ) . given these strong assumptions just one parameter , the overall amplitude of fluctuations , defines the statistical distribution completely . 
with the appropriate value for this amplitude , and the relative density parameters i mentioned before , this standard cosmological model fits the wmap data and other measures of large - scale structure remarkably well . the structure of the gauge sector of the standard model gives powerful suggestions for its further development . the product structure , the reducibility of the fermion representation , and the peculiar values of the hypercharge assignments all suggest the possibility of a larger symmetry that would encompass the three gauge factors , unite the representations , and fix the hypercharges . the devil is in the details , and it is not at all automatic that the observed , complex pattern of matter will fit neatly into a simple mathematical structure . but , to a remarkable extent , it does . the smallest simple group into which su(3 ) su(2 ) u(1 ) could possibly fit , that is su(5 ) , fits all the fermions of a single family into two representations , and the hypercharges click into place . a larger symmetry group , so(10 ) , fits these and one additional singlet particle into a single representation , the 16-dimensional spinor . the additional particle is actually quite welcome . it has the quantum numbers of a right - handed neutrino , and it plays a crucial role in the attractive `` seesaw '' model of neutrino masses , of which more below . this unification of quantum numbers , though attractive , remains purely formal until it is embedded in a physical model . that requires realizing the enhanced symmetry in a local gauge theory . but nonabelian gauge symmetry requires universality : it requires that the relative strengths of the different couplings must be equal , which is not what is observed . fortunately , there is a compelling way to save the situation . if the higher symmetry is broken at a large energy scale ( equivalently , a small distance scale ) , then we observe interactions at smaller energies ( larger distances ) whose intrinsic strength has been affected by the physics of vacuum polarization . the running of couplings is an effect that can be calculated rather precisely , in favorable cases ( basically , for weak coupling ) , given a definite hypothesis about the particle spectrum . in this way we can test , quantitatively , the idea that the observed couplings derive from a single unified value . results from these calculations are quite remarkable and encouraging . if we include vacuum polarization from the particles we know about in the minimal standard model , we find approximate unification . if we include vacuum polarization from the particles needed to expand the standard model to include supersymmetry , softly broken at the tev scale , we find accurate unification . the unification occurs at a very large energy scale , of order 10^16 gev . this success is robust against small changes in the susy breaking scale , and is not adversely affected by incorporation of additional particle multiplets , so long as they form complete representations of the unified group . on the other hand , many proposals for physics beyond the standard model at the tev scale ( technicolor models , large extra dimension scenarios , most brane - world scenarios ) corrupt the foundations of the unification of couplings calculation , and would render its success accidental . for me , this greatly diminishes the credibility of such proposals .
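to give a sense of how concrete this statement is , here is a toy one - loop estimate of the running ( a sketch only , not the precision analysis : it ignores thresholds and two - loop terms , and the beta coefficients and couplings at the z mass are standard textbook values rather than anything taken from this essay ) .
....
import numpy as np

# one-loop running of the inverse gauge couplings:
# alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu / M_Z)
alpha_inv_mz = np.array([59.0, 29.6, 8.5])            # U(1)_Y (GUT norm.), SU(2), SU(3)
b_sm   = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # standard model coefficients
b_mssm = np.array([33.0 / 5.0, 1.0, -3.0])            # MSSM coefficients

def run(alpha_inv, b, mu, mz=91.19):
    return alpha_inv - b / (2.0 * np.pi) * np.log(mu / mz)

for mu in (1e3, 1e10, 1e16):
    print(f"mu = {mu:.0e} GeV  SM: {run(alpha_inv_mz, b_sm, mu).round(1)}"
          f"  MSSM: {run(alpha_inv_mz, b_mssm, mu).round(1)}")
....
with the standard - model coefficients the three inverse couplings miss one another at high scales , while the supersymmetric coefficients bring them close together near 10^16 gev , which is the quantitative content of the claim above .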
low - energy supersymmetry is desirable on several other grounds , as well . the most important has to do with the `` black sheep '' of the standard model , the scalar higgs doublet . in the absence of supersymmetry radiative corrections to the vacuum expectation value of the higgs particle diverge , and one must fix its value ( which , of course , sets the scale for electroweak symmetry breaking ) by hand , as a renormalized parameter . that leaves it mysterious why the empirical value is so much smaller than unification scales . upon more detailed consideration the question takes shape and sharpens considerably . enhanced unification symmetry requires that the higgs doublet should have partners , to fill out a complete representation . however these partners have the quantum numbers to mediate proton decay , and so if they exist at all their masses must be very large , of order the unification scale , 10^16 gev . this reinforces the idea that such a large mass is what is `` natural '' for a scalar field , and that the light doublet we invoke in the standard model requires some special justification . it would be facile to claim that low - energy supersymmetry by itself cleanly solves these problems , but it does provide powerful theoretical tools for addressing them . the fact that an enormous new mass scale for unification is indicated by these calculations is profound . this enormous mass scale is inferred entirely from low - energy data . the disparity of scales arises from the slow ( logarithmic ) running of inverse couplings , which implies that modest differences in observed couplings must be made up by a long interval of running . the appearance of a very large mass scale is welcome on several grounds . * right - handed neutrinos can have normal , dimension - four yukawa couplings to the lepton doublet . in so(10 ) such couplings are pretty much mandatory , since they are related by symmetry to those responsible for charge-2/3 quark masses . in addition , being neutral under su(3 ) su(2 ) u(1 ) they , unlike the other fermions of the standard model , can have a majorana - type self - mass without breaking these low - energy symmetries . we might expect the self - mass to arise where it is first allowed , at the scale where so(10 ) breaks ( or its moral equivalent ) . masses of that magnitude remove these particles from the accessible spectrum , but they have an important indirect effect . in second - order perturbation theory the ordinary left - handed neutrinos , through their ordinary yukawa couplings , make virtual transitions to their right - handed relatives and back . this generates non - zero masses for the ordinary neutrinos that are much smaller than the masses of other leptons and quarks . the magnitudes predicted in this way are broadly consistent with the observed tiny masses . no more than order - of - magnitude success can be claimed , because many relevant details of the models are poorly determined . * unification tends to obliterate the distinction between quarks and leptons , and hence to open up the possibility of proton decay . heroic experiments to observe this process have so far come up empty , with limits on partial lifetimes approaching 10^34 years for some channels . it is very difficult to assure that these processes are sufficiently suppressed , unless the unification scale is very large . even the high scale indicated by running of couplings and neutrino masses is barely adequate . spinning it positively , experiments to search for proton decay remain a most important and promising probe into unification physics .
* similarly , it is difficult to avoid the idea that unification , brings in new connections among the different families .there are significant experimental constraints on strangeness - changing neutral currents , lepton number violation , and other exotic processes that must be suppressed , and this makes a high scale welcome .* axion physics requires a high scale of peccei - quinn symmetry breaking , in order to implement weakly coupled , invisible " axion models . * with the appearance of this large scale , unification of the strong and electroweak interactions with gravity becomes much more plausible .newton s constant has dimensions of mass , so it runs even classically . or , to put it another way , gravity responds to energy - momentum , so it gets stronger at large energy scales .nevertheless , because gravity starts out extremely feeble compared to other interactions on laboratory scales , it becomes roughly equipotent with them only at enormously high scales , comparable to the planck energy gev . by inverting this thought, we gain a deep insight into one of the main riddles about gravity : if gravity is a primary feature of nature , reflecting the basic structure of space - time , why does it ordinarily appear so feeble ?elsewhere , i have tracked the answer down to the fact that at the unification ( planck ) scale the strong coupling is about ! these considerations delineate a compelling research program , centered on gathering more evidence for , and information about , the unification of fundamental interactions .we need to find low - energy supersymmetry , and to look hard for proton decay and for axions .and we need to be alert to the possibility of direct information from extreme astrophysical objects and their relics .such objects include , of course , the early universe as a whole , but also perhaps contemporary cosmic defects ( strings , domain walls ) .they could leave their mark in microwave background anisotropies and polarization , in gravity waves , or as sources of unconventional and/or ultra - high energy cosmic rays .theoretical suggestions for enhancing the other two components of our standard model of fundamental physics are less well formed .i ll confine myself to a few brief observations .non - minimal coupling terms arise in the extension of supersymmetry to include gravity .such terms play an important role in many models of supersymmetry breaking .although it will require a lot of detective work to isolate and characterize such terms , they offer a unique and potentially rich source of information about the role of gravity in unification .the flavor / higgs sector of fundamental physics is its least satisfactory part . 
whether measured by the large number of independent parameters or by the small number of powerful ideas it contains , our theoretical description of this sector does not attain the same level as we have reached in the other sectors . this part really does deserve to be called a `` model '' rather than a `` theory '' . there are many opportunities for experiments to supply additional information . these include determining masses , weak mixing angles and phases for quarks ; the same for neutrinos ; searches for rare decays and allied processes ; looking for electric dipole moments ; and others . if low - energy supersymmetry is indeed discovered , there will be many additional masses and mixings to sort out . the big question for theorists is : what are we going to do with this information ? we need some good ideas that will relate these hard - won answers to truly fundamental questions . cosmology has been `` reduced '' to some general hypotheses and just four exogenous parameters . it is an amazing development . yet i think that most physicists will not , and should not , feel entirely satisfied with it . the parameters appearing in the cosmological model , unlike those in the comparable models of matter , do not describe the fundamental behavior of simple entities . rather they appear as summary descriptors of averaged properties of macroscopic ( very macroscopic ! ) agglomerations . they appear neither as key players in a varied repertoire of phenomena nor as essential elements in a beautiful mathematical theory . due to these shortcomings we are left wondering why just these parameters appear necessary to make a working description of existing observations , and uncertain whether we will need to include more as observations are refined . we would like to carry the analysis to another level , where the four working parameters will give way to different ones that are closer to fundamentals . there are many ideas for how an asymmetry between matter and antimatter , which after much mutual annihilation could boil down to the present baryon density , might be generated in the early universe . several of them seem capable of giving the observed value . unfortunately the answer generally depends on details of particle physics at energies that are unlikely to be accessible experimentally any time soon . so for a decision among them we may be reduced to waiting for a functioning theory of ( nearly ) everything . i am much more optimistic about the dark matter problem .
here we have the unusual situation that there are two good ideas , which according to william of occam ( of razor fame ) is one too many . the symmetry of the standard model can be enhanced , and some of its aesthetic shortcomings can be overcome , if we extend it to a larger theory . two proposed extensions , logically independent of one another , are particularly specific and compelling . one of these incorporates a symmetry suggested by roberto peccei and helen quinn . pq symmetry rounds out the logical structure of qcd , by removing the potential of qcd to support strong violation of time - reversal symmetry , which is not observed . this extension predicts the existence of a remarkable new kind of very light , feebly interacting particle : axions . the other incorporates supersymmetry , an extension of special relativity to include quantum space - time transformations . supersymmetry serves several important qualitative and quantitative purposes in modern thinking about unification , relieving difficulties with understanding why w bosons are as light as they are and why the couplings of the standard model take the values they do . in many implementations of supersymmetry the lightest supersymmetric particle , or lsp , interacts rather feebly with ordinary matter ( though much more strongly than do axions ) and is stable on cosmological time scales . the properties of these particles , axion or lsp , are just right for dark matter . moreover you can calculate how abundantly they would be produced in the big bang , and in both cases the prediction for the abundance is quite promising . there are vigorous , heroic experimental searches underway for dark matter in either of these forms . we will also get crucial information about supersymmetry from the large hadron collider ( lhc ) , starting in 2007 . i will be disappointed and surprised if we do not have a much more personalized portrait of the dark matter in hand a decade from now . it remains to say a few words about the remaining parameter , the density of dark energy . there are two problems with this : why is it so small ? why is it so big ? a great lesson of the standard model is that what we have been evolved to perceive as empty space is in fact a richly structured medium . it contains symmetry - breaking condensates associated with electroweak superconductivity and spontaneous chiral symmetry breaking in qcd , an effervescence of virtual particles , and probably much more . since gravity is sensitive to all forms of energy it really ought to see this stuff , even if we do not . a straightforward estimation suggests that empty space should weigh several orders of magnitude of orders of magnitude ( no misprint here ! ) more than it does . it `` should '' be much denser than a neutron star , for example . the expected energy of empty space acts like dark energy , with negative pressure , but there is much too much of it .
to me this discrepancy is the most mysterious fact in all of physical science , the fact with the greatest potential to rock the foundations . we are obviously missing some major insight here . given this situation , it is hard to know what to make of the ridiculously small amount of dark energy that presently dominates the universe ! the emerging possibility of forging links between fundamental physics and cosmology through models of inflation is good reason for excitement and optimism . several assumptions in the standard cosmological model , specifically uniformity , spatial flatness , and the scale - invariant , gaussian , adiabatic ( harrison - zeldovich ) spectrum , were originally suggested on grounds of simplicity , expediency , or esthetics . they can be supplanted with a single dynamical hypothesis : that very early in its history the universe underwent a period of superluminal expansion , or inflation . such a period could have occurred while a matter field that was coherently excited out of its ground state permeated the universe . possibilities of this kind are easy to imagine in models of fundamental physics . for example scalar fields are used to implement symmetry breaking even in the standard model , and such fields can easily fail to shed energy quickly enough to stay close to their ground state as the universe expands . inflation will occur if the approach to the ground state is slow enough . fluctuations will be generated because the relaxation process is not quite synchronized across the universe . inflation is a wonderfully attractive , logically compelling idea , but very basic challenges remain . can we be specific about the cause of inflation , grounding it in specific , well - founded , and preferably beautiful models of fundamental physics ? concretely , can we calculate the correct amplitude of fluctuations convincingly ? existing implementations actually have a problem here ; it takes some delicate adjustment to get the amplitude sufficiently small . more hopeful , perhaps , than the difficult business of extracting hard quantitative predictions from a broadly flexible idea , is to follow up on the essentially new and surprising possibilities it suggests . the violent restructuring of space - time attending inflation should generate detectable gravitational waves . these can be detected through their effect on the polarization of the microwave background . and the non - trivial dynamics of relaxation should generate some detectable deviation from a strictly scale - invariant spectrum of fluctuations . these are very well posed questions , begging for experimental answers . perhaps not quite so sharply posed , but still very promising , is the problem of the origin of the highest energy cosmic rays . it remains controversial whether there are so many events observed at energies above those at which protons or photons could travel cosmological distances that explaining their existence requires us to invoke new fundamental physics . however this plays out , we clearly have a lot to learn about the compositions of these events , their sources , and the acceleration mechanisms . the observed values of the ratios of the dark matter and dark energy densities to the baryon density are extremely peculiar from the point of view of fundamental physics , as currently understood . leading ideas from fundamental theory about the origin of dark matter and the origin of baryon number ascribe them to causes that are at best very remotely connected , and existing physical ideas about the dark energy , which are sketchy at best , do not connect it to either of the others .
yet the ratios are observed to be close to unity . and the fact that these ratios are close to unity is crucial to cosmic ecology ; the world would be a very different place if their values were grossly different from what they are . several physicists , among whom s. weinberg was one of the earliest and remains among the most serious and persistent , have been led to wonder whether it might be useful , or even necessary , to take a different approach , invoking anthropic reasoning . many physicists view such reasoning as a compromise or even a betrayal of the goal of understanding the world in rational , scientific terms . certainly , some adherents of the `` anthropic principle '' have overdone it . no such `` principle '' can substitute for deep principles like symmetry and locality , which support a vast wealth of practical and theoretical applications , or the algorithmic description of nature in general . but i believe there are specific , limited circumstances in which anthropic reasoning is manifestly appropriate and unavoidable . in fact , i will now sketch an existence proof . i will need to use a few properties of axions , which i should briefly recall . given its extensive symmetry and the tight structure of relativistic quantum field theory , the definition of qcd only requires , and only permits , a very restricted set of parameters . these consist of the coupling constant and the quark masses , which we have already discussed , and one more , the so - called theta parameter . physical results depend periodically upon theta , so that effectively it can take values between - pi and pi . we do not know the actual value of the theta parameter , but only an upper limit on its magnitude . values outside a very small range around zero are excluded by experimental results , principally the tight bound on the electric dipole moment of the neutron . the discrete symmetries p and t are violated unless theta = 0 ( mod pi ) . since there are p and t violating interactions in the world , the theta parameter can not be set to zero by any strict symmetry assumption . so understanding its smallness is a challenge . the effective value of theta will be affected by dynamics , and in particular by spontaneous symmetry breaking . peccei and quinn discovered that if one imposed a certain asymptotic symmetry , and if that symmetry were broken spontaneously , then an effective value theta = 0 would be obtained . weinberg and i explained that the approach could be understood as a relaxation process , whereby a very light field , corresponding quite directly to theta , settles into its minimum energy state . this is the axion field , and its quanta are called axions . the phenomenology of axions is essentially controlled by one parameter , f , which has dimensions of mass ; it is the scale at which peccei - quinn symmetry breaks . now let us consider the cosmological implications . peccei - quinn symmetry is unbroken at temperatures well above f . when this symmetry breaks the initial value of the phase , that is of theta , is random beyond the then - current horizon scale . one can analyze the fate of these fluctuations by solving the equations for a scalar field in an expanding universe . the main general results are as follows . there is an effective cosmic viscosity , which keeps the field frozen so long as the hubble parameter ( the logarithmic rate of change of the expansion factor ) exceeds the axion mass . in the opposite limit the field undergoes lightly damped oscillations , which result in an energy density that decays as the inverse cube of the expansion factor , which is to say , a comoving volume contains a fixed mass . the field can be regarded as a gas of nonrelativistic particles ( in a coherent state ) . there is some additional damping at intermediate stages . roughly speaking we may say that the axion field , or any scalar field in a classical regime , behaves as an effective cosmological term while its mass is small compared to the hubble parameter and as cold dark matter once its mass is large compared to it . inhomogeneous perturbations are frozen in while their length - scale exceeds the scale of the apparent horizon , then get damped .
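as an illustration of this freeze - out and subsequent dilution , the following toy integration of a linearized axion - like field in a radiation - dominated background ( a sketch only : the potential is linearized , the units and initial angle are arbitrary , and no attempt is made at a quantitative relic - density calculation ) shows the field frozen while the expansion rate dominates , then oscillating with an energy density that scales as the inverse cube of the expansion factor .
....
# toy relaxation of a homogeneous axion-like field in a radiation-
# dominated universe: theta'' + 3 H theta' + m^2 theta = 0, with
# H = 1/(2 t) and expansion factor a(t) = t**0.5.  the full potential
# is ~ m^2 (1 - cos theta); the linearized form already shows the
# freeze-out, the oscillations, and the rho ~ a^-3 dilution.
m = 1.0
theta, dtheta = 1.0, 0.0                 # initial misalignment angle, at rest
t, dt = 1e-3, 1e-4
checkpoints = [1.0, 3.0, 10.0, 30.0, 100.0]

while checkpoints:
    H = 1.0 / (2.0 * t)
    dtheta += (-3.0 * H * dtheta - m**2 * theta) * dt   # semi-implicit euler step
    theta  += dtheta * dt
    t      += dt
    if t >= checkpoints[0]:
        a, rho = t**0.5, 0.5 * dtheta**2 + 0.5 * m**2 * theta**2
        print(f"t={t:7.1f}  3H/m={3*H/m:5.2f}  theta={theta:+.3f}  rho*a^3={rho*a**3:.3e}")
        checkpoints.pop(0)
....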
if we ignore the possibility of inflation , then there is a unique result for the cosmic axion density , given the microscopic model . the criterion that the axion mass exceeds the hubble parameter is first satisfied at a temperature near the qcd scale . at this point the horizon volume contains many horizon volumes from the peccei - quinn epoch , but it still contains only a negligible amount of energy by contemporary cosmological standards . thus in comparing to current observations , it is appropriate to average over the starting amplitude statistically . if we do not fix the baryon - to - photon ratio , but instead demand spatial flatness , as inflation suggests we should , then for f larger than about 10^12 gev the baryon density we compute is smaller than what we observe . if inflation occurs before the peccei - quinn transition , this analysis remains valid . but if inflation occurs after the transition , things are quite different . for if inflation occurs after the transition , then the patches where theta is approximately homogeneous get magnified to enormous size . each one is far larger than the presently observable universe . the observable universe no longer contains a fair statistical sample of theta values , but some particular `` accidental '' value . of course there is still a larger structure , which martin rees calls the multiverse , over which the value of theta varies . now if f is larger than 10^12 gev , we could still be consistent with cosmological constraints on the axion density , so long as the local initial amplitude is sufficiently small . the actual value of theta , which controls a crucial regularity of the observable universe , is contingent in a very strong sense . in fact , it is different `` elsewhere '' . within this scenario , the anthropic principle is demonstrably correct and appropriate . regions having large values of theta , in which axions by far dominate baryons , seem likely to prove inhospitable for the development of complex structures . axions themselves are weakly interacting and essentially dissipationless , and they dilute the baryons , so that these too stay dispersed . in principle laboratory experiments could discover axions with f larger than 10^12 gev . if they did , we would have to conclude that the vast bulk of the multiverse was inhospitable to intelligent life . and we would be forced to appeal to the anthropic principle to understand the anomalously modest axion density in our universe . weinberg considered anthropic reasoning in connection with the density of dark energy . it would be entertaining to let both densities , and perhaps other parameters , float simultaneously , to see whether anthropic reasoning favors the observed values . of course , in the absence of a plausible microscopic setting , these are quite speculative exercises . we do not know , at present , which if any combinations of the basic parameters that appear in our description of nature vary independently over the multiverse . but to the extent anthropic reasoning succeeds , it might guide us toward some specific hypotheses about fundamental physics ( e.g.
, that axions provide the dark matter , that f is larger than about 10^12 gev , that the dark matter candidates suggested by supersymmetry are subdominant , or perhaps unstable on cosmological time scales ) . one last thought , inspired by these considerations . the essence of the peccei - quinn mechanism is to promote the phase of the quark mass matrix to an independent , dynamically variable field . could additional aspects of the quark and lepton mass matrices likewise be represented as dynamical fields ? in fact , this sort of set - up appears quite naturally in supersymmetric models , under the rubric of `` flat directions '' or `` moduli '' . under certain not entirely implausible conditions particles associated with these moduli fields could be accessible at future accelerators , specifically the lhc . if so , their study could shed new light on the family / higgs sector , where we need it badly . the way in which many of our most ambitious questions , arising from the perimeters of logically independent circles of ideas , overlap and link up is remarkable . it might be a sign that we are poised to break through to a new level of integration in our understanding of the physical world . of course , to achieve that we will need not only sharp ambitious questions , but also some convincing answers . there are many promising lines to pursue , as even this brief and very incomplete discussion has revealed .
this is a broad and in places unconventional overview of the strengths and shortcomings of our standard models of fundamental physics and of cosmology . the emphasis is on ideas that have accessible empirical consequences . it becomes clear that the frontiers of these subjects share much ground in common .
coloring has been used in wireless ad hoc and sensor networks to improve communication efficiency by scheduling medium access . indeed , only nodes that do not interfere can have the same color and are then allowed to transmit simultaneously . hence coloring can be used to schedule node activity . the expected benefits of coloring are threefold : 1 . at the bandwidth level , no bandwidth is lost in collisions , and overhearing and interference are reduced . moreover , the use of the same color by several nodes ensures spatial reuse of the bandwidth . 2 . at the energy level , no energy is wasted in collisions . furthermore , nodes can sleep to save energy without losing messages sent to them , thanks to the node activity schedule based on colors . 3 . at the delay level , end - to - end delays can be optimized by a smart coloring ensuring for instance that any child accesses the medium before its parent in the data gathering tree . concerning coloring algorithms , two types of coloring are distinguished : node coloring and link coloring . with link coloring , time slots are assigned per link . only the transmitter and the receiver are awake ; the other nodes can sleep . if the link is lightly loaded , its slot can be underused . moreover , broadcast communications are not easy : the source must send a copy to each neighbor . on the contrary , with node coloring , the slot is assigned to the transmitter , which can use it according to its needs : unicast and/or broadcast transmissions . hence the slot use is optimized by its owner . the number of hops considered in the coloring must be chosen to ensure that any two nodes that are strictly more than this number of hops apart can transmit simultaneously without interfering . it follows that this number depends on the type of communication that must be supported . for instance , broadcast transmissions require 2-hop coloring , whereas unicast transmission with immediate acknowledgement ( i.e. the receiver uses the timeslot of the sender to transmit its acknowledgement ) requires 3-hop coloring ; an illustrative example can be found in the references . this paper is organized as follows . first , we define the coloring problem in section [ problem ] and introduce definitions . we position our work with regard to the state of the art in section [ stateart ] . in section [ complexity ] , we prove that the -hop coloring decision problem is np - complete in both general and strategic modes , for any number of hops . that is why we propose serena , a heuristic to color network nodes . in section [ theoretical ] , we obtain theoretical results determining an optimal periodic color pattern for grid topologies with various transmission ranges . we compare them with serena results in section [ serena ] . finally , we conclude in section [ conclusion ] , pointing out future research directions . let the network topology be represented by an undirected graph whose vertices are the network nodes . now , we can define the sets of nodes and links of the transformed graph used in the reduction . * each added node is linked to the node of the next level associated with the same original node ( see links of type ) . * two added nodes are linked to each other if their corresponding nodes are linked in the initial graph ( see links of type ) . * finally , the added nodes are linked to the conjunction node , which was introduced to meet constraint [ c3 ] ( see links of type ) . this construction is polynomial in time . an example of the initial and transformed graphs is illustrated in figure [ 5hopgraph ] . second case : the number of hops is even ; see the example in figure [ 6hopgraph ] .
to build the transformed graph when the number of hops is even , constraints [ c1 ] , [ c2 ] and [ c3 ] are considered . however , as the number of links to introduce between nodes of the initial graph depends on the number of nodes to introduce between them , and thus on parity , the reduction is slightly modified . definition of the vertex set : in this case , we first define several copies of the vertex set of the initial graph , denoted with their associated bijective functions onto the original vertices , plus a node introduced to model the data gathering tree . definition of the edge set : to build the set of links , five types of links are introduced : * each node from the initial graph is linked to its associated node from the first copy ( see links of type in figure [ 6hopgraph ] ) . * links of the form { ( u_l , u_{l+1} ) such that u_l belongs to the l - th copy , u_{l+1} belongs to the ( l+1)-th copy , and f_l^{-1}(u_l) = f_{l+1}^{-1}(u_{l+1}) } : each node from one copy is linked to the node of the next copy associated with the same original node ( see links of type ) . * for each couple of linked nodes in the initial graph , we associate a new node and link it with the corresponding nodes of the last copy ( see links of type ) . * the nodes so introduced form a complete graph ( see links of type ) . * finally , all of these nodes are linked to one additional node ( see links of type ) . this construction is polynomial in time . the transformed graph of the initial graph depicted in figure [ 5hopgraph].a is illustrated in figure [ 6hopgraph ] . we now show that the -color -hop vertex coloring problem in both general and strategic modes , for any number of hops , has a solution if and only if the -color 1-hop vertex coloring problem has a solution . we define the following lemma : [ lemmah-1 ] all of the added nodes are mutually at most -hop neighbors . this holds by construction of the transformed graph . [ lemmag ] in a -hop coloring of the transformed graph , the number of colors taken by the added nodes is a fixed quantity that depends only on the parity of the number of hops , on the number of nodes and on the number of edges of the initial graph . indeed , from lemma [ lemmah-1 ] , all of the added nodes are mutually at most -hop neighbors ; hence no color can be reused among them in a -hop coloring , and by construction their number depends only on the parity of the number of hops and on the size of the initial graph . [ lemmav ] any color used for an added node by a -hop coloring of the transformed graph can not be used by any node of the initial graph . indeed , consider any node of the initial graph and any added node , and count the number of hops between them ; by construction and from lemma [ lemmah-1 ] , this number does not exceed the number of hops considered , hence the two nodes must use different colors in a -hop coloring . to complete the proof of theorem [ thcomplexity ] , we now prove the following lemma : the initial graph has a one - hop coloring with a given number of colors if and only if the transformed graph has a -hop coloring in general mode with that number of colors plus the fixed number of colors required by the added nodes . given a one - hop coloring of the initial graph , we want to show that there exists such a -hop coloring of the transformed graph . according to lemma [ lemmag ] , this -hop coloring will use the fixed number of colors for the added nodes , and from lemma [ lemmav ] colors used by the added nodes can not be reused in the initial graph . it follows that there exists a -hop coloring of the transformed graph with exactly the announced number of colors . now , let us assume that we have such a -hop coloring of the transformed graph , and we want to show that we can find a one - hop coloring of the initial graph with the corresponding number of colors . from lemma [ lemmag ] , the fixed number of colors is needed for the -hop coloring of the added nodes .
from lemma [ lemmav ] , colors used for the added nodes can not be reused in the initial graph . hence , the remaining colors are used to color the nodes of the initial graph . moreover , since any two nodes of the initial graph that are one - hop neighbors there are -hop neighbors in the transformed graph , by construction , we deduce that no two one - hop neighbors of the initial graph use the same color . hence , we can find a valid one - hop coloring of the initial graph with the corresponding number of colors . similarly , the initial graph has a one - hop coloring with a given number of colors if and only if the transformed graph has a -hop coloring in strategic mode with that number of colors plus the fixed number required by the added nodes . given a one - hop coloring of the initial graph , we want to show that there exists such a -hop coloring of the transformed graph in strategic mode , such that constraint [ c0 ] is met , as follows . we start by building a rooted tree from the transformed graph . nodes of the initial graph are the leaves of this tree ( see figure [ treegraph ] ) . in the case where the number of hops is odd , each added node has as parent the associated node of the level above , and the nodes of the top level have the root as parent . in the case where the number of hops is even , the root is the additional node introduced earlier , and again each added node has as parent the associated node of the level above . finally , we link the nodes of the initial graph to the tree : with each node introduced for a link of the initial graph , we associate as its child one node of the corresponding couple , chosen such that this node does not yet have a parent . to color the transformed graph , we start by coloring the root of the tree . then , we color nodes level by level , to finally reach the original nodes . from lemma [ lemmah-1 ] and lemma [ lemmav ] , nodes in each level do not reuse colors from lower levels . hence , each child has a color strictly higher than the color of its parent . figure [ treegraph ] depicts the tree built from the example graph , where only tree links are represented . because of the np - completeness of -hop coloring in both general and strategic modes , proved in section [ complexity ] , we focus on coloring algorithms based on heuristics . to compare different heuristics and select the best one , we use the optimal number of colors . the aim of this section is to determine this optimal number of colors for various topologies representative of wireless networks . we start with regular topologies such as grids that constitute large or dense wireless networks . in further work , we will extend our results to random topologies and study how to efficiently color a wireless network with a random dense topology by mapping it onto a grid . for space reasons , we only consider coloring in general mode . the goal of this section is to determine the optimal number of colors for the 3-hop coloring of grids with various transmission ranges . in all the grids considered , we assume a transmission range higher than or equal to the grid step in order to ensure radio connectivity . for simplicity , the transmission range is expressed as a function of the grid step , which is taken as the unit . moreover , we assume an ideal environment where any node is able to communicate via a symmetric link with any node whose euclidean distance from it does not exceed the transmission range . in this paper , we only study grid colorings that reproduce periodically a color pattern .
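to make the notions of periodic tiling and of the 3-hop constraint concrete , the following small self - check ( a hypothetical utility , not part of the paper ) tiles an illustrative linear 8-color assignment over a finite patch of the grid and verifies , by breadth - first search on the radio - connectivity graph , that no two nodes within 3 hops share a color ; the optimal basic patterns actually used in this paper are the ones given in the figures below .
....
import itertools
import math
from collections import deque

def linear_color(x, y, n_colors=8, a=1, b=3):
    """one periodic color assignment: c(x, y) = (a*x + b*y) mod n_colors."""
    return (a * x + b * y) % n_colors

def radio_neighbors(node, nodes, r):
    """nodes within euclidean distance r (grid step = 1) of `node`."""
    return [v for v in nodes if v != node and math.dist(v, node) <= r + 1e-9]

def is_valid_3hop(color, r, size=12):
    """true if, on a finite size x size patch, no two nodes at graph
    distance <= 3 (w.r.t. transmission range r) share a color."""
    nodes = list(itertools.product(range(size), range(size)))
    for src in nodes:
        dist, queue = {src: 0}, deque([src])
        while queue:                          # BFS truncated at depth 3
            u = queue.popleft()
            if dist[u] == 3:
                continue
            for v in radio_neighbors(u, nodes, r):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if any(1 <= d <= 3 and color(*v) == color(*src) for v, d in dist.items()):
            return False
    return True

print(is_valid_3hop(linear_color, r=1.0))     # 8 colors suffice for r = 1 -> True
....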
as a consequence ,the optimality of a coloring obtained is only true in the class of periodic colorings .we adopt the following notation and definitions : [ basicpatterndef ] a basic color pattern is the smallest color pattern that can be used to periodically tile the whole grid .[ optimalpatterndef ] a basic color pattern is said optimal if and only if it generates an optimal periodic coloring of the grid. we can now give properties that do not depend on the transmission range value .any color permutation of an optimal basic pattern is still valid and optimal . with the color permutation ,no two nodes that are 1-hop , 2-hop or 3-hop neighbors have the same color .hence , the permuted coloring obtained is still valid .the permutation keeps unchanged the number of colors .hence the coloring is still optimal .[ pcolgrid ] given an optimal color pattern of any grid and the color at node of coordinates ( 0,0 ) , we can build a 3-hop coloring of a grid topology based on this pattern such that the color of node is the given color .the coloring of the grid is obtained by setting the optimal color pattern in such a way that the color of node is the given color . the pattern is then reproduced to tile the whole topology . [ propsingleroundcoloring ] knowing an optimal color pattern of its grid and the color at node of coordinates , each node can locally determine its own color based on its coordinates .the 3-hop coloring obtained for the grid is optimal in terms of colors and rounds .the 3-hop coloring obtained for the grid only requires each node to know the color of node , its coordinates in the grid and the optimal pattern to apply .hence , it is optimal in terms of colors and rounds .we now determine the optimal periodic coloring of grids for various transmission ranges .the proofs of the following theorems can be found in . the optimal 3-hop coloring of a grid topology with a transmission range requires exactly 8 colors , as shown in figure [ neighborpatternfig]a .an optimal basic color pattern is given in figure [ neighborpatternfig]b . .... 4 5 8 3 4 7 2 6 4 7 2 6 4 5 8 3 1 5 8 3 3 1 5 8 2 6 4 7 2 5 8 3 2 a )b ) .... let be any non - border node of the grid .let denote the set of nodes that can not have the same color as .the proof is done in four steps : 1 .first step : at least 8 colors are needed to color node and .second step : we build a valid coloring of and with 8 colors , as depicted in figure [ neighborpatternfig]a .third step : this coloring can be regularly reproduced to constitute a valid coloring of the grid .fourth step : a basic color pattern containing exactly eight colors can be extracted ( see figure [ neighborpatternfig]b ) .an optimal coloring of a grid with a transmission range needs exactly 16 colors , as shown in figure [ color3hopr1.5fig]a .an optimal basic color pattern is given in figure [ color3hopr1.5fig]b . ....9 16 7 8 9 16 7 13 10 11 12 13 10 11 10 11 12 13 3 14 5 4 3 14 5 14 5 4 3 2 15 6 1 2 15 6 15 6 1 2 9 16 7 8 9 16 7 16 7 8 9 13 10 11 12 13 10 11 3 14 5 4 3 14 5 a ) b ) .... an optimal coloring of a grid with a transmission range needs exactly 25 colors , as shown in figure [ color3hopr2fig]a . an optimal basic color pattern is given in figure [ color3hopr2fig]b . 
....13 18 25 6 2 8 17 14 22 5 7 16 20 23 12 4 20 25 6 15 21 10 19 24 13 5 21 10 19 8 17 14 22 11 3 9 18 25 6 15 22 11 3 9 18 7 16 20 23 12 4 1 2 8 17 14 22 11 23 12 4 1 2 8 17 21 10 19 24 13 5 7 16 20 23 12 24 13 5 7 16 3 9 18 25 6 15 21 10 19 25 6 15 2 8 17 14 22 11 3 14 16 20 23 12 4 10 19 24 9 a ) b ) .... we now focus on wireless ad hoc and sensor networks , where the algorithm complexity must be kept small . because of the np - completeness of -hop coloring , heuristics are used to color network nodes . serena is a distributed 3-hop node coloring based on a heuristic : the nodes with the highest priority are colored first . the results in section [ theoretical ] giving the optimal number of colors allow us to compare different node priority assignments and to select the best one for serena : the one giving a number of colors close to the optimal . as previously said , 3-hop node coloring is necessary to support unicast transmissions with immediate acknowledgement in the case of general communications , where any node is likely to exchange information with any neighbor node . in serena , any node proceeds as follows to color itself : 1 . the node characterizes the set of nodes that can not have the same color as itself . this set depends on the type of : * _ communications supported _ : unicast and/or broadcast ; * _ application _ : general , where any node is likely to exchange information with any neighbor node , or on the contrary tree type , where a node exchanges information only with its parent and its children in the data gathering tree ; * _ acknowledgement for unicast transmissions _ : immediate or deferred . in our case , this set is the set of neighbors up to 3 hops . 2 . the node computes its priority . different priority assignments will be tested in the next subsection . 3 . the node applies the two following rules : * rule r1 : a node colors itself if and only if it has a priority strictly higher than that of any uncolored node in this set . * rule r2 : to color itself , a node takes the smallest color unused in this set . in serena , each node sends its color message to its 1-hop neighbors . this message contains the information related to the priority and color of the node itself and of its 1-hop and 2-hop neighbors .
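the following is a minimal , centralized simulation of rules r1 and r2 ( a sketch under simplifying assumptions : the distributed message exchange is abstracted away , rounds are synchronous , and the priority encoding is illustrative rather than the exact one used in serena ) .
....
import itertools
import math

def build_grid(size, r):
    """radio-connectivity graph of a size x size grid with transmission range r."""
    nodes = list(itertools.product(range(size), range(size)))
    adj = {u: [v for v in nodes if v != u and math.dist(u, v) <= r + 1e-9]
           for u in nodes}
    return nodes, adj

def khop_neighbors(adj, u, k=3):
    """all nodes within k hops of u (excluding u)."""
    frontier, seen = {u}, {u}
    for _ in range(k):
        frontier = {w for v in frontier for w in adj[v]} - seen
        seen |= frontier
    return seen - {u}

def serena_like(nodes, adj, priority):
    """synchronous greedy rounds: a node colors itself (rule r1) once every
    uncolored 3-hop neighbor has lower priority, taking the smallest color
    unused in its 3-hop neighborhood (rule r2)."""
    nbr3 = {u: khop_neighbors(adj, u) for u in nodes}
    color, rounds = {}, 0
    while len(color) < len(nodes):
        rounds += 1
        snapshot = dict(color)                          # state at round start
        for u in nodes:
            if u in color:
                continue
            if all(v in snapshot or priority(v) < priority(u) for v in nbr3[u]):
                used = {snapshot[v] for v in nbr3[u] if v in snapshot}
                color[u] = next(c for c in itertools.count(1) if c not in used)
    return color, rounds

nodes, adj = build_grid(10, r=1.0)
# illustrative priority: (number of nodes up to 2 hops, coordinates as tie-break)
prio = {u: (len(khop_neighbors(adj, u, 2)), u) for u in nodes}
colors, rounds = serena_like(nodes, adj, lambda u: prio[u])
print(f"{len(set(colors.values()))} colors in {rounds} rounds")
....
because the priorities form a total order , at least the highest - priority uncolored node colors itself in every round , so the procedure always terminates .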
the performance of serena is evaluated by the number of colors and the number of rounds needed to color all network nodes . the optimal number of colors is given for comparison . table [ tabserenagrid ] reports the results obtained by serena for different grids and different transmission ranges . the number of nodes ranges from 100 to 900 . the density is computed as the average number of neighbors per node ( i.e. the average number of nodes in radio range of a sender ) . it varies from 3 to 25 . the single - criterion priority assignment uses either the position of the node in the grid line or a random number ; these choices are justified by the node address assignment : addresses are assigned either consecutively according to grid lines or randomly . the two - criteria assignment uses first the number of nodes up to 2-hop and second either the position of the node in the grid line or a random number . notice that identical results have been obtained when considering the column instead of the line in the grid . results are averaged over 10 simulation runs . we can draw the following conclusions from the results obtained : * the number of colors strongly depends on the density of nodes and weakly on the number of nodes , while the number of rounds strongly depends on the number of nodes and weakly on the density . for instance , for a radio range of 1 , we get 58 rounds for a 10x10 grid with 100 nodes and 178 rounds for a 30x30 grid with 900 nodes . * the optimal number of colors is obtained by serena with the two - criteria priority assignment for transmission ranges 1 and 1.5 . * the coloring method that colors the whole grid in a single round by repeating the basic color pattern ( see property [ propsingleroundcoloring ] ) uses exactly the same number of colors whatever the grid size . however , in small topologies where the number of nodes is too small to enable the repetition of the basic color pattern in the four directions , serena is able to obtain a number of colors smaller than this method . this is explained by the fact that a color can be reused earlier : a conflicting node does not exist . this phenomenon can be observed in the simulation results for a grid size of 10x10 . * the number of colors as well as the number of rounds depend on the priority assignment . another address assignment produces another coloring . the average number of colors reaches 15.4 for a 30x30 grid with the random priority assignment , while it is 8 when the priority is given by the position of the node in the line . why do we not have the same number of colors ? the reason is given by the node priority assignment . a pure random assignment gives the worst results , except in a few particular configurations described in the literature . the line assignment gives better results than the random one , suggesting that the regularity of the topology must be taken into account . the two - criteria assignment outperforms the single - criterion one in 12 cases out of 15 , both in terms of colors and rounds , suggesting that nodes from the center , having the highest number of constraints , must be colored first , as found previously for 1-hop coloring . such nodes will then enforce their colors on the border nodes . table [ tabserenagrid ] : number of colors and rounds obtained by serena for various grids and transmission ranges . in this paper we have proved that the -hop node coloring problem in both general and strategic modes is np - complete , for any number of hops . we have then focused on specific cases of large or dense wireless networks : grids with a radio range higher than the grid step . we have determined an optimal periodic 3-hop coloring of grids with various transmission ranges . we have then compared the results obtained by serena , a distributed 3-hop coloring algorithm , for different node priority assignments . the priority assignment equal to the number of neighbors up to 2-hop , where ties are broken by addresses assigned by grid line , gives the optimal number of colors for a transmission range of 1 or 1.5 . it also outperforms random priority assignments in 12 of the 15 cases tested . as further work , we will optimize serena to take these results into account . we will also study how to map a grid onto a given random topology and determine the best grid adapted to this topology .
, _ energy efficient routing and node activity scheduling in the ocari wireless sensor network _ , future internet journal , www.mdpi.com/journal/futureinternet , 2(3 ) , august 2010 .rhee , i. ; warrier , a. ; xu , l. , _ randomized dining philosophers to tdma scheduling in wireless sensor networks _ , technical report tr-2005 - 21 , dept of computer science , north carolina state university , april 2005 .lee , w. l. ; datta a. ; cardell - oliver , r. , flexitp : a flexible - schedule - based tdma protocol for fault - tolerant and energy - efficient wireless sensor networks , _ ieee transactions on parallel and distributed systems _ , vol .19 , 6 , june 2008 .fertin , g. ; godard , e. ; raspaud , a. , _ acyclic and k - distance coloring of the grid _ journal information processing letters , 87(1 ) , july 2003 .garey , m. ; johnson , d. , _ computers and intractability : a guide to theory of np - completeness _ , w.h .freeman , san francisco , california , 1979 .
coloring is used in wireless networks to improve communication efficiency , mainly in terms of bandwidth , energy and possibly end - to - end delays . in this paper , we define the -hop node coloring problem , with any positive integer , adapted to two types of applications in wireless networks . we specify both general mode for general applications and strategic mode for data gathering applications . we prove that the associated decision problem is np - complete . we then focus on grid topologies that constitute regular topologies for large or dense wireless networks . we consider various transmission ranges and identify a color pattern that can be reproduced to color the whole grid with the optimal number of colors . we obtain an optimal periodic coloring of the grid for the considered transmission range . we then present a 3-hop distributed coloring algorithm , called serena . through simulation results , we highlight the impact of node priority assignment on the number of colors obtained for any network and grids in particular . we then compare these optimal results on grids with those obtained by serena and identify directions to improve serena .
much can be learned about the behavior and evolution of an astronomical source through analysis of properly calibrated spectra .interpretation can lead to estimates of density and temperature conditions , the chemical composition , the dynamics , and the sources of energy that power the emitting object .such an interpretation requires modeling , with sufficiently high accuracy , the excitation and ionization balance of plasmas out of local thermodynamic equilibrium ( lte ) .but ultimately , the accuracy of the models depends on the quality of the atomic / molecular data employed . at present, atomic data exists for most spectral lines observed from the infrared to the x - rays .these data account for most processes leading to tens of thousands of transitions from all ionic stages of nearly all elements of the first five rows of the periodic table .however , this huge amount of data has been obtained primarily through theoretical calculations with only sparse checks against experimental measurements . despite many advances in spectral modeling , mostly in terms of increased completeness and improved quality of atomic / molecular parameters ,a generalized quantitative estimate of uncertainties of the resultant models has not been provided . in recent years , a few authors have presented methods based on the monte carlo numerical technique for propagating uncertainties through spectral models , e.g. .as these techniques are very inefficient , their general applicability to complex spectral modeling is very limited . finding a general and efficient method for estimating uncertainties in spectral models is important for two reasons .first , the accuracy of atomic / molecular data must be known before reliable conclusions can be provided on physically realistic comparisons between theoretical and observed spectra . at present researchers can only provide best fits to observed spectra without much understanding of the uncertainties impacting the results .second , homogeneously accurate atomic data for all transitions of a complex and/or very large atomic system , like for example systems with multiple metastable levels ( e.g. fe ii ) or models with hundreds of energy levels as those needed in uv and x - ray spectroscopy , can not be obtained . in such models ,error propagation analysis of the spectrum could discriminate between a few critically important atomic transitions and the very large numbers of less consequential transitions .conversely , detailed error analysis could direct further theoretical and/or experimental efforts to selectivelly obtain specific atomic measures that would significantly improve spectral models , rather than trying to determine all possible rates at once .this paper is organized as follows .section 2 presents the analytical solution to the uncertainties in level populations of a non - lte spectral model for assumed uncertainties in atomic parameters . in section 3we propose a mechanism to estimate the uncertainties in atomic / molecular data and we test this by the case of fe ii through extensive comparisons with observed spectra . 
in section 4we discuss the uncertainties in line emissivities and emission line ratio diagnostics .section 5 presents our conclusions .for the sake of clarity , the rest of the paper deals explicitly with the case of population balance by electron impact excitation followed by spontaneous radiative decay .it is also assumed that the plasma is optically thin .however , we note that our method can easily be extended to ionization balance computations , to additional excitation mechanisms such as continuum and bowen fluorescence , and to optically thick transitions .under steady - state balance the population , , of a level is given by [popeq ] where is the electron density , is the einstein spontaneous radiative rate from level to level and is the electron impact transition rate coefficient for transitions from level to level . here, we assume that the electron velocity distribution follows the maxwell - boltzmann function , thus and are both proportional to a symmetrical effective collision strength , , which is the source of uncertainty in the collisional transition rates . assuming that the spectral model is arranged in increasing level energy order whenever .we note that the second term in the denominator of the above equation is the inverse of the lifetime of level , i.e. , .this is important because lifetimes are generally dominated by a few strong transitions , which are much more accurately determined than the weak transitions .thus , carries smaller uncertainties than individual a - values .then , equation ( 1 ) can be written as where is the so - called critical density of level and is defined as .the uncertainty in the population of level , , can be computed as \\ + \sum_{j\ne i } \left({\partial n_i\over \partial a_{i , j}}\right)^2 ( \delta a_{i , j})^2 + \sum_{k\ne i } \left({\partial n_i\over \partial n_k}\right)^2(\delta n_k)^2 .\end{split}\]][leverr ] the first three terms on the right hand side of this equation represent direct propagation of uncertainties from atomic rates to or from level . the last term in the equationcorrelates the uncertainty in level with the uncertainties in the level populations of all other levels that contribute to it .then , \end{split}\]][leverr2 ] where this linear set of equations yields the uncertainties in the populations of all levels . before proceeding to solve these equationsit is worth pointing out some important properties : ( 1 ) uncertainties are obtained relative to the computed level populations regardless of the normalization adopted for these .this is important because while some spectral models compute population relative to the ground level other models solve for normalized populations such that is either 1 or the total ionic abundance .though , the equation above is generally applicable regardless of the normalization adopted .( 2 ) in the high density limit , , the right hand side of the equation goes to zero , thus the population uncertainties naturally go to zero as the populations approach the maxwell - boltzmann values ( lte conditions ) .( 3 ) by having an analytical expression for the propagation of uncertainties one can do a detailed analysis of the spectral model to identify the key pieces of atomic data that determine the quality of the model for any plasma conditions . 
( 4 ) the set of linear equations for the uncertainties needs to be solved only once for any set of conditions and the system is of the same size as that for the level populations .this is unlike monte carlo approaches that require solving population balance equations hundreds of times , which makes real - time computation of uncertanties impractical .the set of equations above can be readily solved by writing them as where , and the matrix and vector elements of and are given by the equation [ leverr2 ] .figure 1 shows the populations and population uncertainties for the first four excited levels of o iii as a function of the electron density at a temperature of k. for this computation we have assumed 5% uncertainties in the lifetimes , 10% uncertainties in individual a - values , and 20% uncertainties in the effective collision strengths .the levels considered here are , , and .it is seen that levels 2 through 5 have maximum uncertainties , % , in the low density limit where the populations are determined by collisional excitations from the ground level . as the electron density increases thermalization of levels with similar energies and radiative cascades start becoming more important , which diminishes the contribution of uncertainties in collision strengths and enhances the importance of uncertainties in a - values . for high densitiesall population uncertainties naturally go to zero as the populations approach the boltzmann limit .another thing to notice is that , the population uncertainties exhibit multiple contributions and peaks as the metastable levels and become populated and the uncertainties in these propagate through higher levels .figure 2 shows the populations , relative to the ground level , and population uncertainties for the first eight excited levels of fe ii as a function of the electron density at a temperature of k. for these calculations we use atomic data as in and assume uncertainties of 5% in the lifetimes , 10% in individual a - values , and 20% in the effective collision strengths .the levels considered here are and .an interesting characteristic of the fe ii system is that the excited level is more populated , at least according to the atomic data adopted here , than the ground level at densities around , typical of h ii regions . moreover , under these conditions only % of the total fe ii abundance is in the ground level .this means that unlike lighter species , where excitation is dominated by the ground level or the ground multiplet , in fe ii all metastable levels are strongly coupled and uncertainties in atomic data are expected to propagate in a highly non - linear fashion . 
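to make the propagation above concrete , the sketch below balances collisional ( de)excitation against spontaneous decay for an invented three-level ion and pushes assumed 20% collision-strength and 10% a-value uncertainties through to the level populations . the maxwellian rate formulas ( the 8.63e-6 / sqrt(t) factor and the boltzmann factor for excitation ) are the standard ones implied by the symmetric effective collision strengths mentioned earlier , but the level energies , weights and rates are placeholders , and the sensitivities are obtained by finite differences summed in quadrature , a brute-force stand-in for the analytic linear system rather than the implementation used here .

```python
import numpy as np

kb_ev = 8.617e-5   # boltzmann constant in ev / k

def collision_rates(t, ups, e, g):
    """maxwellian excitation / de-excitation rate coefficients (cm^3 s^-1) from
    a symmetric effective collision strength matrix ups (lower triangle used)."""
    n = len(e)
    q = np.zeros((n, n))
    alpha = 8.63e-6 / np.sqrt(t)
    for i in range(n):
        for j in range(i):
            de = e[i] - e[j]
            q[i, j] = alpha * ups[i, j] / g[i]                               # i -> j (down)
            q[j, i] = alpha * ups[i, j] / g[j] * np.exp(-de / (kb_ev * t))   # j -> i (up)
    return q

def populations(ne, q, a):
    """steady-state populations: collisional (de)excitation balanced by
    spontaneous decay, normalised so that the populations sum to one."""
    n = q.shape[0]
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                m[i, j] = ne * q[j, i] + a[j, i]         # rate into i from j
        m[i, i] = -(ne * q[i].sum() + a[i].sum())        # total rate out of i
    m[0, :] = 1.0                                        # normalisation row
    b = np.zeros(n); b[0] = 1.0
    return np.linalg.solve(m, b)

def population_uncertainties(ne, t, ups, a, e, g,
                             rel_ups=0.20, rel_a=0.10, eps=1e-4):
    """quadrature propagation of relative uncertainties in collision strengths
    and a-values, with sensitivities taken by finite differences."""
    n0 = populations(ne, collision_rates(t, ups, e, g), a)
    var = np.zeros_like(n0)
    n = len(e)
    params = [('ups', i, j, rel_ups) for i in range(n) for j in range(i)]
    params += [('a', i, j, rel_a) for i in range(n) for j in range(i) if a[i, j] > 0]
    for kind, i, j, rel in params:
        up, ap = ups.copy(), a.copy()
        (up if kind == 'ups' else ap)[i, j] *= 1.0 + eps
        npert = populations(ne, collision_rates(t, up, e, g), ap)
        var += ((npert - n0) / eps * rel) ** 2           # (p dn/dp)^2 (dp/p)^2
    return n0, np.sqrt(var)

# invented three-level ion: energies (ev), weights, collision strengths and
# a-values are placeholders, not data for any real species
e = np.array([0.0, 1.0, 2.5])
g = np.array([1.0, 3.0, 5.0])
ups = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.6, 1.8, 0.0]])
a = np.array([[0.0, 0.0, 0.0], [1e-2, 0.0, 0.0], [2e-1, 5e-3, 0.0]])
for ne in (1e2, 1e5, 1e8):
    n, dn = population_uncertainties(ne, 1e4, ups, a, e, g)
    print(f"ne={ne:8.0e}  n={np.round(n, 4)}  dn/n={np.round(dn / n, 3)}")
```

because a single collision strength controls both the upward and downward rate of each pair , the printed relative uncertainties shrink as the density grows and the populations approach the boltzmann limit , which mirrors the behaviour noted above for o iii and fe ii .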
in figure 3we present the population errors for the lowest 52 levels of fe ii at k and .these are all even parity metastable levels , except for the ground level .the figure shows the total estimated uncertainties together with the direct contributions from uncertainties in the collision strengths and a - values ( first and second terms on the right hand side of equation [ leverr ] ) and the contribution from level uncertainty coupling .it is observed that the collision strengths are the dominant source of uncertainty for all levels except level 6 ( ) .for this level the uncertainty is dominated by the a - values and uncertainty couplings with levels of its own multiplet and levels of the ground multiplet .this is important because we find that the level makes the largest contribution to the uncertainties in 36 of the lowest 52 levels of fe ii .unfortunately , the atomic data for the level are among the most uncertain parameters of the whole fe ii system , as we discuss in the next section .in the previous section we adopted general uncertainties for lifetimes , a - values for forbidden transitions , and effective collision strengths of 5% , 10% , and 20% , respectively . in absence ofgenerally accepted procedures to estimate uncertainties in theoretical atomic data , these kind of numbers are often cited in the literature as general guidelines ; however , uncertainty estimates on specific rates are rarely provided .in we proposed that uncertainties in gf - values could be estimated from the statistical dispersion among the results of multiple calculations with different methods and by different authors .the uncertainties can be refined by comparing with experimental or spectroscopic data whenever available , although these also have significant associated uncertainties .this approach is similar to what has been done for many years by the atomic spectroscopy data center at the nation institute of standards and technology ( nist ; http://www.nist.gov/pml/data/asd.cfm ) in providing a critical compilation of atomic data . in estimating uncertainties from the dispersion of multiple results one must keep in mind some caveats : ( a ) small scatter among ratesis obtained when the computations converge to a certain value , yet such a convergence is dependant on the maximum size of the quantum mechanical representation treatable at the time of the computation .thus , there is no guarantee that every seemingly converged result is indeed correct , as some values may result from local minima in the parameter space .( b ) large scatter among different calculations is expected in atomic rates where configuration interaction and level mixing lead to cancellation effects .the magnitude of these effects depends on the wave - function representation adopted .thus , some computations maybe a lot more accurate than others for certain transitions and if we knew which computation is the most accurate , then the scatter among all different computations may overestimate the true uncertainty .however , detailed information about configuration and level mixing for every transition is rarely available in the literature .nevertheless , in absense of complete information about every transition rate from every calculation , a critical comparison between the results of different calculations and other sources of data , if available , provides a reasonable estimate of the uncertainty in atomic / molecular rates .are the statistical dispersion values realistic uncertainty estimates ? 
to answer this question we look at the intensity ratios between emission lines from the same upper level as obtained from observed astronomical spectra and theoretical predictions .the advantage of looking at these ratios is that they depend only on the a - values , regardless of the physical conditions of the plasma .thus , the ratios ought to be the same in any spectra of any source , provided that the spectra have been corrected for extinction .fe ii yields the richest spectrum of all astronomically abundant chemical species .thus , high resolution optical and near - ir [ fe ii ] lines are the best suited for the present experiment .one hundred thirty seven [ fe ii ] lines are found in the hst / stis archived spectra of the weigelt blobs of carinae .six medium dispersion spectra ( =6000 to 10,000 ) of the blobs were recorded between 1998 and 2004 at various orbital phases of the star s 5.5-year cycle .seventy eight [ fe ii ] lines are also present in the deep echelle spectrum ( =30 000 ) of the herbig - haro object ( hh 202 ) in the orion nebula from .the importance of having multiple spectra from different sources and different instruments must not be overlooked .multiple measurements of the same line ratio minimize the likelihood of systematic errors due to unidentified blends , contamination from stellar emission , and instrumental effects .from the observations , there are 107 line ratios reasonably well measured from the spectra .the ratios are defined as where and are the measured fluxes of two lines from the same upper level . here , it is important that the minimum of the two fluxes is put in the denominator for the ratio .thus , the line ratios are unconstrained and they are all equally weighted when comparing with theoretical expectations .figure [ measure ] illustrates a few line ratio determinations from several measurements from spectra of carinae and hh 202 , as well as from various theoretical determinations . in practice , we perform up to four measurements of every observation for different spectral extractions along the ccd and different assumptions about the continuum and the noise levels . thus , we see that the scatter between multiple measurements of a given ratio greatly exceed the statistical uncertainties in the line flux integrations .moreover , the scatter between measured line ratios often exceeds the scatter between theoretical predictions .full details about the fe ii spectra and measurement procedures will be presented in a forthcoming paper , where we will also present our recommended atomic data for fe ii . for the present work we consider seven different computations of a - values for fe ii .these are the superstructure and relativistic hartree - fock ( hfr ) calculations by , the recent civ3 calculation of , and various new hfr and autostructure calculations that extend over previous works .figure [ uncerfig ] presents a sample of theoretically calculated lifetimes and transition yields in fe ii .the yields are defined as . from the dispersion among various results ,the average uncertainty in lifetimes for all levels of the and configuration is 13% .more importantly , it is found that the uncertainty in the critically important level is , due to cancelation effects in the configuration interaction representation of the transition .we compared the observed line ratios described above with the predictions from different sets of theoretical a - values . 
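one way to organise such a comparison is sketched below : a-values from several independent calculations are averaged , their standard deviation is taken as the uncertainty estimate , the theoretical ratio of two lines from a common upper level is formed from the mean a-values ( in photon units this is simply their ratio , with the weaker line in the denominator so the ratio is at least one ) , and a reduced chi-square is accumulated against repeated measurements of the same ratio . every number in the sketch is a placeholder rather than the fe ii data analysed here .

```python
import numpy as np

# independent calculations (rows) of the a-values of two lines sharing an
# upper level (columns); all numbers are placeholders, not the fe ii data
a_calc = np.array([[1.10e-2, 4.0e-3],
                   [1.25e-2, 3.6e-3],
                   [1.05e-2, 4.3e-3],
                   [1.18e-2, 3.9e-3]])
a_mean = a_calc.mean(axis=0)
a_sig = a_calc.std(axis=0, ddof=1)        # dispersion used as the uncertainty

# in photon units the ratio of two lines from a common upper level is just the
# ratio of their a-values; the weaker line goes in the denominator so r >= 1
r_th = a_mean[0] / a_mean[1]
r_th_sig = r_th * np.hypot(a_sig[0] / a_mean[0], a_sig[1] / a_mean[1])

# repeated measurements of the same ratio from different spectra (placeholders)
r_obs = np.array([2.9, 3.2, 3.0])
r_obs_sig = np.array([0.30, 0.40, 0.25])

chi2 = np.sum((r_obs - r_th) ** 2 / (r_obs_sig ** 2 + r_th_sig ** 2))
print(f"theory {r_th:.2f} +/- {r_th_sig:.2f}, reduced chi-square {chi2 / r_obs.size:.2f}")
```

a reduced chi-square near unity , as reported below for the averaged a-values , indicates uncertainties that are neither under- nor over-estimated .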
without uncertaintyestimates for the theoretical values , the reduced- values from these comparison range from 2.2 to 3100 for the different sets of a - values . on other hand , if one adopts average a - values from all calculations and uncertainties from the resultant standard deviations the reduced- is 1.03 .this is indicative of well estimated uncertainties , neither underestimated nor overestimated , and within these uncertainties there is good agreement between theoretical and experimental line ratios .the comparison between observed and theoretical line ratios , including uncertainties , is presented in figure [ aratios ] .figure [ errordist2 ] shows the estimated lifetime uncertainties for the lowest 52 levels of fe ii .the figure also presents the level population uncertainties that results from the present uncertainties in lifetimes and transition yields for a plasma with k and . here ,the adopted uncertainties in the collision strengths are kept at 20% for all transitions . by far, the most uncertain lifetime is that of the important level ( ) , yet the way that this uncertainty propagates through level populations depends on the density of the plasma . for electron densitiesmuch lower than the critical density for the level the uncertainty in the lifetime reflects directly on the level population for that level .this is seen at for levels and higher . however , as the density increases the uncertainties in the level populations become incresingly dominated by the collision strengths .this effect is clearly illustrated in figure [ errorfe2b ] .the line emissivity , in units of photons per second , of a transition , with , is [emisseq ] in computing the uncertainty in one must to account for the fact that and are correlated , because the latter appears in the denominator term of equation ( 1 ) that determines .this is important because the most frequently observed lines from any upper level are usually those that dominate the total decay rate for the level , i.e. , the inverse of the level s lifetime .it is convenient to re - write the above equation as combining this equation with equation [ leverr ] one finds [emisser ] this equation can be readily evaluated from the level populations and uncertainties already known .the equation has various interesting properties : ( 1 ) the equation is independent of the physical units used for the emissivities ; ( 2 ) in the high density limit , as the uncertainty in the level population goes to zero , the uncertainty in the emissivity is the same as in the a - value .figure [ lineerror ] depicts uncertainties in emissivity for a sample of strong ir , near - ir , and optical [ fe ii ] lines .these are computed at k. the uncertainties in the collision strengths are 20% and the uncertainties in the lifetimes and a - values are those estimated in the previous section .the behavior of these uncertainties for different physical conditions is complex .let us look , for instance , at the uncertainty of emissivity of the 5.3 m line ( ; ) whose behavior is contrary to the uncertainty in the population of the level ( see figure [ errorfe2b ] ) . 
according to equations [ popeq ] and [ emisseq ] , in the low density limit in the case of the level the 5.3 m transition dominates the total decay rate of level and the ratio is essentially 1 .thus , the uncertainty in the rate cancels out at low electron densities and the uncertainty in the emissivity is small despite a large uncertainty in the level population .by contrast , at high densities the population of the level approaches the boltzmann limit and the uncertainty in the emissivity is solely given by that in , which is .a line emissivity ratio between two lines is given by where is the energy difference between levels and and we have used emissivties in units of energy per second . in computing the uncertainty in this line ratio one must account for the fact that the emissivities are correlated . moreover , a general expression for the uncertainty must account for cases where , in which case the uncertainty in the ratio would depend only on the a - values .the uncertainty is the ratio is given by ^ 2 \left({\delta j_{i , f}\over j_{i , f}}\right)^2 + \left[1-r\left({\partial j_{i , f}\over \partial j_{g , h}}\right)\right]^2 \left({\delta j_{g , h}\over j_{g , h}}\right)^2 , \end{split}\ ] ] where thus , ^ 2\left ( { \delta j_{i , f}\over j_{i , f}}\right)^2 + \\ \left[1-r\left({a_{i , f}\delta e_{i , f}\over a_{g , h}\delta e_{g , h}}{\partial n_i\over \partial n_g } + { a_{i , f}\delta_{i , f}\over \delta e_{g , h } n_g } { \partial n_i\over \partial a_{g , h}}\right)\right]^2\left ( { \delta j_{g , h}\over j_{g , h}}\right)^2 . \end{split}\ ] ] from equation [ popeq ] we find for , for , and otherwise . in the general case of a ratio involving several lines in the numerator and/or denominator , i.e. , the uncertainty is figure [ raterr ] shows a sample of line ratios between ir and optical lines and their uncertainties .the uncertainties exhibit complex behaviour with changes in density and temperatures .in general , line ratios are only useful as diagnostics when the observed ratio lies around middle range of the theoretical ratio .moreover , it is very important to know the uncertainties in the ratios when selecting appropriate diagnostics from a given spectrum .we presented a method to compute uncertainties in spectral models from uncertainties in atomic / molecular data .our method is very efficient and allows us to compute uncertainties in all level populations by solving a single algebraic equation .specifically , we treat the case of non - lte models where electron impact excitation is balanced by spontaneous radiative decay .however , the method can be extended to ionization balance and additional excitation mechanisms .our method is tested in o iii and fe ii models , first by assuming commonly assumed uncertainties and then by adopting uncertainties in lifetimes and a - values given by the dispersion between the results of multiple independent computations .moreover , we show that uncertainties taken this way are in practice very good estimates. 
then we derive analytic expressions for the uncertainties in line emissivities and line ratios . these equations take into account the correlations between level populations and line emissivities . interestingly , the behaviour of uncertainties in level populations and uncertainties in emissivities for transitions from the same upper levels are often different and even opposite . this is the case , in particular , for lines that result from transitions that dominate the total decay rate of the upper level . in such cases , the uncertainties in a-values for the transitions that yield the lines cancel out with the uncertainties in the lifetimes of the levels . in terms of emission line ratios , it is also found that knowledge of the uncertainties in the ratios is essential for selecting appropriate ratios for density and temperature diagnostics . at present , we are in the process of estimating uncertainties in atomic data for species of astronomical interest . our uncertainty estimates and analysis of the uncertainties in various spectral models , ionic abundance determinations , and diagnostic line ratios will be presented in future publications .
we present a method for computing uncertainties in spectral models , i.e. level populations , line emissivities , and emission line ratios , based upon the propagation of uncertainties originating from atomic data . we provide analytic expressions , in the form of linear sets of algebraic equations , for the coupled uncertainties among all levels . these equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data . we illustrate our method applied to spectral models of o iii and fe ii and discuss the impact of the uncertainties on atomic systems under different physical conditions . as to intrinsic uncertainties in theoretical atomic data , we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations . this technique provides excellent results for the uncertainties in a - values of forbidden transitions in [ fe ii ] .
wide field of view ( fisheye ) camera has received increasing attention over the past few years with its broad applications in surveillance , robotics , intelligent vehicles , immersive virtual environment construction , etc .for example , nissan motors developed a visual system that consists of four fisheye cameras mounted on the four sides of the vehicle .they together cover the entire 360 surrounding scene and allow drivers to examine all the visual blind spots that may cause danger . in surveillance ,ip fisheye camera has become extremely prevalent for its wide cover range and easy axcessibility .samsung provides a product with over 5 megpixel and 360 fov , which is equipped in an alarm system performing intelligent motion detection , audio detection , and tampering detection .the supporting de - warping software allows users to undistort any subregion in the captured image .recently , ricoh unveilled its first personal 360 fisheye camera ricoh theta .two fisheye cameras are embedded on both front and back sides , to capture the entire scene with one click .then the two captured images are stitched together to provide a dynamic 360 view with adjustable perspective controlled by the user . with this portable and handy device, our project aims to reconstruct the 3d scene using the captured spherical images . one important advantage of using this camera is that we are no longer required to set up multiple traditional cameras at different locations and directions to cover the entire scene . as a tradeoff , traditional camera model with perspective projectioncan not be directly applied since fisheye camera has large radial distortion , especially near the border . to establish a one - to - one mapping between the 180 scene and a circular image, we created a model based on spherical projection . based on this model, we can develop the epipolar geometry for fisheye cameras and solve the triangulation problem with least - square method .we used manually selected points to calculate the fundamental matrix , then applied it as a filter to prune the sift [ [ lowe ] ] matching result , at last augmented the point correspondences for reconstruction . on the other hand, we also tried dense reconstruction by first doing image rectification and then calculating the disparity map . in sec .[ relatedwork ] , we will briefly review previous work about 3d reconstruction with fisheye camera . then in sec . [ model ]we will jump into the details of our camera model , revised epipolar geometry and data augmentation .extension to multicamera registration and dense reconstruction will also be illustrated . in sec .[ experiment ] , we first show the reconstruction result using hand - picked points , then we show the sift augmented result . next , we give the disparity map and dense reconstruction result . finally , we will show a snapshot of our gui and provide the source code package for users to taste .perspective camera model is the most popular camera model in 3d reconstruction .however , it is limited for its narrow field of view . on the other hand , fisheye cameras which can capture spherical imageshave been paid more attention to during recent years .the major advantage is the wide fov and thus more information it can incorporate from the environment . 
shah and aggarwal [ [ shah ] ] presented an autonomous mobile robot navigation system in an indoor environment using two calibrated fisheye sensors .micusik et al .[ [ micusik ] ] proposed a 3d reconstruction of the surrounding scene with two or more uncalibrated fisheye images .li [ [ li ] ] drew 3d reconstruction by computing spherical disparity maps using binocular fisheye camera , which first calibrated the binocular camera to rectify the captured images and then used the correlation - based stereo to acquire the dense 3d representation of some simple environment .herrera et al .[ [ herrera ] ] and moreau et al .[ [ moreau ] ] placed the camera upwards and retrieved the environment information from the images .they computed disparity maps without image rectification step .* proposed the camera model and epipolar geometry for fisheye camera . *designed a method to estimate camera rotation and position from point correspondences in multiple images .* implemented sift feature extraction and matching algorithm through equirectangular - to - cube mapping .* proposed sparse & dense 3d reconstruction algorithm from multiple images . *developed a graphical user interface to interactively show multiple correlated 360 images .in this section , we go over the mathematical model behind this project .it mainly consists of four parts , the fisheye camera model , epipolar geometry , multicamera registration and image rectification for dense reconstruction .the fisheye camera model is based on spherical projection .suppose there is a sphere of radius and a point in space , as shown in fig .[ cammodel ] .first , is projected to which is the intersection of the sphere surface with the line defined by sphere center and point .this defines a mapping between spatial points to points on the sphere surface .then , these points are vertically projected onto the image plane as is projected to , which results in a circular image . in mathematical term , let ^t ] .the relation between and is where and .the vertical projection reduces the component to 0 , and we get ^t ] as the fundamental matrix for fisheye camera pair , we have constraint .now , we can use the eight - points algorithm or ransac to solve for . once we get the fundamental matrix, we can calculate the epipoles in the two cameras by solving , .recall that the definition of epipoles is , , which gives .then the rotation matrix can be derived as , + {\times } + [ v]_{\times}^2\frac{1-c}{s^2}\ ] ] where , , .here we assume the euclidean distance between and is 1 , i.e. .now , we can triangulate using parameters , , , and .we define the line passing through and as , where ; the line passing through and as , .the goal of triangulation is to find the minimal distance between the two lines. we can formulate this into a least square problem , + where the optimal solution is given by , = { \left ( { { a^t}a } \right)^ { - 1}}{a^t}{e_1},\quad a = \left [ { \begin{array}{*{20}{c } } { { z_{p,1 } } } & { - { r^ { - 1}}{z_{p,2 } } } \end{array } } \right]\ ] ] once we get the optimal parameter , . the minimal distance is known to be achieved between and , then can be assigned as the their middle points : in order to calculate , we must have enough point correspondences in multiple images . 
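a minimal sketch of the mid-point triangulation just described : each image point on the viewing sphere is treated as a unit ray from its camera centre , the two depths are found by least squares , and the reconstructed point is the mid-point of the closest approach of the two rays . the camera poses and the test point below are synthetic values chosen only to check the routine , and the rotation handling ( directions measured in the second camera frame are rotated back to world coordinates with the transpose ) is the usual convention rather than anything specific to this paper .

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """closest-approach mid-point of the rays c1 + s * d1 and c2 + t * d2,
    with the depths s, t found by least squares."""
    a = np.column_stack((d1, -d2))
    s, t = np.linalg.lstsq(a, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + s * d1, c2 + t * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# synthetic check: a known point seen by two cameras one unit apart
p_true = np.array([1.5, 0.8, 0.4])
c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
r2 = np.eye(3)                           # assumed rotation of the second camera
d1 = p_true - c1
d1 /= np.linalg.norm(d1)                 # spherical model: image point = unit ray
d2_cam = r2 @ (p_true - c2)
d2_cam /= np.linalg.norm(d2_cam)         # the ray as measured in camera 2's frame
p_est, gap = triangulate_midpoint(c1, d1, c2, r2.T @ d2_cam)
print(np.round(p_est, 3), round(gap, 6))
```

with exact , noise-free directions the routine recovers the test point and reports a closest-approach gap of zero ; with noisy sift matches the gap gives a cheap sanity check on each reconstructed point .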
in our methods , we manually selected around 45 pairs of corresponding points .we also attempted to automatically estimate by applying ransac with constraint on sift matching results , to find the best estimation of .however , sift matching is not invariant to radial distortion and the matching results have unacceptable outliers , thus the estimation of is not robust enough .instead , we estimated by using hand - picked points , and in turn use to filter the sift matches and extend our point pairs pool .next , we extend the discussion to multi - view scenario . from the section above we can obtain the fundamental matrix and epipoles for each pair of cameras , but we can no longer assume the euclidean distance between camera centers is 1 .now we want to estimate the rotation matrix and camera position for each camera. this can be done in a two - step process .first , we estimate the rotation for each camera .assume we have cameras , for each pair of camera and , we can calculate the epipoles and , which denotes on image and on image , respectively . here, we assume the cameras all lie on the same horizonal plane , which is a very good approximation of how we took pictures .therefore , the rotation of each camera can be represented by an angle .the relation between and rotation matrix is \ ] ] the epipole direction in world coordinate is the last line should be obvious as they both denote the direction of the line segment defined by and .the sign indicates the two - fold ambiguity in calculating from the fundamental matrix .now we need to minimize the objective function , which is a convex optimization problem , and we solve it by newton s method .next , we estimate the position of each camera . as we have the direction of each line segment , this is a triangulation problem . a naive way to solvethis problem is to choose two cameras , e.g. and , and set the euclidean distance between them to be 1 .then for each camera other than and , its position can be triangulated from the direction of and .we can repeat the procedure with different choice of baseline to check the consistency .we can also feed the result into another gradient descent program to adjust the camera positions using all directions obtained .now that we have recovered the rotation and translation of each camera , the object points can be triangulated in a similar way as described in sec .[ epigeo ] . in the multiview case ,we assign as the distance between object point and camera center , and minimize the mean squared distance among the points obtained from each camera image . now , we want to go one step further from sparse reconstruction to dense reconstruction . in order to achieve a dense reconstruction ,we need to rectify the image pairs so that their epipolar lines are horizontal and all corresponding points have the same vertical coordinate on the image . as we know , the epipolar lines in spherical images are circles which intersect with the epipoles . therefore ,if we rotate the camera reference such that the axis align with the epipole and the , axis are parallel , and map the sphere onto equirectangular image , then epipolar lines would be vertical lines in equirectangular images , as show in fig .[ rectpipe ] . by exchanging the horizontal and vertical coordinates , the image pairs will be rectified . from the rectified image pairswe can calculate a disparity map , which is the distance ( in pixels ) between corresponding points on the image pair . asthe images are equirectangular , is the angle by a constant . 
assume the corresponding points are and respectively , and , then , , where . from that we can calculate the 3d coordinates of point .in this section , we show the reconstruction results for a 2-cameras settings .the two raw images are shown in fig .[ fig:2view ] . in order to implement the eight - point algorithm to compute the fundamental matrix, we manually labeled ground truth point correspondences on circular images , shown in fig .[ fig : ptscorr ] .for each view , we labeled around 45 pairs of corresponding points , which are typically on the ceiling or on the walls , thus easy to recognize .there are also several points around the desk , such as the corner of the computer .we also tried to extract point correspondences using sift , then estimate automatically using ransac .however , due to the large amount of outliers , this approach is not robust enough .therefore , we proposed another pipeline use the ground truth to filter sift matching results , and add those correspondences into our correspondences pool to achieve a denser reconstruction result . in our project, we used the sift implementation in vlfeat toolbox [ [ vlfeat ] ] for extracting features and performing point matching .we applied point mathcing on both raw images and cubic images achieved by cube mapping [ [ greene ] ] .[ cubesift ] shows a rough matching result using cubic images . using the ground truth fundamental matrix , we calculated and .then , we used the triangulation method we proposed in sec .[ epigeo ] to recover points position in 3d space .the reconstruction result is shown in fig .[ fig:3drec ] .now , we show the reconstruction result using pictures captured at 6 different loacations , which is equivalent to 6 cameras .we manually select 12 corresponding points on each of the 6 images . for each pair of imagewe calculate the fundamental matrix and epipoles , .then , we calculated the rotation and position of each camera .the position of the cameras is shown in fig .[ campos ] .once the rotation and position of each camera is obtained , we can triangulate the corresponding points as well as rectify each pair of images . the 3d reconstruction for the 12 pointsis shown in fig .the result matches well with the ground truth .after rectification , the corresponding points are at the same longitude with each other .so after transforming the raw image into longitude - latitude image , we can use the traditional method to find the corresponding pairs in the images .the calculated disparity map is shown in fig . [rectified ] , together with the two rectified images .the brighter part means smaller disparity and the darker part indicates larger disparity . as we can see , the image have roughly presented the deapth information .while since the rectified images still have distortion , the disparity map may have noise .the reconstruction result is shown in fig .[ denserec ] .although the result looks a little messy , we can see the closet are reconstructed fairly well . a graphical user interface ( gui )is developed using the 6-view dataset .you can run ` demos.m ` to see the demonstration .[ gui ] gives a brief illustration of the 6 views obtained by user control .in this project we implemented 3d reconstruction algorithm for multiple spherical images .we obtained our data using ricoh theta fullview fisheye camera .we used both manually selected points and sift matching points to estimate fundamental matrix for each pair of images .then , we calculated epipoles , the rotation and the position of each camera . 
based on these information we implemented sparse 3d reconstruction , the result matches well with the ground truth .we also developed a user interface to enable users to interactively view multiple correlated 360 images .our project is an important step towards building virtual tour from large number of fullview images .there are two things we want to improve in the future .the first is to enhance the algorithm of generating disparity map .the second is the robustness of sift matching in various datasets .currently the performance of sift matching fluctuates between different image sets . in outdoor images ,sift matching performance tends to deteriorate , the reason could be that camera centers are too far apart thus image pairs differ too much , or that buildings tend to have repetitive features like arches , windows , etc .we could improve the image capturing behaviours and select more appropriate scenes to get a better performance .[ shah ] shah , shishir , and j. k. aggarwal .`` intrinsic parameter calibration procedure for a ( high - distortion ) fish - eye lens camera with distortion model and accuracy estimation . ''pattern recognition 29.11 ( 1996 ) : 1775 - 1788 .[ micusik ] micusik , branislav , and tomas pajdla .`` autocalibration & 3d reconstruction with non - central catadioptric cameras . '' computer vision and pattern recognition , 2004 .cvpr 2004 .proceedings of the 2004 ieee computer society conference on .1 . ieee , 2004 .[ moreau ] moreau , julien , sebastien ambellouis , and yassine ruichek .`` 3d reconstruction of urban environments based on fisheye stereovision . ''signal image technology and internet based systems ( sitis ) , 2012 eighth international conference on .ieee , 2012 .[ fujiki ] fujiki , jun , akihiko torii , and shotaro akaho .`` epipolar geometry via rectification of spherical images . ''computer vision / computer graphics collaboration techniques .springer berlin heidelberg , 2007 .461 - 471 .
in this report , we propose a 3d reconstruction method for the full-view fisheye camera . the camera we used is the ricoh theta , fig . [ ricoh ] , which captures spherical images and has a wide field of view ( fov ) . the conventional stereo approach based on the perspective camera model cannot be directly applied , so instead we use a spherical camera model to describe the relation between a 3d point and its corresponding observation in the image . we implemented a system that reconstructs the 3d scene from captures taken by two or more cameras . a gui was also created to allow users to control the viewing perspective and obtain a better intuition of how the scene is rebuilt . experiments show that our reconstruction results preserve the structure of the real-world scene well .
let be a distribution on , where is an arbitrary set equipped with a -algebra .the goal of quantile regression is to estimate the conditional quantile , that is , the set - valued function | x ) \geq \tau \mbox { and } \mathrm{p}([t,\infty ) | x)\geq 1-\tau\ } , \qquad x\in x , \ ] ] where is a fixed constant specifying the desired quantile level and , , is the regular conditional probability of . throughout this paper , we assume that has its support in ] for some .the uniform boundedness of the conditionals is , however , crucial . )let us additionally assume for a moment that consists of singletons , that is , there exists an , called the conditional -quantile function , such that for -almost all .( most of our main results do not require this assumption , but here , in the introduction , it makes the exposition more transparent . )then one approach to estimate the conditional -quantile function is based on the so - called _ -pinball loss _ , which is defined by with the help of this loss function we define the -risk of a function by recall that is up to -zero sets the _ only _ function satisfying , where the infimum is taken over _ all _ measurable functions .based on this observation , several estimators minimizing a ( modified ) empirical -risk were proposed ( see for a survey on both parametric and nonparametric methods ) for situations where is unknown , but i.i.d .samples drawn from are given .empirical methods estimating quantile functions with the help of the pinball loss typically obtain functions for which is close to with high probability . in general, however , this only implies that is close to in a very weak sense ( see , remark 3.18 ) but recently , , theorem 2.5 , established _ self - calibration inequalities _ of the form which hold under mild assumptions on described by the parameter ] . moreover , it is easy to check that the interior of is a -zero set , that is , . to avoid notational overload ,we usually omit the argument if the considered distribution is clearly determined from the context .[ distribut - type - q ] a distribution with ] and such that for all t^*_{\mathrm{min } } = t^*_{\mathrm{max}} ] , be a distribution with ] , then has a -quantile of type for all as simple integration shows . in this case , we set and .again , let be a distribution with ] that has a lebesgue density , and for some . if , for a fixed , there exist constants and such that , \\ h(y ) & \geq & b \bigl(y - t^*_{\mathrm{max}}(\mathrm{q } ) \bigr)^p , \qquad y\in [ t^*_{\mathrm{max}}(\mathrm{q}),1].\end{aligned}\ ] ] lebesgue - almost surely , then simple integration shows that has a -quantile of type and we may set and .let be a distribution with ] , and hence is a -quantile of type for all satisfying .let be a distribution with ] with . if ) = 0 ] and ) ) + \beta ] is the ) + \alpha ] , , and be a distribution on with ] defined , for -almost all , by where is defined in definition [ distribut - type - q ] , satisfies . to establish the announced self - calibration inequality, we finally need the distance between an element and an .moreover , denotes the function . with these preparations the self - calibration inequality reads as follows .[ main1 ] let be the -pinball loss , ] , we have let us briefly compare the self - calibration inequality above with the one established in . 
to this end , we can solely focus on the case , since this was the only case considered in .for the same reason , we can restrict our considerations to distributions that have a unique conditional -quantile for -almost all . then theorem [ main1 ] yields for .on the other hand , it was shown in , theorem 2.5 , that under the _ additional _ assumption that the conditional widths considered in definition [ distribut - type - q ] are _ independent _ of .consequently , our new self - calibration inequality is more general and , modulo the constant , also sharper .it is well known that self - calibration inequalities for lipschitz continuous losses lead to variance bounds , which in turn are important for the statistical analysis of erm approaches ; see . for the pinball loss ,we obtain the following variance bound .[ main2 ] let be the -pinball loss , ] , there exists an ] for -almost all . assume that there exists a function with for -almost all and constants and ] .moreover , let be a separable rkhs over with a bounded measurable kernel satisfying .in addition , assume that ( [ eigenvalues - assump ] ) is satisfied for some and .then there exists a constant depending only on , , and such that , for all , and , we have with probability not less than that let us now discuss the learning rates obtained from this oracle inequality . to this end , we assume in the following that there exist constants and ] implies ( [ a2 ] ) for .now assume that ( [ a2 ] ) holds .we further assume that is determined by , where then theorem [ oracle - general ] shows that converges to with rate ; see , lemma a.1.7 , for calculating the value of .note that this choice of yields the best learning rates from theorem [ oracle - general ] .unfortunately , however , this choice requires knowledge of the usually unknown parameters , and . to address this issue ,let us consider the following scheme that is close to approaches taken in practice ( see for a similar technique that has a fast implementation based on regularization paths ) .[ conc - bas : tv - svm ] let be an rkhs over and be a sequence of finite subsets ] such that the cardinality of grows polynomially in .furthermore , consider the situation of theorem [ oracle - general ] and assume that ( [ a2 ] ) is satisfied for some ]. then defined the _inner -risks _ by and the _ minimal inner -risk _ was denoted by .moreover , we write for the set of exact minimizers . our first goal is to compute the excess inner risks and the set of exact minimizers for the pinball loss . to this end recall that ( see , theorem 23.8 ) , given a distribution on and a measurable function we have with these preparations we can now show the following generalization of , proposition 3.9 . [loss : pin - ball - more ] let be the -pinball loss and be a distribution on with . then there exist ] , and , for all , we have moreover , if , then we have and . finally , equals the -quantile , that is , ] , and hence we obtain ) \leq \tau + \mathrm{q}(\{t^*_{\mathrm{max}}\}) ] satisfying and ) = \tau + q_+ . \ ] ] let us consider the distribution defined by for all measurable .then it is not hard to see that .moreover , we obviously have for all .let us now compute the inner risks of with respect to . 
to this end , we fix a .then we have and and hence we obtain moreover , using ( [ bauer - h1 ] ) we find and since ( [ loss : pin - ball - more - h1 ] ) implies ) = \tau + q_+ ] .moreover , if , the fact yields ) = \mathrm{q}(\{t^*_{\mathrm{min}}\ } ) + \mathrm{q}(\{t^*_{\mathrm{max}}\ } ) .\ ] ] using the earlier established and , we then find both and . to prove ( [ loss : pin - ball - more - a1 ] ) and ( [ loss : pin - ball - more - a2 ] ) , we first consider the case . then ( [ loss : pin - ball - more - a1-h1 ] ) and ( [ loss : pin - ball - more - a1-h2 ] ) yield , .this implies , and hence we conclude that ( [ loss : pin - ball - more - a1-h1 ] ) and ( [ loss : pin - ball - more - a1-h2 ] ) are equivalent to ( [ loss : pin - ball - more - a1 ] ) and ( [ loss : pin - ball - more - a2 ] ) , respectively .moreover , in the case , we have , which in turn implies ) = \tau ], we consequently find \\[-8pt ] & = & ( \tau-1)\int_{y < t^*_{\mathrm{max}}}y \ , \mathrm{d } \mathrm{q}(y ) + \tau \int_{y\geq t^*_{\mathrm{max}}}y \ , \mathrm{d } \mathrm{q}(y ) , \nonumber\end{aligned}\ ] ] where we used ) = \tau ] .analogously , we find for all , and hence we can , again , conclude for all .as in the case , the latter implies that ( [ loss : pin - ball - more - a1-h1 ] ) and ( [ loss : pin - ball - more - a1-h2 ] ) are equivalent to ( [ loss : pin - ball - more - a1 ] ) and ( [ loss : pin - ball - more - a2 ] ) , respectively . for the proof of ] .let us assume that ] , which in turn implies ) \geq \tau ] .let us define the _ self - calibration function _by note that if , for , we write , then we have , and hence the definition of the self - calibration function yields in other words , the self - calibration function measures how well an -approximate -risk minimizer approximates the set of exact -risk minimizers . [ lower - pol ] for ] defined by \varepsilon\in [ \alpha,2] ] , we have since and we easily see by the definition of that the assertion is true for ] defined by .\ ] ] it suffices to show that for all ] .now we obtain the assertion from this , ] that has a -quantile of type .moreover , let ] , we have since is convex , the map is convex , and thus it is decreasing on ] , we thus find for all . since this gives , we obtain let us first consider the case .for ] , ( [ loss : pin - ball - more - a1 ] ) and ( [ q - type-2 ] ) yield for ] by the definition of and . in the case and , ( [ loss : pin - ball - more - h1 ] ) yields ) - \tau \geq b_\mathrm{q} ] .finally , using ( [ loss : pin - ball - more - a2 ] ) instead of ( [ loss : pin - ball - more - a1 ] ) , we can analogously show for all ] .now the assertion follows from lemma [ lower - pol ] .proof of theorem [ main1 ] for fixed we write . 
by lemma [ self - cal - pinball - lower ] and ( [ m1 ] )we obtain , for -almost all , by taking the power on both sides , integrating and finally applying hlder s inequality , we then obtain the assertion .proof of theorem [ main2 ] let ] that satisfies both latexmath:[\[\begin{aligned } f^*_{\tau,\mathrm{p}}(x ) & \in & f^*_{\tau,\mathrm{p}}(x ) , \\ for -almost all .let us write .we first consider the case , that is , .using the lipschitz continuity of the pinball loss and theorem [ main1 ] we then obtain since , we thus obtain the assertion in this case .let us now consider the case .the lipschitz continuity of and theorem [ main1 ] yield since for we have , we again obtain the assertion .proof of theorem [ oracle - general ] as shown in , lemma 2.2 , ( [ eigenvalues - assump ] ) is equivalent to the entropy assumption ( [ en ] ) , which in turn implies ( see , theorem 2.1 , and , corollary 7.31 ) where denotes the empirical measure with respect to and is a constant only depending on .now the assertion follows from , theorem 7.23 , by considering the function that achieves .mendelson , s. ( 2001 ) .geometric methods in the analysis of glivenko cantelli classes . in _ proceedings of the 14th annual conference on computational learning theory _ ( d. helmbold and b. williamson , eds . ) 256272 .new york : springer .mendelson , s. ( 2001 ) .learning relatively small classes . in _ proceedings of the 14th annual conference on computational learning theory _( d. helmbold and b. williamson , eds . ) 273288 .new york : springer .steinwart , i. and christmann , a. ( 2008 ) . how svms can estimate quantiles and the median . in _ advances in neural information processing systems 20 _platt , d. koller , y. singer and s. roweis , eds . ) 305312 .cambridge , ma : mit press .steinwart , i. , hush , d. and scovel , c. ( 2009 ) .optimal rates for regularized least squares regression . in _ proceedings of the 22nd annual conference on learning theory _( s. dasgupta and a. klivans , eds . ) 7993 .available at http://www.cs.mcgill.ca/\textasciitilde colt2009/papers/038.pdf#page=1[http://www.cs.mcgill.ca/~colt2009/papers/038.pdf#page=1 ] .
the so - called pinball loss for estimating conditional quantiles is a well - known tool in both statistics and machine learning . so far , however , only little work has been done to quantify the efficiency of this tool for nonparametric approaches . we fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile . these inequalities , which hold under mild assumptions on the data - generating distribution , are then used to establish so - called variance bounds , which recently turned out to play an important role in the statistical analysis of ( regularized ) empirical risk minimization approaches . finally , we use both types of inequalities to establish an oracle inequality for support vector machines that use the pinball loss . the resulting learning rates are min max optimal under some standard regularity assumptions on the conditional quantile .
particle simulation techniques are now over 50 years old and have become a vital tool in exploring natural processes at all scales .molecular dynamics , granular dynamics , dissipative particle dynamics , and even smooth particle hydrodynamics algorithms are all fundamentally identical .they each attempt to solve classical equations of motion for a large number of particles . in such models ,conservative interactions between particles are typically defined through a pairwise additive inter - particle potential , where is the distance between the particles .the force acting on particle due to particle is given by where is the position of particle , and is the position of particle .there are two broad categories of inter - particle potentials : continuous and discrete . for continuous potentials ,the interaction energy is a continuous function of the particle positions .the lennard - jones potential is a classic example of a continuous potential : \end{aligned}\ ] ] where is the distance between the two particles , is the minimum interaction energy , and is the separation distance corresponding to zero interaction energy . in discontinuous ( also known as `` stepped '' or `` terraced '' ) potentials , the interaction potential changes only at discrete locations and a functional definition is difficult .an illustration of the two forms of the lennard - jones potential is given in fig .[ fig : ljstepped ] .continuous potentials are popular as the finite - difference algorithms used to simulate them are well - known and it is straightforward to implement physical scaling laws into the model potential .for example , the term in the lennard - jones potential was selected to match the known scaling of molecular dispersion forces .discontinuous potentials on the other hand are typically reported as a table of discontinuity locations and energies .although these two classes of potentials are distinct , it is clear that they may be made equivalent , provided a sufficient number of discontinuities or steps are used , as illustrated in fig .[ fig : ljstepped ] .the optimal number , location , and energetic change of the discontinuities for an accurate representation of a continuous potential is not well understood and is the subject of this paper .a comparison of the continuous lennard - jones potential ( solid ) and three stepped approximations created using eq . for step placement and eq .for the step energies . ] the motivation for this study is to understand the equivalence between the two approaches and to allow conversion between them .continuous potentials are prevalent in the simulation literature , beginning with the first simulations of lennard - jones systems by verlet in 1967 to the complex many - body potentials used for biological systems today .discrete potentials are equally as popular due to their amenability to theoretical analysis , and are at the heart of thermodynamic perturbation theory ( tpt ) and kinetic theory .however , there has not been the same explosion of molecular force fields and software tools as for continuous potentials ( e.g. 
, gromacs and espresso ) .this is even more surprising given that the very first particle simulations were carried out using a discrete potential , almost ten years before verlet s simulations .it is only relatively recently that fine - tuned discrete potentials for detailed , atomistic simulations have started to appear ; these include force fields for a broad range of compounds , including hydrocarbons and fluorocarbons , organic acids , esters , ketones and other organic compounds , phospholipids , and peptides and proteins .the use of tpt has even allowed rapid and direct fitting of discrete potentials to experimental data .in addition , standard simulation packages for event - driven molecular dynamics have also begun to appear .the strong theoretical frameworks and stable simulation algorithms make discrete potentials an attractive alternative to continuous potentials ; therefore , it is desirable to have a mechanism to map existing continuous potentials into discrete forms .this mapping must be optimized in the sense that it must use the smallest number of discontinuities to reduce the complexity of the converted potential and to minimize the computational cost of simulation .chapela et al . was the first to attempt to represent the continuous lennard - jones potential by an equivalent discrete form .this mapping was optimized `` by hand '' to reproduce the thermodynamic properties at one state point , but more recent work has focused on using regular step placement and algorithms to determine the step energies to partially automate the process .algorithms for directly specifying both the location and energy changes of discontinuities from underlying continuous potentials have also been presented allowing a convenient implementation of arbitrary potentials in event - driven dynamics ; however , the optimization of direct conversion is yet to be explored .recently , there has been an attempt to replace the soft interactions of continuous potentials entirely with collision dynamics at low densities but this approach is restricted to low density systems . in this work , the mapping of a continuous potential to a discrete form is investigated using the lennard - jones potential . in the following section , the placement of discontinuities and allocation of step energiesis discussed before the methods are evaluated in sec .[ sec : evalulation ] .the most efficient mapping scheme is then evaluated for a range of thermodynamic and transport properties in sec .[ sec : performance ] . a comparison between time - stepping and event - driven simulationis performed in sec .[ sec : compcost ] .finally , the conclusions of the paper are presented in sec .[ sec : conclusions ] .the primary aim of this work is to develop an algorithm to convert a continuous potential to an optimal discrete form : one that provides an accurate approximation of the original continuous potential and can be simulated at a minimal computational cost . 
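as a concrete reference point for the continuous side of this conversion , here is a minimal sketch of the 12 - 6 lennard - jones pair potential and force recalled above , in reduced units ; the function and parameter names are ours and the snippet is illustrative rather than taken from any of the packages just mentioned :

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_force_on_i(r_i, r_j, epsilon=1.0, sigma=1.0):
    """Force on particle i due to particle j: f_ij = -(du/dr) * (r_i - r_j) / |r_i - r_j|."""
    rij = np.asarray(r_i) - np.asarray(r_j)
    r = np.linalg.norm(rij)
    sr6 = (sigma / r) ** 6
    du_dr = 4.0 * epsilon * (-12.0 * sr6 * sr6 + 6.0 * sr6) / r
    return -du_dr * rij / r

print(lj_potential(np.array([0.95, 2 ** (1 / 6), 2.5])))   # repulsive wall, minimum (-eps), tail
print(lj_force_on_i([0.0, 0.0, 0.0], [1.5, 0.0, 0.0]))     # attractive pull toward particle j
```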
as the computational cost of an event - driven simulation is roughly proportional to the number of discontinuities encountered by the particles , it is vital that the number of discontinuities or steps used to achieve a set level of accuracy is minimized .the location of a single step in a spherically - symmetric discrete potential is specified by the segment ] .this allows the task of discretizing the potential to be split into two smaller tasks : the optimal placement of discontinuities and the determination of an effective step energy for a segment of the continuous potential .it is common to accelerate molecular dynamics calculations by truncating the interaction potential at a cut - off radius , thus requiring only local particle pairings to be considered in force calculations .typically in simulations of continuous potentials , the potential is also shifted to eliminate the discontinuity at the cutoff in order to avoid the presence of impulsive forces .for example , the truncated , shifted lennard - jones potential is given by as each step of the discontinuous potential represents a segment of the original continuous potential , the first discontinuity is defined to lie at the cutoff radius ( i.e. ) , while all other discontinuities lie within in the region .it is tempting to also define an inner hard - core radius of the stepped potential using one of the available methods ( e.g. , see ref . ) ; however , this would require each step energy to somehow compensate for the overly repulsive core , inextricably linking step placement and energy once again .the available methods for placing discontinuities are reviewed in the next section before the algorithms used to generate representative step energies are discussed .the simplest approach to place the discontinuities of a discrete potential is to divide the region into a number of steps of equal width . the total number of discontinuities / steps in the potential ( including the cutoff ) is given by .it is not immediately clear that a uniform radial placement of the steps is the natural choice for a spherical potential .an alternative choice is to fix the volume bounded by each step of the potential . in this case, each step location is determined using the following recursive expression the total number of discontinuities in the potential is then .the primary disadvantage of the approaches outlined above is that they do not attempt to adapt the step locations according to the behavior of the potential .it is likely that the performance of both algorithms is particularly sensitive to the configuration of the steps near the minimum of the potential where the interaction energy changes rapidly .it has also been proposed to discretize continuous potentials by placing discontinuities at fixed intervals of interaction energy .this approach allows a controlled resolution of the potential , while balancing the contribution of each step and allows a straightforward extension to asymmetric potentials .the locations of the discontinuities are the ordered solutions to the following set of equations the application of eq . to the shifted , truncated lennard - jones potential results in an infinite number of steps due to the singularity at .in practice , the high - energy steps are inaccessible and only a small number need to be computed during the simulation . before these approaches can be evaluated , a technique for determining the step energies must be selected .this is discussed in the following section . 
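the three placement rules described above can be made concrete with a short sketch . the exact expressions are not preserved in the text , so the equal - volume rule ( radii uniform in r cubed ) and the fixed - energy - interval rule below are our reading of the description , and the inner radius , cutoff and step counts are arbitrary illustrative choices :

```python
import numpy as np

def u_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def equal_width_steps(r_inner, r_cut, n_steps):
    """Discontinuities spaced uniformly in r between an inner radius and the cutoff."""
    return np.linspace(r_cut, r_inner, n_steps + 1)

def equal_volume_steps(r_inner, r_cut, n_steps):
    """Each pair of neighbouring discontinuities bounds the same spherical-shell volume,
    i.e. the radii are uniform in r**3 (our reading of the recursive expression)."""
    return np.linspace(r_cut ** 3, r_inner ** 3, n_steps + 1) ** (1.0 / 3.0)

def equal_energy_steps(r_inner, r_cut, delta_u, n_grid=200_000):
    """Place a discontinuity whenever the potential has changed by delta_u since the last
    one, scanning inward from the cutoff; the well is resolved more finely than the tail,
    and the r -> 0 singularity is avoided by stopping at r_inner."""
    r = np.linspace(r_cut, r_inner, n_grid)
    u = u_lj(r)
    steps, u_last = [r_cut], u[0]
    for ri, ui in zip(r[1:], u[1:]):
        if abs(ui - u_last) >= delta_u:
            steps.append(ri)
            u_last = ui
    return np.array(steps)

print(len(equal_width_steps(0.8, 3.0, 10)),
      len(equal_volume_steps(0.8, 3.0, 10)),
      len(equal_energy_steps(0.8, 3.0, delta_u=0.25)))
```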
with the location of each step defined through one of the above algorithms , an algorithm for determining the effective energy of a segment of the potential is required . in the limit of a large number of discontinuities / small segments , the original continuous potential must be recovered . chapela et al . have evaluated three approaches based on point sampling of the continuous potential , each of which assigns to a step an effective energy over its segment of the continuous potential .
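the definitions of those three point - sampling approaches are not preserved here , so the sketch below shows one simple illustrative choice , assigning each step the volume - weighted average of the continuous potential over its segment ; this is an assumption of ours and not necessarily one of the three approaches evaluated by chapela et al . :

```python
import numpy as np

def u_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def step_energies_volume_average(step_radii, n_samples=4_096):
    """Assign each step the volume-weighted mean of the continuous potential over its
    segment, i.e. the integral of u(r)*r**2 over the segment divided by the integral
    of r**2 (the 4*pi factors cancel in the ratio)."""
    radii = np.sort(np.asarray(step_radii))[::-1]      # outermost (cutoff) first
    energies = []
    for r_out, r_in in zip(radii[:-1], radii[1:]):
        r = np.linspace(r_in, r_out, n_samples)
        energies.append(np.average(u_lj(r), weights=r ** 2))
    return np.array(energies)

radii = np.linspace(3.0, 0.9, 12)                      # illustrative step locations
print(np.round(step_energies_volume_average(radii), 3))
```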
the optimal conversion of a continuous inter - particle potential to a discrete equivalent is considered here . existing and novel algorithms are evaluated to determine the best technique for creating accurate discrete forms using the minimum number of discontinuities . this allows the event - driven molecular dynamics technique to be efficiently applied to the wide range of continuous force models available in the literature , and facilitates a direct comparison of event - driven and time - driven molecular dynamics . the performance of the proposed conversion techniques is evaluated through application to the lennard - jones model . a surprising linear dependence of the computational cost on the number of discontinuities is found , allowing accuracy to be traded for speed in a controlled manner . excellent agreement is found for static and dynamic properties using a relatively low number of discontinuities . for the lennard - jones potential , the optimized discrete form outperforms the original continuous form at gas densities but is significantly slower at higher densities .
in the last few years , many researchers tried to combine first - order logic and probability for modeling uncertain domains and performing inference and learning . in the field of probabilistic logic programming ( plp for short ) many proposals have been presented . an effective and popular approach is the distribution semantics , which underlies many plp languages such as prism , independent choice logic , logic programs with annotated disjunctions and problog . along this line , many researchers proposed to combine probability theory with description logics ( dls for short ) . dls are at the basis of the web ontology language ( owl for short ) , a family of knowledge representation formalisms used for modeling information of the semantic web . in previous work we presented disponte , a probabilistic semantics for dls based on the distribution semantics that allows probabilistic assertional and terminological knowledge . in order to allow inference over the information in the semantic web , many efficient dl reasoners , such as pellet , racerpro and hermit , have been developed . despite the availability of many dl reasoners , the number of probabilistic reasoners is quite small . in previous work we presented bundle , a reasoner based on pellet that extends it by allowing inference on disponte theories . most of the available dl reasoners , including bundle , exploit procedural languages for implementing their reasoning algorithms . nonetheless , some of them use non - deterministic operators for doing inference . we implemented a reasoner , called trill , that exploits prolog for managing the non - determinism . then , we developed a new version of trill , called trill , and we added in both versions the ability to manage disponte knowledge bases ( kbs for short ) and to compute the probability of a query given a probabilistic kb under the disponte semantics . since a problem of probabilistic kbs is that the parameters are difficult to define , we presented edge , which learns the parameters of a disponte kb from the information available in the domain . moreover , we are currently working on the extension of edge in order to also learn the structure of the probabilistic kb together with the parameters . in the field of plp , we are working on improving existing algorithms . we have considered lifted inference , which allows inference to be performed in a time that is polynomial in the domain size of the variables . we applied lifted variable elimination , and gc - fove in particular , to plp and developed the algorithm lp . the paper is organised as follows . section [ dl ] briefly introduces description logics , while section [ disp ] presents the disponte semantics . section [ problem ] defines the problem of finding explanations for a probabilistic query w.r.t . a given probabilistic kb . section [ trill ] presents trill and trill , and section [ related ] discusses related work . section [ exp ] shows experiments and section [ issues - achi ] discusses our achievements and future plans . finally , section [ conc ] concludes the paper . dls are knowledge representation formalisms that are at the basis of the semantic web and are used for modeling ontologies . they are represented using a syntax based on concepts , basically sets of individuals of the domain , and roles , sets of pairs of individuals of the domain . in this section , we recall an expressive description logic ; we refer to the literature for a detailed description of the dl that is at the basis of owl dl . let three sets of _ atomic concepts _ , _ roles _ and _ individuals _ be given .
a _ role _ is an atomic role ._ concepts _ are defined by induction as follows .each , and are concepts .if , and are concepts and , then , , , , and are concepts .let , be concepts , and . is a finite set of _ concept membership axioms _ and _ role membership axioms _ , while a _ tbox _ is a finite set of _ concept inclusion axioms _ . abbreviates and . a _ knowledge base _ consists of a tbox and an abox .a kb is assigned a semantics in terms of set - theoretic interpretations , where is a non - empty _ domain _ and is the _ interpretation function _ that assigns an element in to each , a subset of to each and a subset of to each .a query over a kb is an axiom for which we want to test the entailment from the knowledge base , written .the entailment test may be reduced to checking the unsatisfiability of a concept in the knowledge base , i.e. , the emptiness of the concept .for example , the entailment of the axiom may be tested by checking the satisfiability of the concept .disponte applies the distribution semantics of probabilistic logic programming to dls .a program following this semantics defines a probability distribution over normal logic programs called _worlds_. then the distribution is extended to queries and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs . in disponte ,a _ probabilistic knowledge base _ is a set of _ certain axioms _ or _probabilistic axioms _ in which each axiom is independent evidence .certain axioms take the form of regular dl axioms while probabilistic axioms are where is a real number in ] that informally mean `` generally , if an object belongs to , then it belongs to with a probability in the interval $ ] '' .p-(d ) uses probabilistic lexicographic entailment from probabilistic default reasoning and allows both terminological and assertional probabilistic knowledge about instances of concepts and roles .p-(d ) is based on nilsson s probabilistic logic in which the probabilistic interpretation defines a probability distribution over the set of interpretations instead of a probability distribution over theories .the probability of a logical formula according to , denoted , is the sum of all such that and .we did several experiments in order to evaluate the performances of the algorithms we have implemented . herewe report a comparison between the performances of trill , trill and bundle when computing probability for queries .we used four different knowledge bases of various complexity : 1 ) brca models the risk factor of breast cancer ; 2 ) an extract of the dbpedia ontology obtained from wikipedia ; 3 ) biopax level 3 models metabolic pathways ; 4 ) vicodi contains information on european history . for the tests , we used a version of the dbpedia and biopax kbs without the abox , a version of the brca with an abox containing 1 individual and a version of vicodi with an abox containing 19 individuals .to each kb , we added 50 probabilistic axioms . for each datasets we randomly created 100 different queries . in particular , for the dbpedia and biopax datasets we created 100 subclass - of queries while for the other kbs we created 80 subclass - of and 20 instance - of queries . 
for generating the subclass - of queries , we randomly selected two classes that are connected in the hierarchy of classes contained in the ontology , so that each query had at least one explanation . for the instance - of queries , we randomly selected an individual and a class to which it belongs , by following the hierarchy of the classes starting from the class of which the individual is an instance in the kb . table [ table : res ] shows , for each ontology , the average number of different minas computed and the average time in seconds that trill , trill and bundle took for answering the queries . in particular , the brca and the version of dbpedia that we used contain a large number of subclass axioms between complex concepts . these preliminary tests show that the performance of both trill and trill can sometimes be better than that of bundle , even if they lack all the optimizations that bundle inherits from pellet . this represents evidence that a prolog implementation of a semantic web tableau reasoner is feasible and that it may lead to a practical system . moreover , trill presents an improvement of the execution time with respect to trill when more minas are present .

table [ table : res ] :
dataset | avg. n. minas | trill time ( s ) | trill time ( s ) | bundle time ( s )
brca | 6.49 | 27.87 | 4.74 | 6.96
dbpedia | 16.32 | 51.56 | 4.67 | 3.79
biopax level 3 | 3.92 | 0.12 | 0.12 | 1.85
vicodi | 1.02 | 0.19 | 0.19 | 1.12

our work aims at developing fast algorithms for performing inference under the probabilistic disponte semantics . section [ trill ] shows that trill and trill can compute the explanations for a query and its probability w.r.t . a probabilistic kb . for the future we plan to improve the performance of both algorithms . we are also studying the problem of lifted inference for probabilistic logic programming using lifted variable elimination . we are adapting the generalized counting first order variable elimination ( gc - fove ) algorithm to probabilistic logic programming under the distribution semantics . to this purpose , we are developing the system lp that extends gc - fove by introducing two new operators , _ heterogeneous sum _ and _ heterogeneous multiplication _ . this work will be presented at the iclp 2014 main conference . a second line of research is the problem of learning the parameters and the structure of a disponte kb .
along this line , in previous work we presented a learning algorithm , called edge , that learns the parameters by taking as input a dl theory and a number of examples that are usually concept assertions divided into positive and negative examples . edge first computes , for each example , the bdd encoding its explanations ; then it executes an _ expectation - maximization _ ( em ) algorithm , in which the expectation and maximization steps are repeatedly applied until the log - likelihood of the examples reaches a local maximum . moreover , we are working on extending edge in order to also learn the structure of a disponte kb together with the parameters by adapting the celoe algorithm . in this paper we presented two algorithms , trill and trill , written in prolog for reasoning on disponte kbs . the experiments show that prolog is a viable language for implementing dl reasoning algorithms and that the performance of the two presented algorithms is comparable with that of a state - of - the - art reasoner . this work was started by the artificial intelligence research group of the engineering department of the university of ferrara . we would like to personally thank our colleagues and friends ( in alphabetical order ) elena bellodi , evelina lamma and fabrizio riguzzi .
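as a closing illustration of the disponte semantics recalled earlier , in which each probabilistic axiom is included or excluded independently and the probability of a query is the sum of the probabilities of the worlds that entail it , here is a brute - force sketch ; the entailment test is a user - supplied placeholder , since a real implementation would call a dl reasoner and , like trill and bundle , work with explanations rather than enumerate all worlds :

```python
from itertools import product

def disponte_query_probability(prob_axioms, certain_axioms, entails, query):
    """Brute-force DISPONTE: each subset of the probabilistic axioms is a world, its
    probability is the product over included/excluded axioms (independence), and the
    query probability is the sum over the worlds that entail the query."""
    total = 0.0
    for choices in product([True, False], repeat=len(prob_axioms)):
        world, p_world = list(certain_axioms), 1.0
        for (axiom, p), keep in zip(prob_axioms, choices):
            p_world *= p if keep else (1.0 - p)
            if keep:
                world.append(axiom)
        if entails(world, query):
            total += p_world
    return total

# toy propositional stand-in for a DL entailment test:
# the query is entailed whenever both axioms "a" and "b" are in the world
toy_entails = lambda world, query: "a" in world and "b" in world
print(disponte_query_probability([("a", 0.4), ("b", 0.5)], [], toy_entails, "q"))  # 0.2
```

this enumeration is exponential in the number of probabilistic axioms ; systems such as trill and bundle instead compute the explanations ( minas ) of the query and evaluate their probability , e.g. via bdds , as described above for edge .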
the interest in the combination of probability with logics for modeling the world has rapidly increased in the last few years . one of the most effective approaches is the distribution semantics , which was adopted by many logic programming languages and in description logics . in this paper , we illustrate the work we have done in this research field by presenting a probabilistic semantics for description logics and reasoning and learning algorithms . in particular , we present in detail the system trill , which has been implemented in prolog and computes the probability of queries w.r.t . probabilistic knowledge bases . * note : * an extended abstract / full version of a paper accepted to be presented at the doctoral consortium of the 30th international conference on logic programming ( iclp 2014 ) , july 19 - 22 , vienna , austria . keywords : probabilistic description logics , probabilistic reasoning , tableau , prolog , semantic web .
ever since the establishment of classical control theory by several famous scientists , namely nyquist , bode , harris , evans , wiener , nichols et al . , at the beginning of the 20th century , most existing works have studied the mono - stability and mono - periodicity of dynamical systems . however , many physical and biological systems are known in which there is a multitude of coexisting attractors . examples include systems from laser physics , chemistry , semiconductor physics , neuroscience , and population dynamics ; see the references therein . due to their significant applications in neural networks ( nns ) with respect to associative memory , pattern recognition and decision making , the topics of multistability and multiperiodicity of nns began to gain a lot of attention and have been investigated intensively in the past decade . moreover , in a sense , multistability is a necessary property of nns in order to enable certain applications where mono - stable networks could be computationally restrictive . due to the cornucopia of opportunities and rich flexibility provided by chaotic systems , chaos , as a very special dynamical behavior in nonlinear systems , has been thoroughly investigated in recent years . topics include stabilization , anti - control and synchronization of chaotic systems . although there is a lot of research on chaos , multichaos , which means that the chaotic solution of a system lies in different disjoint invariant sets with respect to different initial values , has never been discussed or studied . multistability , multiperiodicity and multichaos are different properties of solutions in disjoint invariant sets ; the research on multichaos in systems with disjoint invariant sets has the same significance as the research on chaos in a mono - system . although multichaotic solutions should exist once multistability and multiperiodicity solutions have been found , nobody has given an exact example demonstrating the multichaos phenomenon thus far . in this paper , by construction of a multiple logistic map , we can observe the multichaos phenomenon , which helps us understand the more complicated cases in dynamical systems . _ definition 1 : _ let a compact subset of the state space be given . it is said to be a positive invariant set of an n - dimensional continuous system or discrete map if solution trajectories that enter it never leave it . _ definition 2 : _ if there exist two or more disjoint positive invariant sets for an n - dimensional continuous system or discrete map , and the solution in every disjoint set is chaotic , then this phenomenon is called multichaos and such systems are called multichaotic systems . _ remark 1 : _ a multichaotic system is different from the multi - wing chaotic systems discussed by simin yu et al . and xinzhi liu et al . , since a multi - wing chaotic system contains only one positive invariant set . with the advent of fast computers , the numerical investigations of chaos have increased considerably over the last three decades and by now , a lot is known about chaotic systems . one of the simplest and most transparent systems exhibiting an order - to - chaos transition is the logistic map .
in this subsection , we give a unified framework , constructed from a multiple logistic map , to demonstrate multistability , multiperiodicity and multichaos according to different values of the control parameter . the logistic map is defined by $x_{n+1} = r x_n ( 1 - x_n )$ , where $x_n$ is a number between zero and one and represents the ratio of the existing population to the maximum possible population at year $n$ ; $x_0$ hence represents the initial ratio of population to maximum population ( at year 0 ) , and $r$ is a positive number that represents a combined rate for reproduction and starvation . here , we assume $r$ is a number between zero and four . as is well known , eq . ( [ 1 ] ) will demonstrate stable fixed points , period-2 , period-4 , period-8 , and chaotic dynamics according to different $r$ . specifically , when $r$ is below 3 , there exists a stable fixed point ; when $r$ is between 3 and approximately 3.57 , there are periodic oscillations ; when $r$ is larger than approximately 3.57 , eq . ( [ 1 ] ) will undergo chaotic behavior . next , a multiple logistic map is defined by connecting translated copies of the logistic map and extending its domain infinitely , instead of only the invariant set $[ 0 , 1 ]$ . below is the definition : let $\mathbb{z}$ be the set of all integers ; for any $n \in \mathbb{z}$ , when $x \in [ n , n + 1 ]$ the map acts as a copy of the logistic map translated to that interval . the graph of the map is given in fig . [ functionf ] . [ fig . [ functionf ] : the multiple logistic map as its argument varies from -3 to 3 . ] for any $n$ , a solution beginning from an initial value in $[ n , n + 1 ]$ stays within $[ n , n + 1 ]$ . according to _ definition 1 _ , it is evident that for any $n$ , the set $[ n , n + 1 ]$ is a positive invariant set of the map . the proof is trivial and thus omitted here . it is also evident that there are infinitely many positive invariant sets because there are infinitely many integers . next , we demonstrate that multistability , multiperiodicity and multichaos can exist in the discrete map eq . [ 2 ] using simulation examples . in fig . [ ms]-[mc ] , four different initial values , which lie in the four positive invariant sets $[ -1 , 0 ]$ , $[ 0 , 1 ]$ , $[ 1 , 2 ]$ and $[ 2 , 3 ]$ respectively , are chosen for each value of $r$ in the four figures . the initial values are -0.4 , 0.5 , 1.4 and 2.4 . fig . [ ms ] demonstrates multistability when $r$ is in the fixed - point regime , with each solution orbit residing on a fixed point which lies in the corresponding positive invariant set . fig . [ mp2 ] and fig . [ mp4 ] show multiple period-2 and period-4 solutions for two different values of $r$ in the periodic regime , respectively . fig . [ mc ] presents the multichaos phenomenon when $r$ is in the chaotic regime . in the last section , we gave an example showing the existence of multichaos in a one - dimensional map . in this section , a continuous three - dimensional system that exhibits multichaotic behavior is constructed based on the lorenz system . by defining a sawtooth function , the phase space is divided into infinitely many squares . in each square , there is a positive invariant set in which a lorenz attractor is defined . the well - known lorenz system [ 31 ] is given by $\dot{x} = \sigma ( y - x )$ , $\dot{y} = x ( \rho - z ) - y$ , $\dot{z} = x y - \beta z$ . when $\sigma = 10$ , $\rho = 28$ and $\beta = 8/3$ , the system [ 31 ] exhibits chaotic behavior . according to ref . , the lorenz system has only one positive invariant set ; this set can be obtained from the bound derived in ref . , and when the initial value belongs to it , the trajectory of [ 31 ] will not get out , according to the definition of a positive invariant set . in order to generate lorenz - like multichaotic attractors , a sawtooth function and a novel multichaotic lorenz system are defined . the sawtooth function is defined below ; it is depicted in fig . [ gx ] .
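before continuing with the lorenz construction , here is a minimal numerical sketch of the multiple logistic map just described . the exact formula is not preserved in the text , so the reconstruction below , in which each interval [ n , n + 1 ] carries a translated copy of the logistic map , is our reading of the description ; the four initial values are the ones quoted in the text , while the four values of r are our own choices for the fixed - point , period-2 , period-4 and chaotic regimes :

```python
import numpy as np

def multiple_logistic(x, r):
    """Extended logistic map: each interval [n, n+1] is invariant and carries its own
    translated copy of the logistic map (our reconstruction of the map in the text)."""
    n = np.floor(x)
    y = x - n
    return n + r * y * (1.0 - y)

def orbit(x0, r, n_iter=400):
    xs = [x0]
    for _ in range(n_iter):
        xs.append(multiple_logistic(xs[-1], r))
    return np.array(xs)

# the four initial values quoted in the text, one per invariant set [-1,0], [0,1], [1,2], [2,3]
for r in (2.8, 3.2, 3.5, 4.0):      # fixed point, period-2, period-4, chaos (our r values)
    tails = [np.round(orbit(x0, r)[-4:], 3) for x0 in (-0.4, 0.5, 1.4, 2.4)]
    print(f"r = {r}:", [t.tolist() for t in tails])
```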
[ fig . [ gx ] : the sawtooth function as its argument varies from -400 to 400 . ] for simplicity , denote the sawtooth image of $x$ by $g ( x )$ ; the images of $y$ and $z$ are defined likewise . the multichaotic system based on the lorenz system is given as eq . [ 33 ] . when the initial value lies in the compact set , the solution of [ 33 ] will be exactly like that of [ 31 ] , thus exhibiting a typical double - wing strange attractor . the union of the translated cells is composed of infinitely many disjoint compact subsets . for an initial point starting from any of these disjoint sets , there is a translation that maps it into the base cell . under this change of variables , equation [ 33 ] with an initial point lying in one of the subsets is transformed into an equation of the same form as eq . [ 31 ] , with the initial point mapped into the base invariant set . it is easy to prove that any orbit starting from an initial point lying in one of these subsets will not get out of the positive invariant set determined by that subset . a simulation result is given to verify the coexistence of multichaotic attractors in fig . [ mlc ] . there are eight chaotic attractors in fig . [ mlc ] , each starting from a different positive invariant subset . in this paper , we give the definition of multichaos , which has never been defined before . by constructing a multiple logistic map , we show that the map can exhibit multistability , multiperiodicity and multichaos according to different values of $r$ . then the continuous multiple lorenz chaotic attractors are constructed using a sawtooth function . by calculating the positive invariant sets , we show that initial points starting in the positive invariant sets will not get out and will exhibit chaotic dynamical behavior in each disjoint positive invariant set . this work was supported in part by the national natural science foundation of china under grant 60973012 , 61073025 , 61073026 , 61074124 , 61170031 and the graduate student innovation fund of huazhong university of science and technology under grant 0109184979 .
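the explicit form of system [ 33 ] is not preserved in the extracted text . a natural reconstruction that is consistent with the description and with the translation argument above is to pass each state variable on the right - hand side of the lorenz equations through the sawtooth , so that every translated cell carries its own copy of the lorenz attractor ; the sketch below does exactly that , with a sawtooth period l = 200 and integration settings that are our own choices rather than the paper 's :

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA, L = 10.0, 28.0, 8.0 / 3.0, 200.0    # L: sawtooth period (our choice)

def saw(v):
    """Sawtooth mapping each cell [k*L - L/2, k*L + L/2) onto the base cell centred at 0."""
    return v - L * np.floor(v / L + 0.5)

def multi_lorenz(t, s):
    x, y, z = s
    gx, gy, gz = saw(x), saw(y), saw(z)
    return [SIGMA * (gy - gx), gx * (RHO - gz) - gy, gx * gy - BETA * gz]

# the same double-wing attractor traced in two different cells (x shifted by one period)
for offset in (0.0, L):
    sol = solve_ivp(multi_lorenz, (0.0, 40.0), [1.0 + offset, 1.0, 25.0], max_step=0.01)
    x = sol.y[0]
    print(f"x(0) offset by {offset:5.1f}: x(t) stays in [{x.min():8.2f}, {x.max():8.2f}]")
```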
in this paper , we present a unified framework of multiple attractors including multistability , multiperiodicity and multichaos . multichaos , which means that the chaotic solution of a system lies in different disjoint invariant sets with respect to different initial values , is a very interesting and important dynamical behavior , but to the best of our knowledge it has never been addressed before . by constructing a multiple logistic map , we show that multistability , multiperiodicity and multiple chaos can exist according to different values of the parameter . in the end , based on the derived compact invariant set of the lorenz system , multiple lorenz chaotic attractors are constructed using a sawtooth function .
the significant change in the character of team - size distribution is the key insight underlying the proposed model .previous studies have shown a marked increase in the _ mean _ team size in recent decades , not only in astronomy [ e.g , 2 , 22 ] , but in all scientific fields [ 5 ] . specifically , the average team size in astronomy grew from 1.5 in 1961 - 1965 to 6.7 in 2006 - 2010 ( marked by arrows in fig . 1 , which shows , on a log - log scale , team - size distributions in the field of astronomy in two time periods )however , figure 1 reveals even more : a recent distribution ( 2006 - 2010 ) is not just a scaled - up version of the 1961 - 1965 distribution shifted towards larger values ; it has a profoundly different shape .most notably , while in 1961 - 1965 the number of articles with more than five authors was falling precipitously , and no article featured more than eight authors , now there exists an extensive tail of large teams , extending to team sizes of several hundred authors .the tail closely follows the _ power - law _ distribution ( red line in fig . 1 ) .the power - law tail is seen in recent team - size distributions of other fields as well [ 23 ] .in contrast , the `` original '' 1961 - 1965 distribution did not feature a power - law tail . instead ,most team sizes were in the vicinity of the mean value .the shape of this original distribution can instead be described with a simple poisson distribution ( blue curve in fig . 1 ) , an observation made in some previous works [ 10 , 20 ] .note that the time when the distribution stopped being poisson would differ from field to field .( 1961 - 65 ) the data are binned in intervals of 0.1 decades , thus revealing the behavior far in the tail , where the frequency of articles of a given size is up to million times lower than in the peak .all distributions in this and subsequent figures are normalized to the 2006 - 2010 distribution in astronomy .error bars in this and subsequent figures correspond to one standard deviation .the full dataset consists of 154,221 articles published between 1961 and 2010 in four core astronomy journals ( listed in si ) , which publish the majority of research in this field [ 24 ] .details on data collection are given elsewhere [ 25].,scaledwidth=50.0% ] we interpret the fact that the distribution of team sizes in astronomy in the 1960s is well described as a stochastic variable drawn from a poisson distribution to mean that _ initially _ the production of a scientific paper used to be governed by a _poisson process _[ 26 , 27 ] .this is an intuitively sound explanation because many real - world phenomena involving low rates arise from a poisson process .examples include pathogen counts [ 28 ] , highway traffic statistics [ 29 ] , and even sports scores [ 30 ] .team assembly can be viewed as a low - rate event , because its realization involves few authors out of a very large possible pool of researchers .poisson rate ( ) can be interpreted as a characteristic number of authors that are necessary to carry out a study .the actual realization of the process will produce a range of team sizes , distributed according to a poisson distribution with the mean being this characteristic number .in contrast , the dynamics behind the power - law distribution that features in team sizes in recent times is fundamentally different from a simple poisson process , and instead suggests the operation of a process of _ cumulative advantage_. 
cumulative advantage , also known as the yule process , and as preferential attachment in the context of network science [ 31 , 32 ] , has been proposed as an explanation for the tails of collaborator and citation distributions [ 23 , 32 - 38 ] . unlike the poisson process, cumulative advantage is a dynamic process in which the properties of a system depend on its previous state .how did a distribution characterized by a poisson function evolve into one that follows a power law ? does this evolution imply a change in the mode of the team assembly ? does a poisson process still operate today ?figure 1 shows that for smaller team sizes ( ) the power law breaks down , forming instead a `` hook . ''this small- behavior must not be neglected because the great majority of articles ( 90% ) are still published in teams with fewer than ten authors .the hook , peaking at teams with two or three authors , may represent a vestige of what was solely the poisson distribution in the past .this simple assumption is challenged by the fact that no single poisson distribution can adequately fit the small- portion of the 2006 - 10 team - size distribution .namely , the high ratio of two - author papers to single - author papers in the 2006 - 10 distribution would require a poisson distribution with .such distribution produces a peak at , which is significantly offset compared to its actual position .evidently , the full picture involves some additional elements . in the following section we present a model that combines the aforementioned processes and provides answers to the questions raised in this section , demonstrating that knowledge production occurs in two principal modeswe next lay out a relatively simple model that incorporates principles of team formation and its evolution .we produce simulated team - size distributions based on the model and validate them by testing how well they `` predict '' empirical distributions in the field of astronomy .this model is universally applicable to other fields , as will be discussed later .the model consists of _ authors _ who write _ papers _ over time .each paper has a _ lead _author who is responsible for putting together a team and producing a paper .each lead author is associated with two types of teams : _ core _ and _ extended_. core teams consist of the lead author and coauthors .their size is drawn from a poisson distribution with some rate .if the drawing yields the number one , the core team consists of the lead author alone .we allow , the characteristic size of core teams , to grow with time .existing authors , when they publish again , retain their original core teams .the probability of publishing by an author who has published previously is 0.8 . 
unlike core teams ,extended teams evolve dynamically .initially , the extended team has the same members as the core team .however , the extended team is allowed to add new members in proportion to the aggregate productivity of its current members .new extended team members are randomly chosen from core teams of existing members , or from a general pool if no such candidates are available .the cumulative advantage principle that governs the growth of extended teams will mean that teams that initially happen to have more members in their core teams and/or whose members have published more frequently as lead authors , will accrete more new members than the initially smaller and/or less productive teams .this process allows some teams to grow very large , beyond the size that can be achieved with a poisson process .the process is gradual , so very large teams appear only when some time has passed .it is important that extended teams do not replace core teams ; they co - exist , and the lead author can choose to publish with one or the other at any time .this choice is presumably based on the type or complexity of a research problem . in simulationwe assume a fixed probability ( ) for an article to require an extended team .core and extended teams correspond to traditional and team - oriented modes of knowledge production , respectively . , corresponding to of chance match .all distributions are normalized to the 2006 - 2010 distribution.,scaledwidth=50.0% ] we also incorporate several additional elements to this basic outline that brings the model closer to reality .first , the empirical data indicate that in recent times there is an excess of two - author papers over single - author papers , especially from authors who have just started publishing .apparently , such authors tend not to publish alone , probably because they include their mentors as coauthors . to reproduce such behavior we posit in the model that some fraction of lead authors will form their core teams by adding an additional member to the number drawn from a poisson distribution .we call such teams `` core + 1 teams , '' as opposed to `` standard core teams . ''furthermore , we assume that repeat publications are more likely from authors who started publishing more recently .finally , we assume that certain authors retire and their teams are dissolved . however , the process of retirement is not essential to reproduce the empirical team - size distribution .the model is implemented through a simulation of 154,221 articles , each with a list of `` authors . ''the number of articles is set to match the empirical number of articles published within the field of astronomy in the period 1961 - 2010 .the sequence in which the articles are produced in the simulation allows us to match them to actual publication periods ( e.g. , articles with sequential numbers 51188 to 69973 correspond to articles published from 1991 to 1995 ) . 
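a stripped - down sketch of the two mechanisms described above , poisson core teams plus extended teams growing by cumulative advantage , is given below . it omits retirement , the ` core + 1 ' channel and the calibration to real publication years , and it uses the lead author 's own paper count as a crude proxy for the aggregate productivity of the extended team , so it illustrates the mechanics rather than reimplementing the authors ' simulation . the 0.8 repeat - publication probability is taken from the text ; the 0.3 extended - team propensity , the growth rate 0.1 and the ramp of the poisson rate are our own choices , since those values are not preserved here :

```python
import numpy as np

rng = np.random.default_rng(1)
N_ARTICLES = 50_000
P_REPEAT   = 0.80    # chance a paper's lead author has published before (from the text)
P_EXTENDED = 0.30    # propensity to use the extended team (our choice; value not preserved)

core_size, ext_size, n_papers = [], [], []     # per-lead-author state
team_sizes = []

def core_rate(i):
    """Characteristic core-team size, ramped up over 'time' (our own simple choice)."""
    return 0.5 + 3.0 * i / N_ARTICLES

for i in range(N_ARTICLES):
    if n_papers and rng.random() < P_REPEAT:
        lead = int(rng.integers(len(n_papers)))          # an existing lead publishes again
    else:
        lead = len(n_papers)                             # a new lead with a fresh core team
        size = max(1, int(rng.poisson(core_rate(i))))
        core_size.append(size); ext_size.append(size); n_papers.append(0)
    n_papers[lead] += 1
    # cumulative advantage: the extended team accretes members in proportion to output
    # (here crudely proxied by the lead author's own paper count)
    ext_size[lead] += int(rng.poisson(0.1 * n_papers[lead]))
    team_sizes.append(ext_size[lead] if rng.random() < P_EXTENDED else core_size[lead])

team_sizes = np.array(team_sizes)
for k in (1, 2, 5, 10, 20, 50):
    print(f"fraction of articles with team size >= {k:2d}: {(team_sizes >= k).mean():.4f}")
```

even this caricature produces a hook of small core - team articles together with a tail of much larger extended teams that grows as the simulation progresses .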
in figure 2we show a compelling match between the real data ( dots with error bars ) and the predictions of our model ( values connected by colored lines ) for three time periods ( 1961 - 65 , 1991 - 95 , and 2006 - 10 ) .the model correctly reproduces the emergence of the power - law tail and its subsequent increased prominence , as well as the change in the shape of the low- distribution ( the hook ) , and the shift of the peak from single - author papers to those with two or three authors .the strongest departure of the model from the empirical distribution is the bump in the far tail of the 2006 - 10 distribution ( around = 200 ) .we have identified this `` excess '' to be due to several papers that were published by a fermi collaboration [ 39 ] over a short period of time .note , however , that only 0.6% of all 2006 - 10 papers were published by teams with more than 100 authors .in addition to predicting the distribution of team sizes , the model also produces good predictions for other , author - centric distributions . figure s1 compares model and empirical distributions for article per author ( productivity ) , collaborator per author , and team per author distributions , as well as the trend in the size of the largest connected component .the latter correctly predicts that the giant component forms in the early 1970s .distributions and trends based on the implementation of guimera et al .team assembly principles [ 2 ] are also shown in figure s1 for comparison ( with team sizes supplanted from our model ) .they yield predictions of similar quality .collaborator distribution has been the focus of numerous studies [ 34 - 38 ] .here we follow the usual determination of collaborators based on co - authorship .in the limiting case in which each author appears on only one article ( which is true for the majority of authors over time periods of a few years ) , the collaborator distribution , , is related to team - size distribution as : , where is the team - size distribution .therefore , the power - law tail in the collaborator distributions , which has been traditionally explained in the network context as the manifestation of the _ preferential attachment _ in which authors with many collaborators ( `` star scientists '' [ 40 ] ) have a higher probability of acquiring new collaborators ( nodes that join the network ) may alternatively be interpreted as authors ( not necessarily of star " status ) belonging to extended teams that grow through the mechanism of cumulative advantage .interestingly , the model predicts the empirical distribution quite well ( figure 2 ) , even though we assumed that the propensity to publish with the extended team has remained constant over the 50-year period ( ) .this suggests a hypothesis that ( at least in astronomy ) there always existed a similar proportion of problems that would have required non - individualistic effort , but it took time for such an approach of conducting research to become conspicuous because of the gradual growth of extended teams .the model allows us to assess the relative contribution of different modes of authorship . in figure 3we separately show the distribution of articles produced by both types of core teams and the extended teams . 
by definition , `` core + 1 '' teams and extended teams start at two members , and therefore single - author papers can only be produced in a standard - core team mode . two - author teams are almost exclusively the result of core teams , with equal shares of standard and `` core + 1 '' teams . the contribution of `` core + 1 '' teams drops significantly for three or more authors , which is not surprising if such teams are expected to be primarily composed of student - mentor pairs . standard core teams dominate as the production mechanism in articles containing up to eight authors ; i.e. , they make up most of the hook . extended teams become the dominant mode of production of articles that include ten or more authors , thus they are responsible for the power - law tail of large teams . deriving the relative contribution of different types of teams as performed in the previous section and shown in figure 3 requires a model simulation and is therefore not practical as a means of interpreting empirical distributions . fortunately , we find ( by testing candidate functions using the maximum likelihood method ) that the distribution of the articles produced by each of the three types of teams can be approximated by the following functional form equivalents : standard core and `` core + 1 '' teams are well described by poisson functions , while the distribution of articles produced by extended teams is well described by a power - law function with a low - end exponential cutoff . therefore , the following analytical function ( eq . 1 ) can be fit to the empirical team - size distribution in order to obtain its decomposition . [ fig . 4 caption : the ks test for the astronomy fit yields a value corresponding to the quoted probability of a chance match . ] [ fig . 5 caption : ks test values for ecology , mathematics , social psychology , and arxiv ( 0.05 for arxiv ) , all corresponding to the quoted probability of a chance match ; literature has too few points for a ks test . ] in the above expression , the two poisson rates correspond to the standard core and `` core + 1 '' components , the power - law slope describes the extended - team component , and an additional parameter determines the strength of the exponential truncation . the relative normalization of the three components is given by three weights . this expression features six independent parameters . while other analytical functions can , in principle , also provide a good fit to the overall size distribution , eq . 1 is constructed so that each component corresponds to a respective authorship mode . furthermore , as shown in figure s2 , removing various components of eq . 1 leads to decreased ability to fit the empirical distribution . the best - fitting functional form for the most recent team - size distribution in astronomy is shown in figure 4 . the fitting was performed using $\chi^2$ minimization . the overall fit is very good and the individual components of eq . 1 match the different modes of authorship , as derived by the model ( figure 3 ) . by integrating these components we find that currently 57% of articles belong to the first poisson component and can therefore be attributed to standard core teams . another 12% are due to `` core + 1 '' teams , while the remaining 31% of articles are fit by the truncated power - law component and can therefore be interpreted as originating from extended teams . [ table 1 : best - fit parameters and the contributions of the different authorship modes for each field . ] the principles that underlie the proposed model are universal and not field dependent . only the parameters that specify the rate of growth or the relative strength of the processes will differ from field to field .
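the decomposition just described can be fitted with standard tools . since the exact functional form of the low - end cutoff is not preserved in the text , the sketch below uses one plausible parameterisation , n1 pois ( k ; lam1 ) + n2 pois ( k - 1 ; lam2 ) + n3 k ^ ( - alpha ) ( 1 - exp ( - k / c ) ) , and leaves the overall normalisation free , giving seven parameters rather than the six quoted above ; the ` observed ' counts are synthetic , generated from the same form , purely to exercise the fitting machinery :

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import poisson

def team_size_decomposition(k, n1, lam1, n2, lam2, n3, alpha, c):
    """n1*Pois(k; lam1) + n2*Pois(k-1; lam2) + n3 * k**(-alpha) * (1 - exp(-k/c)).
    One plausible reading of eq. 1; the exact low-end cutoff may differ from the paper's."""
    core     = n1 * poisson.pmf(k, lam1)
    core_p1  = n2 * poisson.pmf(k - 1, lam2)
    extended = n3 * k ** (-alpha) * (1.0 - np.exp(-k / c))
    return core + core_p1 + extended

# synthetic 'observed' counts, generated from the same form, purely to exercise the fit;
# replace with the binned empirical team-size counts of any field
k = np.arange(1, 301)
true_counts = team_size_decomposition(k, 5e4, 3.0, 1e4, 1.5, 2e4, 2.5, 4.0)
observed = np.random.default_rng(3).poisson(true_counts)

p0 = (4e4, 2.5, 8e3, 1.2, 1.5e4, 2.2, 3.0)
popt, _ = curve_fit(team_size_decomposition, k, observed, p0=p0,
                    sigma=np.sqrt(observed + 1.0), bounds=(1e-9, np.inf))
for name, val in zip(("n1", "lam1", "n2", "lam2", "n3", "alpha", "c"), popt):
    print(f"{name:5s} = {val:10.2f}")
```

nothing in this construction is specific to astronomy , since only binned team - size counts enter the fit .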
consequently the analytical decomposition given by eq .1 can be applied to other fields .figure 5 shows the best - fitting functions ( equation 1 ) to the empirical team - size distributions in : mathematics , ecology , social psychology , literature , and for articles from arxiv , all for the current period ( 2006 - 10 ) . core journals used for these fields are listed in si .all of the distributions are well described by our model - based functional decomposition .parameters for the fit and contributions of different authorship modes are given in table 1 .there is much variety . in literaturethe standard core team mode accounts for nearly the entire output ( 99% ) with very small teams .mathematics also features relatively small teams and a steep decline of larger teams .nevertheless , the functional decomposition implies that 9% of articles are produced in the extended team mode ( see also fig .s3 ) , but these teams are still not much larger than core teams ( 2.9 vs. 1.8 members on average ) .mathematics and social psychology feature the largest share of core + 1 " teams .team - size distributions for ecology and social psychology both have more prominent power - law tails than mathematics ( ) but they are not yet as extensive as in astronomy ( ) .both fields feature a hook at low similar to that of astronomy .finally , articles from arxiv ( mostly belonging to the field of physics ) have a power - law slope very similar to that of astronomy .analytical decomposition , introduced in the previous section , allows us to empirically derive the contribution of different modes of authorship over time and to explore the characteristics of teams as they evolve .we now fit equation 1 to article teams in astronomy for all five - year time periods , from 1961 to 2010 .figure 6 ( left panel ) shows the change in the best - fit poisson rates of both types of core teams as well as the evolution of the slope of the power - law component .as previously suggested , the poisson rate of core teams has gradually increased from close to zero in the early 1960s to a little over three recently . on the other hand , the slope of the power - law component has gradually been flattening , from to ; i.e. , the power - law component has been gaining in prominence .figure 6 ( middle panel ) shows the relative contributions of the three modes of authorship in astronomy over the time period of 50 years , obtained by integrating the best - fit functional components .remarkably , the contributions have remained relatively stable , with articles in the power - law component ( i.e. , articles produced by extended teams ) making % .this stability in the fraction of power - law articles is directly connected to the fixed propensity of authors to write articles with extended teams , as indicated in the model simulation . in all timeperiods most papers ( % ) have been published by standard core teams ( the poisson component ) .core teams with an extra member seem to appear in the early 1970s , but their contribution has remained at around 10% . as pointed out earlier , many studies have emphasized the impressive growth of _ mean _ team sizes .we can now explore this trend in the light of the various authorship modes . in figure 6 ( right panel )we show the change in the mean size of all teams , and separately of core teams ( standard and core + 1 " teams together ) and of power - law ( extended ) teams . 
in the early 1960s both the core and the extended teams were relatively small ( 1.1 and 2.5 members , respectively ) .subsequently , the mean size of core teams has increased linearly to 3.2 members . on the other hand ,the mean size of extended teams has grown _ exponentially _ , and most recently averages 11.2 members .the exponential increase in the size of extended teams is affecting the overall mean , despite the fact that the extended teams represent the minority mode of authorship .while the growth of core teams is more modest , it nevertheless indicates that the level of collaboration , as measured by article team size , increases for this traditional mode of producing knowledge as well .whether this increase is a reflection of a real change in the level of collaborative work or simply a change in the threshold for a contributor to be considered a coauthor is beyond the scope of this work . in a similar fashion, we explored the evolution of fit parameters , mode contributions , and team sizes for mathematics and ecology ( figures s3 and s4 ) .mathematics features a small extended team component ( 10% ) that emerged in the mid-1980s .extended teams in mathematics are still only slightly larger in size than the core teams .the share of `` core + 1 '' teams is increasing .the mean size of all core teams has increased , albeit moderately ( from 1.2 to 1.8 members ) . in ecologythe overall increase in mean team size mostly reflects the increase of the characteristic size of standard core teams in the 1980s .the observed increase of the share of extended teams appears to come at the expense of standard core teams .the model proposed in this paper successfully explains the evolution of the sizes of scientific teams as manifested in author lists of research articles .it demonstrates that team formation is a multi - modal process .primary mode leads to relatively small core teams the size of which may represent the typical number of researchers required to produce a research paper . 
the secondary mode results in teams that expand in size , and which are presumably employed to carry out research that requires expertise or resources from outside of the core team .these two modes are responsible for producing the hook and the power law - tail in team size distribution , respectively .this two - mode character may not be exclusive to team sizes .interestingly , a similarly shaped distribution consisting of a hook and a power - law tail is characteristic of another bibliometric distribution , that of the number of citations that an article receives .recently a model was proposed that successfully explained this distribution [ 33 ] by proposing the existence of two modes of citation , direct and indirect , where the latter is subject to cumulative advantage .understanding the distribution of the number of coauthors in a publication is of fundamental importance , as it is one of the most basic distributions that underpin our notions of scientific collaboration and the concept of `` team science '' .the principles of team formation and evolution laid out in this work have the potential to illuminate many questions in the study of scientific collaboration and communication , and may have broader implications for research evaluation .gibbons m , et al .( 1994 ) _ the new production of knowledge : the dynamics of science and research in contemporary societies _ ( sage , london ) .guimer r , uzzi b , spiro j , & amaral la ( 2005 ) team assembly mechanisms determine collaboration network structure and team performance ._ science _ 308:697 - 702 .jones bf , wuchty s , & uzzi b ( 2008 ) multi - university research teams : shifting impact , geography , and stratification in science ._ science _ 322:1259 - 1262 .newman mej ( 2004 ) who is the best connected scientist ?a study of scientific coauthorship networks . _ complex networks _ , eds ben - naim e , frauenfelder h , & toroczkai z ( springer , berlin ) , pp 337370 .wuchty s , jones bf , & uzzi b ( 2007 ) the increasing dominance of teams in production of knowledge . _science _ 316(5827):1036 - 1039 .brner k , et al .( 2010 ) a multi - level systems perspective for the science of team science ._ science translational medicine_ 2(49):49cm24 .price djds ( 1963 ) _ little science , big science _( columbia university press , new york ) .beaver dd ( 1978 ) possible relationships between the history and sociology of science . _ sociological inquiry _ 48(3 - 4):140 - 161 .babchuk n , keith b , & peters g ( 1999 ) collaboration in sociology and other scientific disciplines : a comparative trend analysis of scholarship in the social , physical , and mathematical sciences ._ american sociologist _ 30(3):5 - 21 .glnzel w ( 2002 ) coauthorship patterns and trends in the sciences ( 1980 - 1998 ) : a bibliometric study with implications for database indexing and search strategies ._ library trends _ 50(3):461 - 473 .kretschmer h ( 1997 ) patterns of behaviour in coauthorship networks of invisible colleges ._ scientometrics _40(3):579 - 591 .cronin b ( 2001 ) hyperauthorship : a postmodern perversion of evidence of a structural shift in scholarly communication practices ?journal of the american society for information science and technology 52(7):558 - 569 .cronin b , shaw d , & la barre k ( 2003 ) a cast of thousands : co - authorship and sub - authorship collaboration in the twentieth century as manifested in the scholarly literature of psychology and philosophy. 
_ journal of the american society for information science and technology _ 54(9):855 - 871 .bordons m & gomez i ( 2000 ) collaboration networks in science . _ the web of knowledge : a festschrift in honor of eugene garfield _ , eds cronin b & atkins hb ( information today , medford , nj ) , pp 197 - 213 .shrum w , genuth j , & chompalov i ( 2007 ) _ structures of scientific collaboration _( mit press , cambridge ) .wagner cs ( 2008 ) the new invisible college : science for development ( brookings institution press , washington , dc ) .uzzi b , et al .( 2013 ) atypical combinations and scientific impact ._ science _ 342(6157):468 - 472 hagstrom wo ( 1965 ) _ the scientific community _ ( basic books , new york ) .melin g ( 2000 ) pragmatism and self - organization : research collaboration on the individual level ._ research policy _ 29(1):31 - 40 .price djds & beaver dd ( 1966 ) collaboration in an invisible college .american psychologist 21(11):1011 - 1018 .epstein rj ( 1993 ) six authors in search of a citation : villains or victims of the vancouver convention ? bmj 306:765 - 767 .fernndez ja ( 1998 ) the transition from an individual science to a collective one : the case of astronomy .scientometrics 42(1):61 - 74 .milojevi s ( 2010 ) modes of collaboration in modern science - beyond power laws and preferential attachment ._ journal of the american society for information science and technology _ 61(7):1410 - 1423 .henneken ea , et al .( 2007 ) e - print journals and journal articles in astronomy : a productive co - existence ._ learned publishing _ 20(1):16 - 22 .milojevi s ( 2012 ) how are academic age , productivity and collaboration related to citing behavior of researchers ?_ plos one _ 7(11):e49176 .kingman jfc ( 1993 ) _ poisson processes _( oxford university press , oxford ) .ross sm ( 1995 ) _ stochastic processes _ ( john wiley & sons , new york ) 2nd ed .feller w ( 1968 ) _ an introduction to probability theory and its applications _( john wiley & sons , new york ) 3rd .gerlough dl & andre s ( 1955 ) _ use of poisson distribution in highway traffic .the probability theory applied to distribution of vehicles on two - lane highways _( eno foundation for highway traffic control , saugatuck ) .karlis d & ntzoufras i ( 2003 ) analysis of sports data by using bivariate poisson models .the statistcian 52:381 - 393 .newman mej ( 2005 ) power laws , pareto distributions and zipf s law ._ contemporary physics _ 46(5):323 - 351 .clauset a , shalizi cr , & newman mej ( 2009 ) power - law distributions in empirical data ._ society for industrial and applied mathematics review _ 51:661 - 703 .peterson gj , press s , & dill ka ( 2010 ) nonuniversal power law scaling in the probability distribution of scientific citations ._ pnas _ 107(37):16023 - 16027 .barabsi a - l & albert r ( 1999 ) emergence of scaling in random networks ._ science _ 286:509 - 512 .barabsi a - l , et al .( 2002 ) evolution of the social network of scientific collaborations ._ physica a _ 311:590 - 614 .newman mej ( 2001 ) clustering and preferential attachment in growing networks ._ physical review e _64(2):025102(025104 ) .newman mej ( 2001 ) the structure of scientific collaboration networks ._ pnas _ 98(2):404 - 409 .brner k , maru j , & goldstone r ( 2004 ) the simultaneous evolution of author and paper networks ._ pnas _ 101:5266 - 5273 .abdo aa , et al .( 2010 ) fermi large area telescope first source catalog . 
_the astrophysical journal supplement _ 188(2):405 - 436 .moody j ( 2004 ) the structure of a social science collaboration network : disciplinary cohesion from 1963 to 1999 . _ american sociological review _ 69(2):213 - 238 .
research teams are the fundamental social unit of science , and yet there is currently no model that describes their basic property : size . in most fields teams have grown significantly in recent decades . we show that this is partly due to the change in the character of team - size distribution . we explain these changes with a comprehensive yet straightforward model of how teams of different sizes emerge and grow . this model accurately reproduces the evolution of empirical team - size distribution over the period of 50 years . the modeling reveals that there are two modes of knowledge production . the first and more fundamental mode employs relatively small , _ core _ teams . core teams form by a poisson process and produce a poisson distribution of team sizes in which larger teams are exceedingly rare . the second mode employs _ extended _ teams , which started as core teams , but subsequently accumulated new members proportional to the past productivity of their members . given time , this mode gives rise to a power - law tail of large teams ( 10 - 1000 members ) , which features in many fields today . based on this model we construct an analytical functional form that allows the contribution of different modes of authorship to be determined directly from the data and is applicable to any field . the model also offers a solid foundation for studying other social aspects of science , such as productivity and collaboration . significance : science is an activity with far - reaching implications for modern society . understanding how the social organization of science and its fundamental unit , the research team , forms and evolves is therefore of critical significance . previous studies uncovered important properties of the internal structure of teams , but little attention has been paid to their most basic property : size . this study fills this gap by presenting a model that successfully explains how team sizes in various fields have evolved over the past half century . this model is based on two principles : ( a ) smaller ( core ) teams form according to a poisson process , and ( b ) larger ( extended ) teams begin as core teams but consequently accumulate new members through the process of cumulative advantage based on productivity . ontemporary science has undergone major changes in the last half century at all levels : institutional , intellectual , and social , as well as in its relationship with society at large . science has been changing in response to increasingly complex problems of contemporary society and the inherently challenging nature of unresolved questions , with an expectation to serve as a major driver for economic growth . consequently , the contemporary science community has adopted a new , problem - driven approach to knowledge production that often blurs the lines between pure and applied , and is more permeable around disciplinary borders , leading to cross-/multi-/inter-/trans - disciplinarity [ 1 ] . the major staple of this approach is team effort [ 2 - 5 ] . the increased prominence of scientific teams has recently led to a new research area , _ science of team science _ , which is ... centered on examination of the processes by which scientific teams organize , communicate , and conduct research " [ 6 ] . if we wish not only to understand contemporary science , but also to create and promote viable science policies , we need to uncover principles that lead to the formation and subsequent evolution of scientific research teams . 
studies of collaboration in science , and co - authorship as its most visible form , have a long history [ 7 - 11 ] . the collaborative mode of knowledge production is often perceived as being in contrast to the individualistic mode of the past centuries [ 12 , 13 ] . previous studies have established that the fraction of co - authored papers has been growing with respect to single - authored papers [ 5 ] , that in recent decades teams have been growing in size [ 14 ] , and that inter - institution and international teams are becoming more prevalent [ 15 , 16 ] . in addition , high - impact research is increasingly attributed to large teams [ 5 , 6 ] , as is research that features more novel combination of ideas [ 17 ] . the reasons for an increase in collaborative science have been variously explained as due to the shifts in the types of problems studied [ 1 ] and the related need for access to more complex instruments and broader expertise [ 15 , 18 , 19 ] . a research team is a group of researchers collaborating in order to produce scientific results , which are primarily communicated in the form of research articles . researchers who appear as authors on a research article represent a visible and easily quantifiable manifestation of a collaborative , team - science effort . we refer to such a group of authors as an `` article team . '' in this study we focus on one of the most fundamental aspects of team science : _ article team - size distribution _ and its change / evolution over time . many studies focused only on the mean or the median sizes of teams , implicitly assuming that the character of the distribution of team sizes does not change . relatively few studies examined _ full _ team - size distribution , albeit for rather limited data sets [ 10 , 20 , 21 ] , with some of them noticing the changing character of this distribution [ 10 ] . the goal of the current study is to present a more accurate characterization and go beyond empirical observations to provide a model of scientific research team formation and evolution that leads to the observed team - size distributions . despite a large number of studies of co - authorship and scientific teams , there are few explanatory models . one such exception is guimera et al.s model of the self assembly of teams [ 2 ] , which is based on the role that newcomers and repeated collaborations play in the emergence of large connected communities and the success of team performance . although their model features team size as a parameter , its values were not predicted by the model but were taken as input from the list of actual publications . the objective of the current study is to go beyond the internal composition of teams in order to explain the features of team - size distribution and its change over the past half century . thus , the model we propose in this paper is complementary to guimera et al.s efforts . our model is based on several simple principles that govern team formation and its evolution . the validity of the model is confirmed by constructing simulated team - size distributions that closely match the empirical ones based on 150,000 articles published in the field of astronomy since 1960s . we reveal the existence of two principal modes of knowledge production : one that forms small core teams based on a poisson process , and the other that leads to large , extended teams that grow gradually on the principle of cumulative advantage .
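as a rough illustration of the two modes just described, the following sketch simulates a toy version of the model: core teams are seeded with sizes drawn from a shifted poisson distribution, and a fraction of them become extended teams that keep gaining members with probability proportional to the accumulated productivity of the team. all numerical parameters (poisson mean, fraction of extended teams, per-member paper rate, attachment scale, number of steps) are illustrative placeholders rather than values fitted to the astronomy data; the point is only to show how the poisson mode produces the hook while the cumulative-advantage mode stretches the distribution into a heavy right tail.

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)

N_TEAMS = 20000      # number of teams to seed (illustrative)
POISSON_MEAN = 1.2   # mean of the shifted Poisson for core-team sizes (assumed)
P_EXTENDED = 0.15    # fraction of core teams that become extended teams (assumed)
N_STEPS = 200        # number of growth steps for extended teams (assumed)
ATTACH_RATE = 0.02   # scale of the cumulative-advantage attachment (assumed)

# Mode 1: core teams -- sizes are 1 + Poisson, so the smallest team is a single author.
sizes = 1 + rng.poisson(POISSON_MEAN, size=N_TEAMS)

# Mode 2: a subset of core teams keeps accumulating members in proportion
# to the past productivity of the team (cumulative advantage).
extended = rng.random(N_TEAMS) < P_EXTENDED
papers = np.ones(N_TEAMS)                 # every team starts with one paper
for _ in range(N_STEPS):
    papers += rng.poisson(0.1 * sizes)    # per-member paper production (assumed rate)
    # attachment probability grows with accumulated productivity
    p_gain = np.clip(ATTACH_RATE * papers / papers.mean(), 0.0, 1.0)
    sizes += (extended & (rng.random(N_TEAMS) < p_gain)).astype(int)

dist = Counter(sizes)
for k in sorted(dist)[:15]:
    print(k, dist[k] / N_TEAMS)
```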
there are two methods which allow to determine the equilibrium shape ( ground state ) of the nuclei : the constrained hartree - fock method and the so - called macroscopic - microscopic method . though the latest generation of computers is able to perform very complicated calculations , in terms of running time , it is no so obvious to make systematic calculations for a large number of nuclei .a good alternative is to use the strutinsky method .the latter consists of associating the classical liquid drop model with some shell and pairing corrections built from a realistic microscopic model . based on such a model, we present a numerical method with its associated fortran program .the potential energy of deformation is deduced as a function of the shape of the nucleus .triaxial ( quadrupole ) shapes are considered in this work .the three semi axes of the ellipsoid are in fact connected to the both bohr parameters which are actually used in the calculations .the different steps of calculations are : i)the energy of deformation of the liquid drop model is first calculated [ 4 ] . ii)the schrodinger equation of a microscopic hamiltonian is built and solved to obtain eigenvalues and eigenvectors .in fact we use the fortran program named `` triaxial '' already published in cpc . the microscopic model is explained in details in this paper and also in ref.[3 ] .\iii ) the semiclassical energy is deduced from the same hamiltonian as ( ii ) is calculated on the basis of the wigner - kirkwood expansion [ 5 ] .iv)the shell correction is deduced as the difference between the sum of single - particle ( point ( ii ) ) energies and the same quantity smoothed semiclassically ( point ( iii ) ) .we use the so - called macroscopic - microscopic method to evaluate the potential energy of deformation of the nucleus .this method is based on the liquid drop model plus shell and pairing corrections deduced from a microscopic model .the deformation ( or potential ) energy of the nucleus is defined as the difference between the binding energy of the deformed drop and the non - deformed drop ( nucleus ) . here is a set of parameters defining the deformation .the case represents the spherical shape ( i.e. , the non - deformed nucleus ) .we recall that in the liquid drop model , the minimum is always obtained for the spherical deformation .this involves .of course , the liquid drop or weizsaker formula model contains several terms , but only two depend on the deformation of the nucleus , namely the surface and the coulomb energies . consequently , the other terms do not survive in the difference given by eq .( [ 209 ] ). the liquid drop energy reads \label{210}%\ ] ] with and .the quantities , et are the surface and the coulomb contributions .it is to be noted that and are dimentionless and normalized to the unity so that the deformation energy of the non deformed nucleus is equal to zero ( i.e. , ) .the reduced fissility has been determined empirically : , triaxial ellipsoidal shape with semi axes , the coulomb and surface contributions are deduced analytically with the help of elliptic integrals of the first and second kind et so that if , we will have : \right\ } \label{212}\\ \sin\varphi & = ( 1-c^{2}/a^{2})^{1/2},\ \ k^{\prime2}=a^{2}b^{-2}(b^{2}% -c^{2})/(a^{2}-c^{2})\nonumber\end{aligned}\ ] ] the condition of the volume conservation of the nucleus being ( equal to ) . 
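since for an ellipsoid the surface and coulomb contributions reduce to elliptic integrals, they are easy to cross-check numerically. the sketch below is not the program's 64-point gauss-quadrature routine; it is an independent check (assuming scipy >= 1.8 for the carlson integral elliprf) that evaluates the dimensionless surface and coulomb energies b_s and b_c for semi-axes a >= b >= c, normalized to the sphere of equal volume so that the non-deformed nucleus gives b_s = b_c = 1.

```python
import numpy as np
from scipy.special import ellipkinc, ellipeinc, elliprf

def bs_bc(a, b, c):
    """Dimensionless surface (B_s) and Coulomb (B_c) energies of a uniform
    triaxial ellipsoid with semi-axes a >= b >= c, normalized to the sphere
    of equal volume (B_s = B_c = 1 for a = b = c)."""
    a, b, c = sorted((a, b, c), reverse=True)
    r0 = (a * b * c) ** (1.0 / 3.0)           # radius of the equal-volume sphere
    if abs(a - c) < 1e-12:                    # spherical limit
        return 1.0, 1.0
    phi = np.arccos(c / a)
    m = (a**2 * (b**2 - c**2)) / (b**2 * (a**2 - c**2))   # modulus squared
    surface = (2 * np.pi * c**2
               + 2 * np.pi * a * b / np.sin(phi)
               * (ellipeinc(phi, m) * np.sin(phi)**2
                  + ellipkinc(phi, m) * np.cos(phi)**2))
    b_s = surface / (4 * np.pi * r0**2)
    # Coulomb self-energy of a uniform ellipsoid via the Carlson integral R_F
    b_c = r0 * elliprf(a**2, b**2, c**2)
    return b_s, b_c

# prolate 2:1:1 shape: b_s > 1 and b_c < 1, as expected for any deformation
print(bs_bc(2.0, 1.0, 1.0))
```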
with this conditionit is clear that only two deformation parameters are necessary to specify the shape of the nucleus .the bohr parameters are more commonly employed in this type of calculation . for moderate deformations ,the link between the semi axes and the bohr parameters is given in ref .the elliptic integrals are evaluated with gauss quadrature formulae with 64 points .according to the strutinsky prescription , the shell correction to the liquid drop model is defined as the difference between the sum of the single - particle energies of the occupied states and the `` smoothed part '' of the same quantity : in fact the strutinsky procedure is done in such a way that the smoothed sum does anymore contains shell effects so that the above difference represents only the contribution due to the shell structure . in the strutinsky s method the smoothed sum is derived through the smoothed density of states here , is the level density and and are respectively the so called order and smearing parameter of the strutinsky s procedure .the major defect of this method is that generally the results are usually more or less dependent on these two parameters .a method to diminish this dependence is to use the plateau condition , however in the case of finite wells this is not systematically guaranteed . in this respect , it has been demonstrated in ref . that the level density given by the strutinsky method is nothing but an approximation of the semiclassical level density , i.e. a quantum level density from which the shell effects have been washed out .consequently , even though the strutinsky is simpler in practice , it is more interesting to work straightforwardly with the semiclassical density because the problem of the dependence on the two above parameters is in this way avoided .thus , it is simply recommended to perform the smoothing procedure with the semiclassical level density .the previous formula becomes in this case : where is the semiclassical level density.even though it is not so obvious to derive a semiclassical density of states from a given quantum hamiltonian , there is for our case a rigorous solution ( in the sense where it the same quantum hamiltonian which is `` treated '' semiclassically ) .indeed , for exactly the same hamiltonian employed to determine the eigenstates , the semiclassical level density is deduced following the wigner - kirkwood method .the latter is based on the thomas - fermi approximation plus a few corrections appearing as a power series of in this theory , the particle - number is expressed as a function of the fermi level as follows : \right\ } \label{213}%\end{aligned}\ ] ] where and are the central field ( including the coulomb potential for the protons ) and the spin - orbit field ( see ref .the classical turning points are defined by .the domain of integration is defined by : the semiclassical level density is thus derived as follows: and the semiclassical energy is therefore: as already mentioned , the fermi level is obtained from the following equation: where is the particle - number ( neutrons or protons).the semiclassical energy which is of course free from shell effects can be cast under a power series of : where for example contains the term , etc ... 
here means the term related to the spin - orbit interaction.the expressions of et are very complicated and become simple only for the non - deformed case ( spherical shape ) .the importance of these terms decreases rapidly .the rfrence gives the following percentages with respect to the total semiclassical energy : .in addition , it is to be noted that the `` active part '' due to the deformation is even smaller . for this reasonthe contributions et ( which are not given explicitly here ) are simply approached by their values for the spherical shape: \nonumber\\ & -\kappa_{j}^{3}\frac{1}{r}\left ( \frac{ds}{dr}\right ) ^{3}+\frac { \kappa_{j}^{4}}{2}\left ( \frac{ds}{dr}\right ) ^{4}{\huge \ } } \label{kap}%\end{aligned}\ ] ] the different integrals ( [ 217 ] ) , ( [ 218 ] ) , ( [ 219 ] ) are calculated by the three dimensional gauss - legendre quadrature formulae .the set of lattice points must verify eq .( [ domaine ] ) .in fact , for convenience , in each direction , each interval is divided in elementary intervals in which the quadrature formula is applied with a restricted number of nodes .the number of points is increased in such a way to obtain stable numerical results.the fermi level is not determined straightforwardly from eq .( [ nombre de p ] ) , but solved as follows : from: simple integration by parts gives: with .with the condition of the fermi level , we will have differentiation with respect to gives means that for the constraint , the value of is the one which makes minimum . consequently , for a fixed it is sufficient to look for this minimum with the help of eq .( [ 216 ] ) ( this is what is done in the fortran program ) without employing subsidiary eq.([nombre de p ] ) . knowing , the correctives terms et are deduced in the spherical approximation ( as mentioned before , the dependence on the deformation being very small for these terms).unlike the previous case , the integral ( [ e1 ] ) and ( [ kap ] ) are one - dimensional and are also treated by gauss - legendre formula .it is to be noted that the nodes of the quadrature do not make any problem for the term , i.e. , we have always .expressions and are derived analytically , for the result is: with .$ ] it is worth to note that the spin - orbit coupling constant ( and present work ) is related to of ref . by the following equation: this is due to the fact that in these references , the spin - orbit constant is not defined in the same way . for being defined by eq .( [ 223 ] ) .finally , the shell correction is calculated by replacing the strutinsky s level density by the semiclassical energy : this leads to: the shell corrections are calculated separately for the neutrons and the protons and then added to obtain the total shell correction .we have took into account the pairing correction via the simple bcs approximation .the fermi level and the gap parameter are solved from the well known system of coupled equations: in these equations is the pairing strength and the eigenvalues of the microscopic hamiltonian .the upper index of the sums represents the number of pairs of quasiparticles actually taken in the calculations ( with above and below the fermi level ) . is the number of pairs of quasiparticles , taken in this work as the number of levels between the fermi and the first level of the spectrum.for convenience , we have adopted the prescription of ref . , , which has been widely used for realistic potentials such as the woods - saxon potential ( used here ) or the folded - yukawa potential . 
in this prescription , the force of the pairing is deduced from the empirical value of the gap and from ( see text just above): here denotes the smoothed level density determined from the strutinsky s procedure or by a semiclassical method as in the present work . the nonlinear system is solved by successive iterations until a given precision . at each iteration , we deduce the occupation probabilities from new couple and : conversely , from the `` new '' occupations amplitudes we deduce the `` new '' gap: and so onfor one kind of particles , the pairing correction to the liquid drop model is defined as : were is the usual energie for a correlated system of fermions , and is its smooth part ( i.e. , without shell effects ) assumed already contained in the liquid drop model .finally , with obvious notation , the potential energy of deformation can be summarized as follows : where the shell and pairing corrections are due to separates contributions of neutrons and protons .two codes have been built for calculating the semiclassical energy .the first code is based on the general deformed case which consists of three fold integral ( subroutine scdefor ) and the second can only be used for the spherical shape with a one dimensional integral ( subroutine sclspher1 ) . then, it is possible to make a cross checking in the spherical ( non - deformed ) case . to make further comparisons with other works, we have chosen the same examples as those of the ref .the different contributions to the semiclassical energy eq.([216 ] ) are detailed in the following tables : [ c]l|l|l|l|l|l|l|l|l & + routine & & & & & & & & + scdefor ( present code ) & & & & & & & from sclspher1 & from sclspher1 + sclspher1 ( present code ) & & from scdefor & & & & & & + ref . & & & & & & & & [ c]l|l|l|l|l|l|l|l|l & + routine & & & & & & & & + scdefor ( present code ) & & & & & & & from sclspher1 & from sclspher1 + sclspher1 ( present code ) & & from scdefor & & & & & & + ref . & & & & & & & & the numerical values of the parameters of the potential are displayed in the tables themselves .these calculations are performed for neutrons .the dependence on the proton number appears only through the parameters of the woods - saxon potential .appart from numerical uncertainties due to different numerical approaches , the results are found very close . to our knowledge , semiclassical calculations forthe hamiltonian such as the one considered in this paper do not exist in the literature .for this reason the only way to test the code in the deformed case is to compare the results with those of the strutinsky type .however , it is well known that the latter method often gives results with some uncertainty . consequently , as demonstrated in ref. , in performing these tests , we must keep in mind that the strutinsky calculations are only approximation of the semiclassical limit . in this respect ,the smallness of the relative error gives a good idea on the quality of the results the essential point is to verify that the code runs properly .in fact the code has been checked extensively a longtime ago . as examples , we give two deformed cases in fig .( [ fig1 ] ) .the parameters are given in the readme4.pdf file .for different orders of the curvature correction .the semiclassical energy as well as the sum of single - particle energies are given by a straight line .the calculations are made for n = 54 ( bottom ) and n=80 ( top).,width=604 ] it is very clear that an approximative plateau exists in the region represented by a circle . 
for the order do not obtain any plateau . in the region of the plateauthe relative error is less than per in the both cases .this program has been designed on the compac visual fortran version 6.6.0 ( optimized settings ) .in fact , the structure of the code is somewhat complicated .so , it is no need to give too much details . the essential point is to handle the basic input data and to be able to read the desired data from the output files .the fortran source code denoted by `` enerdef.f '' can be downloaded from : * http://macle.voila.fr/index.php?m=c9ae77e8&a=7d397569&share=lnk80764b6393d92f388 * the microscopic model and the associated fortran code is the same as the one of ref. and .therefore the parameters of the woods saxon potential are read from the file parameters.dat .renamed in the present work as ws_parameters.dat .see pdf file readme1_woods saxon parameters see pdf file readme2_input data .they must be prcised at the beginning of the main program : nmax=10 to 20 is linked to the size of the oscillator basis iuno=1 ( for single deformation ) or 0 ( for lattice mesh points ) if iuno=0 the three following data must be prcised : betamax=0.0 up to about 1.0 is the maximal value of the parameter beta ibetapoints= is the number of points ( minus one ) in the beta direction igamapoints = is the number of points ( minus one ) in the gamma direction the kind of nucleons , the number of protons and the number of neutrons have to be entered manually on the keyboard .if iuno=1 the deformation must also be prcised in the terminal ( do not forget that the deformation parameters are real quantities ) the fissility parameter and other miscellaneous data for the liquid drop model are fixed in the subroutines eld , bbs , bbc in the module `` liquid drop '' . additionally , this code is able to perform strutinsky calculations.two routines are devoted to this task .the first ( nstrutinsky ) solves the fermi level .the second ( strutinsky ) calculates the smooth energy once the fermi level is known from the first routine .the essential points are the following for the rwo routines -(see readme3_strut.pdf file ) : ggam ( input ) = is the smearing parameter ( ) the numbers 0,8,16,18 ( input , up to 18 ) = correspond to the curvature correction of the shell correction= does not exceed 18 ( here four calculations are done ) .rnumb0 , rnumb8 , etc ... (output for nstrutinsky)= number of particle found after solving equation = checking hnew0,hnew8, ....(output for nstrutinsky and input for strutinsky)= fermil level for different orders of the curv . correct .res0,res8, .... = shell correction for different orders of the curv .correct .the code performs shell corrections in loop do for several values of ggam and four values of the order of the curvature correction .in addition , the input and output data for the checkings are detailed in the readme4.pdf . 
file . the files eigenvalues and eigenvectors give the solutions of the schrodinger equation . all results are given separately for neutrons and protons . due to the coulomb interaction , the calculations in the proton case are significantly slower ; however , for a family of isotopes the calculations for the protons need to be performed only once . the output files give in the third column respectively the gap parameter , the energy of the liquid drop model and the deformation energy ( all in mev ) for neutrons ( _ n ) and protons ( _ p ) . the first two columns specify the deformation in the beta - gamma sextant ; gamma is given in degrees . the file 2000n.dat ( neutrons ) or 2000p.dat ( protons ) gives the shell correction ( columns 2 to 5 ) . each column corresponds to a given order of the shell correction , and each row corresponds to a given value of the smearing parameter . the first column gives the smearing parameter ( in hw units ) and the last column gives the semiclassical value of the energy .
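as a minimal illustration of the bcs pairing treatment described earlier, the sketch below solves the coupled gap and particle-number equations by the same kind of successive iteration: for the current gap the fermi level is adjusted (here by bisection) to reproduce the particle number, and a new gap is then deduced from the occupation amplitudes, delta = g * sum_k u_k v_k, until convergence. the equidistant toy spectrum, the pair degeneracy of two per level and the value of g are illustrative assumptions; they stand in for the woods-saxon eigenvalues and for the pairing strength deduced from the smoothed level density in the actual program.

```python
import numpy as np

def bcs_solve(eps, G, n_particles, tol=1e-10, max_iter=500):
    """Solve the BCS gap and particle-number equations for single-particle
    energies `eps` (pair degeneracy 2 per level), pairing strength G and an
    even particle number, by successive iterations."""
    delta, lam = 1.0, float(np.median(eps))   # starting guesses
    for _ in range(max_iter):
        # adjust the Fermi level so that N = sum_k [1 - (e_k - lam)/E_k]
        lo, hi = eps.min() - 10.0, eps.max() + 10.0
        for _ in range(200):                  # bisection on lam
            lam = 0.5 * (lo + hi)
            E = np.sqrt((eps - lam) ** 2 + delta ** 2)
            N = np.sum(1.0 - (eps - lam) / E)
            if N < n_particles:
                lo = lam
            else:
                hi = lam
        # new gap from the occupation amplitudes: Delta = G * sum_k u_k v_k
        E = np.sqrt((eps - lam) ** 2 + delta ** 2)
        new_delta = G * np.sum(delta / (2.0 * E))
        if abs(new_delta - delta) < tol:
            return new_delta, lam
        delta = new_delta
    return delta, lam

# toy spectrum: 20 equidistant doubly degenerate levels, 20 particles (10 pairs)
eps = np.linspace(-5.0, 5.0, 20)
delta, lam = bcs_solve(eps, G=0.3, n_particles=20)
print(f"gap = {delta:.4f}, fermi level = {lam:.4f}")
```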
a numerical method close to the strutinsky procedure , but free of its dependence on the smearing and curvature-order parameters , is proposed to calculate the deformation energy of nuclei . quadrupole ( triaxial ) deformations are considered . theoretical as well as practical aspects of the method are reviewed in this paper . a complete fortran program illustrates the feasibility of the method , so that the code constitutes a useful ready-to-use tool for those who deal with numerical methods in theoretical nuclear physics .
in simulations of traffic systems , cellular automata ( ca ) are a common tool . a cellular automaton consists of a structure of cells , a set of cell states and a rule of time evolution which maps the state of a cell , together with its neighborhood , to the state of this cell at the subsequent time step . in this description states , space and time are discrete , while in real traffic systems at least space and time are inherently continuous . still , there is a rich variety of ca which make it possible to investigate properties and phenomena of traffic systems , sometimes reproduced with surprisingly subtle details . as a rule , traffic ca are classified as single - cell or multi - cell models , depending on whether a vehicle occupies one cell or several cells . on the other hand , traffic networks are also parametrized in different ways , as a node can represent a stop , a crossroad or a route . simplified as it is when compared with real systems , the technology of ca suffers from known computational limitations : a more detailed description is paid for by the smaller size of the simulated system . + more recently , a modification of ca has been applied where a cell represents a state of the whole considered system . this approach can be seen as an example of the concept of kripke structures , where nodes represent states of the whole system and links represent processes leading from one state to another . the obvious drawback of this parametrization is that the number of nodes increases exponentially with the system size . in , this difficulty is evaded by taking into account only the states which appear during the time evolution . the same idea was developed into a technique of time series analysis , known as recurrence networks . briefly , the rule of time evolution is used to generate new states which are attached as nodes to the simulated network ; in this way the signal is characterized in terms of a growing network . for details see and references cited therein . + our aim here is to discuss a new cellular automaton designed for modeling jams in traffic systems . the novelty of this automaton is that cells represent sections of road which can be either jammed or passable . a jam can grow at its end and flush at its front ; the competition between these two processes depends on the local topology of the traffic network . our description , inspired by percolation , is more coarse - grained than in other models . according to the classification of traffic models presented in , our model belongs to the class of macroscopic queueing models . some model elements are reminiscent of the cell transmission model by daganzo : namely , the rates of inflow and outflow in the cell transmission model are similar to the rates of growth and flushing of a traffic jam , defined below . however , as explained in detail in the next section , only jammed and passable cells are differentiated here , and the flows of vehicles are not identified . the price paid is that a range of dynamic phenomena , such as synchronization and density waves , is excluded from the modeling . these phenomena , essential at the scale of a road , can be less important at the scale of a city . consequently , our approach should be suitable for macroscopic modeling of large traffic systems . + our second aim is to construct the kripke structure on the basis of the same cellular automaton . this in turn limits again the size of the system , because of the exponential dependence of the number of states on the system size . we are going to demonstrate that our recent tool , i.e.
the symmetry - induced reduction of the network of states , is useful to partially reduce the computational barrier .+ in the next section we describe the automaton in general terms and we recapitulate the method of reduction of the network of states , mentioned above .as the direct form of the automaton rules depends on the traffic system under considerations , the exact description of the rules is given in section 3 , together with the information on the analyzed systems , both artificial ( the square lattice ) and real ( a small city ) .two last sections are devoted to the numerical results and their discussion .we analyze a simple automaton , where each cell - a road section - can be either in the state or .the state means that a fluent motion via a given road section is possible , while the state means a traffic jam . as each road section is a part of larger system , the cell state depends on the state of roads where one can enter from a given road section .namely , the probability of a traffic jam back propagation as well as the probability of a traffic jam to be flushed depend on the number and state of neighboring road sections . to initialize calculations, one has to assign values of three parameters .two of them , and , describe the whole system , and the last one is related to the boundaries .specifically , is the probability that a traffic jam arises on a given road section due to its presence on the roads directly preceding the currently considered one ( jam behind jam ) , is the probability of a jam flush ( jam behind passable gets passable ) , and is the probability that a traffic jam appears at a road section at the boundary , but out of the system .the latter parameter describes the system interaction with the outer world . the parameters and can be related to the flows used in for the discussion for congestion near on - ramps .+ the probability of a change of the state of a given road section is obtained as the result of the analysis of the state of this section and the state of its neighborhood .we ask for which ranges of the parameters the system is passable in the stationary state. + the detailed realization of the model depends of the topology of the traffic network . in section 3 the exact algorithmis presented together with the presentation of the analyzed systems .the automaton defined above can be used for simulations , and the results of these simulations are reported below .the same automaton is used here also to construct the network of states , as in .this network , equivalent to the kripke structure , is formed by all possible combinations of states of roads which play the role of nodes .next , an appropriate master equation is constructed , which reflects all possibilities of states obtained from the current state .the obtained matrix of transitions between states , i.e. the transfer matrix , is equivalent to the connectivity matrix of our state network . for a given set of model parameters ( , and ) , eigenvector of the matrix associated with the eigenvalue equal 1 serves to calculate probabilities of particular states in the stationary state . having these values, one can evaluate how passable the system is under given conditions from the average number of unjammed ( ) states where : is the size of the system ( the number of considered road sections ) , - probability of -th state and - number of zeros in -th state .we note that in this equation , me make an average over the states of the whole network , and not over the states of local cells . 
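to make the construction concrete, the sketch below builds the network of states for a toy system: a short directed chain of road sections, each feeding into the next, with the out-neighbour of the last section lying outside the system and assumed to be jammed with probability p_b. all 2^n configurations are enumerated, the transfer matrix is assembled from the per-cell transition probabilities, and the stationary probabilities are read off from the eigenvector with eigenvalue 1; the mean fraction of passable sections is then the average quoted above. the chain topology and the exact boundary rule are simplifying assumptions made for illustration, while the rules actually used for the square lattice and for the rabka network are those of section 3.

```python
import itertools
import numpy as np

def stationary_passability(n=4, p=0.5, q=0.5, pb=0.5):
    """Toy 'network of states' for a directed chain of n road sections.
    Cell states: 0 = passable, 1 = jammed.  Cell i feeds into cell i+1;
    the out-neighbour of the last cell is outside the system and is assumed
    jammed with probability pb."""
    states = list(itertools.product((0, 1), repeat=n))
    index = {s: k for k, s in enumerate(states)}

    def cell_probs(state, i):
        """Return (prob of 0, prob of 1) for cell i at the next time step."""
        s = state[i]
        if i < n - 1:                       # interior cell: look at its out-neighbour
            out = state[i + 1]
            if s == 1 and out == 0:
                return q, 1 - q             # jam can be flushed
            if s == 0 and out == 1:
                return 1 - p, p             # jam propagates backwards
            return (1, 0) if s == 0 else (0, 1)
        # boundary cell: external out-neighbour jammed with probability pb
        if s == 0:
            return 1 - p * pb, p * pb
        return q * (1 - pb), 1 - q * (1 - pb)

    W = np.zeros((len(states), len(states)))   # W[s_new, s_old] = transition prob.
    for s_old in states:
        per_cell = [cell_probs(s_old, i) for i in range(n)]
        for s_new in states:
            prob = np.prod([per_cell[i][s_new[i]] for i in range(n)])
            W[index[s_new], index[s_old]] = prob

    vals, vecs = np.linalg.eig(W)
    k = np.argmin(np.abs(vals - 1.0))          # stationary eigenvector
    pi = np.real(vecs[:, k])
    pi /= pi.sum()
    zeros = np.array([s.count(0) for s in states]) / n
    return float(pi @ zeros)                   # mean fraction of passable sections

for q in (0.2, 0.5, 0.8):
    print(q, round(stationary_passability(p=0.5, q=q, pb=0.5), 3))
```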
+as the obtained number of states is large even for moderate systems , we reduce the system size by the application of the procedure proposed in our previous papers .the method of the reduction of the system size is based on the symmetry observed in the system , which manifests in the fact that properties of some elements of the system are exactly the same .the starting point is the network of states , and the core of the method is to divide nodes into classes ; the stationary probability of each node in the same class is the same . to begin , for each node the list of its neighboring nodes is specified , with the consideration of weights of particular connections .provisionally , the class of each state is determined by its degree ; for each state its symbol is replaced by the symbol of class , which discriminate nodes which have different number of neighbors . at the next stagewe examine the lists of neighboring nodes in terms of class symbols assigned to a particular neighbors and weights of appropriate ties . if for nodes assigned with the same symbol the symbols assigned to its neighbors are different or their are the same but their weights are different , an additional class distinction is introduced . at the end of the algorithm , the classes , i.e. subsets of nodesare indicated , which have identical lists of neighbors with respect of the number of neighbors , the symbol assigned to each of them , and weights of particular connections .as a reference system we analyze a system of directed roads placed on edges of a regular square lattice . the lattice is finite , with open boundary conditions .for such a system each road has two in - neighbors and two out - neighbors . as it will be explained in detail, the probability of the state change depends only on the state of out - neighbors . at the boundaries , a road has one or none out - neighbors ( none at the corner ) . at each road, the traffic takes place only in one direction , say upwards and right .this setup is borrowed from the biham - middleton - levine automaton .the algorithm of a change of the state of the road for the square lattice is presented in fig.[alg1 ] . in the above algorithm a state of a given road , is the number of roads given road is connected to , the quantity refers to the state of the road which is a neighbor of the currently considered one ( if the road has two neighbors their states are marked respectively as and ) .the probability of a change of the state depends , as it was mentioned above , on the state of the considered road and the state of the neighborhood which determine how probable the change is .namely , the transition from jammed to passable ( 1 to 0 ) mean that the jam at a given road section is flushed by free motion of vehicles at the jam front .this is possible only if the out - neighboring section is empty .further , the transition from passable to jammed ( 0 to 1 ) is possible only if the out - neighboring section is jammed . in both cases ,the transition depends on the state of the out - neighbors ; the state of in - neighbors is not relevant .+ in fig.[art ] a piece of the system is presented .each road can be either passable ( in our notation a road is in a state ) which is marked as a dashed - line or a traffic jam can be formed ( a road is in a state ) which is marked as a solid - line .the direction of traffic was ticked on roads in the state , but the rule is the same for all roads ; we keep down - up and left - right direction . 
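a direct monte carlo version of the automaton is equally short. the sketch below uses a simplified geometry (an l x l grid of road sections whose out-neighbours are the sections to the right and above, mimicking the right and up traffic directions) and an assumed form of the update rule in which a passable section jams with probability p times the fraction of jammed out-neighbours, and a jammed section is flushed with probability q times the fraction of passable out-neighbours; missing out-neighbours at the open boundary are treated as external sections that are jammed with probability p_b. these choices stand in for the algorithm of fig.[alg1], which is not reproduced here, so the sketch illustrates the mechanism rather than the exact published rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(L=50, p=0.5, q=0.6, pb=0.5, steps=2000, burn_in=1000):
    """Coarse-grained traffic CA on an L x L grid of road sections.
    State 0 = passable, 1 = jammed; out-neighbours are the sections to the
    right and above (open boundaries)."""
    s = rng.integers(0, 2, size=(L, L))        # random initial state
    frac_passable = []
    for t in range(steps):
        # states of the two out-neighbours; outside the lattice a section is
        # jammed with probability pb (interaction with the outer world)
        right = np.empty_like(s)
        right[:, :-1] = s[:, 1:]
        right[:, -1] = rng.random(L) < pb
        up = np.empty_like(s)
        up[:-1, :] = s[1:, :]
        up[-1, :] = rng.random(L) < pb
        jammed_frac = (right + up) / 2.0       # fraction of jammed out-neighbours

        r = rng.random((L, L))
        jam = (s == 0) & (r < p * jammed_frac)           # jam propagates backwards
        flush = (s == 1) & (r < q * (1.0 - jammed_frac))  # jam flushed at its front
        s = np.where(jam, 1, np.where(flush, 0, s))
        if t >= burn_in:
            frac_passable.append(np.mean(s == 0))
    return float(np.mean(frac_passable))

for q in (0.3, 0.5, 0.7, 0.9):
    print(q, round(simulate(q=q), 3))
```

after the burn-in, the time-averaged fraction of passable sections plays the role of the order parameter plotted in the figures.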
herewe present one of the possible changes of the state of the system .( fluent flow ) , and a solid - line refers to the state ( traffic jam ) .arrows indicate the direction of traffic . ] ( fluent flow ) , and a solid - line refers to the state ( traffic jam ) .arrows indicate the direction of traffic . ]the method was also applied to a real road network of a small polish town rabka .the structure of roads which matter in traffic was selected - dead ends are removed ( fig.[map ] ) .each road , if necessary , was divided into sections of approximately equal length .we get sections . herethe number of neighbors for different roads varies as it results from the town topology .each section is a two - way street . in this casethe algorithm has a form presented in fig.[alg2 ] .approximately equal sections .the black dots mark the exit / entrance roads . ] in the algorithm presented in fig.[alg2 ] summing goes through the states of the roads outgoing from a given one , and is a number of outgoing roads .all presented results are a time average in the steady state over realizations for the square lattice of the size . to check that the results do not depend on the initial conditions , we use three options for the initial state : states of all roads set to , states of all roads set to , and a state of each roadis set randomly to be or .+ the results depend on the values of the model parameters , and . as a result for the whole system the percentage of roads in the state ( $ ] )is calculated .the higher the number of zeros the more passable the system is .the results for two different values of the parameter are presented in fig .[ sq ] for and for .the increase of the percentage of zeros with the parameter , visible in fig.[sq ] , can be interpreted as an indication of a phase transition . to verify its sharpness dependence on the system size, we calculated the curve vs for a selected case : , =0.5 and various system sizes .the results are shown in fig.[pf ] .indeed , the sharpness increases with , and the curve for is close to the step function .+ for square lattices of different sizes for and ( average in the steady state over realisations ) . ]we also check how removal of some number of roads changes the obtained results , to check if the symmetry of the square lattice is necessary . in fig.[fig4 ]the results obtained when randomly chosen road sections is removed .the removal is done separately for each realization .if , in a consequence of the removal , some part of lattice is isolated , it is removed as well .the results , shown in fig.[fig4 ] , indicate that the phase transition , found for the square lattice , is observed also in a randomized lattice .the maximal number of zeros in this case is less than 90 percent , because the plot is normalized to the whole square lattice , including the removed links .the result obtained for the simulations of the traffic network in rabka , formed by road sections , are presented in figs.[fig5 ] .the main difference between this network , as constructed from the map in fig.[map ] , and the square lattice ( with removals or not ) is that the rabka network is less connected .there , often the road sections form long chains . comparing figs.[sq ] and [ fig4 ]we see that the consequence of this difference is that jammed state is less likely .the origin of this result is that jams are created behind the jammed road sections ; the more in - neighbors of these sections , the more jams appear . 
besides of that , the obtained plot are similar to those for the square lattice. + exact calculations of can be performed merely for systems much smaller than hundreds of road sections . for the sake of comparison of the methods, we simplified the map of rabka leaving only nine two - way roads .this leads to the system of states .the results of the simulations for this system are shown in [ fig6 ] . as the system is much simplified , the results for the full ( fig.[fig5 ] ) and reduced ( fig.[fig6 ] ) traffic networks differ substantially for . surprisingly , those for are quite similar .the same simplified network is solved exactly by the solution of master equations for the stationary state for different sets of the model parameters of the related kripke structure . in this exact method, the parameters enter to the weights of links between states , or - equivalently - to the rates of the processes which drive the system from one state to another .for each case we can then calculate , in accordance with eq.[e1 ] , the mean stationary probability that the road sections are passable .obtained results are presented in figs.[fig7 ] .the same figure shows the solution for the classes of states , as described in . in this case, the class identification procedure allows for the reduction of the system size about twice , to classes .the goal of this paper is to describe large traffic networks with a cellular automaton , where states of road sections are reduced to two : passable and jammed .the coarse - grained character of the new automaton is close in spirit to the percolation effect .the results of our simulations allow to identify a phase transition between two macroscopic phases , again passable and jammed .additionally , the calculations are repeated for a much smaller traffic network , constructed by a strong simplification of a map of a small polish city .these calculations are performed to compare the results with the exact solution of the stationary state , obtained by two equivalent methods .this comparison suggests , that the accordance of simulation with the exact solution is better for more jammed systems , i.e. more close to the phase transition .+ + the drawback of our automaton is that all information about specific local conditions of traffic jams can not be reproduced .the model captures merely the jam spreading .the parameters and depend on the external state , and serve as an input for the calculations .the parameter should be calibrated separately for each traffic system .after this calibration , the main result of the model - the probability of the jammed phase - should be reproducible and useful to control the traffic phases .the advantage of the model is its simplicity , which allows to to simulate larger traffic systems in real time . + * acknowledgement : * the research is partially supported within the fp7 project socionical , no .231288 and by the polish ministry of science and higher education and its grants for scientific research and by pl - grid infrastructure .00 d. chowdhury , l. santen and a. schadschneider , _ statistical physics of vehicular traffic and some related systems _reports 329 ( 2000 ) 199 .d. helbing , _ traffic and related self - driven many - particle systems _ , rev .73 ( 2001 ) 1067 .t. nagatani , _ the physics of traffic jams _ , rep .65 ( 2002 ) 1331 .s. maerivoet and b. 
de moor , _ cellular automata models of road traffic _ , phys .reports 419 ( 2005 ) 1 .zhu zhen - tao , zhou jing , li ping and chen xing - guang , _ an evolutionary model of urban bus transport network based on b - space _ , chinese physics b 17 ( 2008 ) 2874 .gao zi - you and li ke - ping , _ evolution of traffic flow with scale - free topology _ , chinese phys .( 2005 ) 2711 .jian - feng zheng and zi - you gao , _ a weighted network evolution with traffic flow _ , physica a 387 ( 2008 ) 6177 .s. kripke , _ semantical considerations on modal logic _ , acta philosophica fennica , 16 ( 1963 ) 83 .r. v. donner , yong zou , j. f. donges , n. marvan and j. kurths , _ recurrence networks - a novel paradigm for nonlinear time series analysis _, new j. phys . 12 ( 2010 ) 033025 . t. van woensel and n. vandaele , _ modelling traffic flows with queueing models : a review _ , asia - pacific journal of operational research 24 ( 2007 ) 435 . c. f. daganzo , _ the cell transmission model : a dynamic representation of highway traffic consistent with the hydrodynamic theory _ , transportation research b 28 ( 1994 ) 269 .c. f. daganzo , _ the cell transmission model .part ii : network traffic _ , transportation research b 29 ( 1995 ) 79 .b. kerner and h. rehborn , _ experimental properties of complexity in traffic flow _ , phys .e 53 ( 1996 ) r4275 .d. helbing , _ empirical traffic data and their implications for traffic modeling _ , phys .e 55 ( 1997 ) r25 .b. s. kerner , _ experimental features of self - organization in traffic flow _ , phys .( 1998 ) 3797 .l. neubert , l. santen , a. schadschneider and m. schreckenberg , _ single - vehicle data of highway traffic : a statistical analysis _ , phys .e 60 ( 1999 ) 6480 .m. j. krawczyk , _ topology of space of periodic ground states in antiferromagnetic ising and potts models in selected spatial structures _ ,lett . a 374 ( 2010 ) 2510 .m. j. krawczyk , _ symmetry induced compression of discrete phase space _ , physica a 390 ( 2011 ) 2181 .m. treiber , a. hennecke and d. helbing , _ congested traffic states in empirical observations and microscopic simulations _ ,e 62 ( 2000 ) 1805 .d. helbing and k. nagel , _ the physics of traffic and regional development _ , contemporary physics 45 ( 2004 ) 405 .n. g. van kampen , _stochastic processes in physics and chemistry _ ,3-rd edition , elsevier , amsterdam 2007 .o. biham , a. a. middleton and d. levine , _ self - organization and a dynamical transition in traffic - flow models _ , phys .a 46 ( 1992 ) r6124 .
a coarse - grained cellular automaton is proposed to simulate traffic systems . there , cells represent road sections , and a cell can be in one of two states : jammed or passable . numerical calculations are performed for a piece of a square lattice with open boundary conditions , for the same piece with some cells removed , and for a map of a small city . the results indicate the presence of a phase transition in the parameter space between two macroscopic phases : passable and jammed . the results are supplemented by exact calculations of the stationary probabilities of states for the related kripke structure constructed for the traffic system . there , the symmetry - based reduction of the state space makes it possible to partially overcome the computational limitations of the numerical method .
fermilab uses slip - stacking in the recycler ( and previously main injector ) to double the proton bunch intensity it can deliver to experiments .the us particle physics community has come to a consensus that the fermilab should upgrade its proton beam intensity in a cost - effective manner . to this end , the fermilab proton improvement plan - ii calls for an improvement in beam power from 700 kw ( with slip - stacking ) to 1.2 mw with an eye towards multi - mw improvements .the increase in proton intensity requires a commensurate decrease in the slip - stacking loss - rate to limit activation in the tunnel . a substantial improvementto either the booster beam quality or stable slip - stacking bucket area would accomplish this objective .this note describes the implications of both approaches . slip - stacking allows two beams to accumulate in the same cyclic accelerator by using two rf cavities at near but distinct frequencies .slip - stacking has been used at fermilab since 2004 to nearly double the protons per ramp cycle . slip - stacking in the main injector originally suffered significant beam - loading effects that were addressed through rf feedback and feedforward .previously slip - stacking took place in the main injector , but now takes place in the recycler to avoid loading time .the complete ramp cycle with slip - stacking in the recycler and a 15-hz booster cycle - rate is shown in fig .[ ss ] .the slipping rate of the buckets must be properly synchronized to the injection rate of new batches .the difference between the two rf frequencies must be equal to the product of the harmonic number of the booster rf and the cycle rate of the fermilab booster .so for a booster with a 15-hz cycle - rate we have and for a possible 20-hz cycle - rate the difference in the frequency of the two rf cavities is related to the difference in momentum of the two beams by : where is the harmonic number of the recycler and is the phase - slip factor of the recycler ( see table [ param ] ) .consequently , the momentum difference between the two beams is for the 15-hz booster and for the 20-hz booster .the gains in slip - stacking efficiency under the 20-hz booster scenario also require an increase in rf cavity voltage ( see fig .[ ar](b ) ) .the ideal rf cavity voltage increases from 64 kv to 114 kv , which is a factor of .the duty factor may also decrease ( by no more than ) in the case of a 20-hz booster ; the power dissipation would increase by at least .the maximum recycler rf voltage is 150 kv and the maximum recycle rf power is 150 kw , according to .the possibility of the recycler rf cavities overheating would have to be investigated .k. seiya , t. berenc , b. chase , w. chou , j. dey , p. joireman , i. kourbanis , j. reid , and d. wildman , in proceedings of hb2006 , tsukuba , japan , 2006 , edited by y. h. chin , h. yoshikawa , and m. ikegami .
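the two relations quoted above (frequency separation equal to the booster harmonic times the cycle rate, and momentum separation obtained from the frequency separation through the recycler harmonic number and phase-slip factor) can be evaluated in a few lines. the parameter values below (booster harmonic 84, recycler harmonic 588, revolution frequency of roughly 89.8 khz and slip factor of roughly -8.8e-3) are nominal numbers inserted for illustration and are not taken from table [param], so the printed momentum separations are indicative only.

```python
# nominal parameter values (not taken from the paper's table, illustration only)
H_BOOSTER = 84          # booster rf harmonic number (assumed)
H_RECYCLER = 588        # recycler rf harmonic number (assumed)
F_REV = 89.8e3          # recycler revolution frequency in Hz (assumed)
ETA = -8.8e-3           # recycler phase-slip factor at injection energy (assumed)

def slip_stacking_offsets(cycle_rate_hz):
    """Frequency separation of the two rf systems and the corresponding
    fractional momentum separation of the two slip-stacking beams."""
    delta_f = H_BOOSTER * cycle_rate_hz                  # injection synchronism
    delta_p_over_p = delta_f / (H_RECYCLER * F_REV * abs(ETA))
    return delta_f, delta_p_over_p

for rate in (15.0, 20.0):
    df, dpp = slip_stacking_offsets(rate)
    print(f"{rate:.0f} hz booster: delta f = {df:.0f} hz, "
          f"delta p/p = {dpp * 100:.2f} %")
```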
we examine the potential impacts on slip - stacking of a change of the booster cycle - rate from 15 hz to 20 hz . we find that changing the booster cycle - rate to 20 hz would greatly increase the stable slip - stacking bucket area , while potentially requiring greater usage of the recycler momentum aperture and additional power dissipation in the rf cavities . in particular , the losses from rf interference can be reduced by a factor of 4 - 10 ( depending on the booster beam longitudinal parameters ) . we discuss the aspect - ratio and beam - emittance requirements for efficient slip - stacking in both cycle - rate cases . using a different injection scheme can eliminate the need for greater momentum aperture in the recycler .
we next outline the tree and network construction methods . consider the pedagogic simulation in figure [ pedagogicsample ] , where we have a region of interest ( such as an influenza segment , for example ) that has undergone mutational and selective processes encapsulated by the evolution tree in figure [ pedagogicsample]a .this tree contains five mutations that lie on various branches of the tree .these combine into the six clones that are the leaves of the tree . for example , the second leaf is labeled , indicating a clone with haplotype consisting of mutations but not . note that the path from the root of the tree to this leaf crosses the two branches corresponding to mutations and .the number at the leaf indicates that this clone makes up of the viral population , and is termed the _prevalence_. note that these prevalences form a _ conserved flow network _ through the tree .for example , the prevalence of mutation is , which accounts for the two haplotypes and , with prevalences and , respectively . in general , we find that the prevalence flowing into a node of the tree must equal the sum of the exiting prevalences .this represents conservation of the viral sub - populations .the total prevalence across all the leaves is therefore .in reality we are not privy to this information and perform a sequencing experiment to investigate the structure .this takes the form of molecular sequencing , where we detect the five mutations , which each have a different _ depth _ of sequencing , as portrayed in figure [ pedagogicsample]b .we will later see with real influenza data in that percentage depth can be reasonably interpreted as prevalence .furthermore , we can look at the mutations arising on individual sequencing reads and group them into clusters . 
for our example ,this groups the mutations into two clusters , giving the haplotype tables in figure [ pedagogicsample]c .we first construct an evolution tree for each of these tables .our approach is based upon two sources of information ; one utilizes mutation sequencing depth with a pigeon hole principle , the other utilizes linkage information from haplotype tables .now we have mutation present in of viruses and mutation present in of viruses .if these mutations are not both simultaneously present in a sub population of viruses , then the mutations are exclusive .this implies the two populations of size and do not overlap .however , the total population of viruses containing either of these viruses would then be greater than .this is not possible , and the only explanation is that a subpopulation of viruses contain both mutations ; the pigeon hole principle .the only tree - like evolutionary structure possible is that is a descendant of , as indicated by the rooted , directed tree in figure [ pedagogicsample]di .note that we have not utilized any haplotype information to infer this , just the mutation prevalence of the two mutations and a pigeon hole principle .mutation has a prevalence that is too low to repeat a prevalence based argument .however , we have a second source of information ; the paired read data that can link together mutations into the haplotypes in figure [ pedagogicsample]ci .this table is based on three mutations , which group into possible haplotypes .however , a tree structure with three mutations will only contain four leaves and we see that four of the halpotypes ( emboldened ) have notably larger counts of reads and are likely to be genuine .the four haplotypes with a notably lower read counts are likely to be the result of sequencing error at the mutant base positions , or template switching from a cycle of rtpcr , and are ignored .the presence of genuine haplotypes and , lead us to conclude that is descendant from but not , resulting in the tree of figure [ pedagogicsample]di . 
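both inference steps just described fit into a few lines of code: the pigeon-hole argument says that two mutations whose prevalences sum to more than 100% must share carriers, so under a tree model the rarer one is placed below the more common one, while the paired-read haplotype table settles cases that prevalence alone cannot. the sketch below is an illustration only; the haplotype encoding and the read-count threshold used to discard artefactual haplotypes are assumptions, not the values used in the study.

```python
def must_be_nested(p_a, p_b):
    """Pigeon-hole principle: if the prevalences exceed 100% in total, the two
    mutations must share carriers, so under a tree model the rarer mutation
    descends from the more common one."""
    return p_a + p_b > 1.0

def relation_from_haplotypes(counts, a, b, min_reads=10):
    """Infer the relation of mutations a and b from paired-read haplotype
    counts, e.g. counts = {('A', 'B'): 120, ('A',): 80, (): 300}.
    Haplotypes with fewer than `min_reads` reads are treated as artefacts."""
    seen = {hap for hap, n in counts.items() if n >= min_reads}
    has_both = any(a in h and b in h for h in seen)
    a_only = any(a in h and b not in h for h in seen)
    b_only = any(b in h and a not in h for h in seen)
    if has_both and a_only and not b_only:
        return f"{b} descends from {a}"
    if has_both and b_only and not a_only:
        return f"{a} descends from {b}"
    if has_both and not a_only and not b_only:
        return f"{a} and {b} always co-occur"
    if not has_both:
        return f"{a} and {b} lie on distinct branches"
    return "ambiguous (possible recombination)"

print(must_be_nested(0.9, 0.6))       # True -> the two mutations must be nested
print(relation_from_haplotypes(
    {("A", "B"): 120, ("A",): 80, (): 300, ("B",): 3}, "A", "B"))
```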
from the mutation prevalences , and of , and , we can also use the conserved network flow to measure the haplotypes prevalence .for example , the leaf descending from , but not or ( clone of figure [ pedagogicsample]di ) must represent the remaining of the population .this provides us with two sources of information ( sequencing depth and linkage information ) we can utilize to reconstruct the clone haplotypes , prevalence , and evolution .however , not all mutations can be connected by sequencing reads .they may be either separated by a distance beyond the library insert size , or may lie on distinct ( unlinked ) segments .our approach is then as follows .we first construct a tree for each cluster of linked mutations .this will be a subtree of the full evolutionary structure .we then construct a supertree from this set of subtrees .now both of the trees in figure [ pedagogicsample]d must be subtrees of a full evolutionary tree for the collective mutation set so we need to construct a supertree of these two trees .we can do this recursively as follows .we take the mutations and place them in decreasing order according to their prevalence , as given in figure [ pedagogicsample]e .we then attach branches corresponding to these mutations to the supertree in turn , checking firstly network flow conservation , and secondly that the haplotype information in the subtrees is preserved .the steps for this example can be seen in figure [ pedagogicsample]f .we start with a single incoming edge with prevalence ; the entire viral population .we next place an edge corresponding to , the mutation with maximum prevalence of .the next mutation in the tree either descends from the root or this new node .any descendants of must have a prevalence less than this .any other branches must descend from the top node but can only account for up to of the remaining population .these two values are the _ capacities _ indicated in square brackets .the next value we place is with prevalence .this is beyond the capacity of the top node , so is descendant to , accounting for of the , leaving .we thus have a three node tree with capacities , and . the third ordered mutation has prevalence , which can only be placed at the bottom node with maximum capacity . outnext mutation has a prevalence that is less than any of the four capacities available , and no useful information on the supertree structure is obtained .this branch is the first to use haplotype information .we know from the first subtree that the corresponding branch is a descendant of and not .the only node we can use ( in red ) has capacity and we place the branch . for the final branch corresponding to mutation ,the prevalence is less than four available capacities .the second subtree tells us that is not a descendant of .this only rules out one of the four choices , and any of the three ( red ) nodes will result in a tree consistent with the data .the top node selected results in a tree equivalent to that in figure [ pedagogicsample]a . 
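the capacity bookkeeping behind the supertree construction can be condensed into a short greedy routine: mutations are taken in order of decreasing prevalence and attached to any node that still has enough unassigned prevalence ("capacity") and whose root-to-node path satisfies the haplotype constraints extracted from the subtrees. the sketch below is an illustration of this idea rather than the authors' implementation, and the example prevalences and constraints are invented; as in the worked example above, several admissible placements may remain, in which case additional time points are needed to single out one tree.

```python
def build_supertree(prevalence, allowed=lambda mut, path: True):
    """Greedy placement of mutations (ordered by decreasing prevalence) on a
    rooted tree with conserved prevalence flow.

    prevalence : dict  mutation -> prevalence (fraction of the population)
    allowed    : predicate(mutation, path_from_root) encoding the haplotype
                 constraints taken from the linked-read subtrees.
    Returns a dict node -> (parent, residual capacity); node None is the root.
    """
    tree = {None: (None, 1.0)}                 # the root carries the whole population
    order = sorted(prevalence, key=prevalence.get, reverse=True)
    for mut in order:
        p = prevalence[mut]
        placed = False
        for node in list(tree):
            parent, capacity = tree[node]
            # path of mutations from this node back up to the root
            path, cur = [], node
            while cur is not None:
                path.append(cur)
                cur = tree[cur][0]
            if capacity >= p and allowed(mut, path):
                tree[node] = (parent, capacity - p)   # conserved flow at the node
                tree[mut] = (node, p)                 # new branch keeps its own flow
                placed = True
                break
        if not placed:
            raise ValueError(f"no admissible placement for {mut}")
    return tree

# invented example: haplotype tables say E descends from A but not B,
# and D is not a descendant of C
prev = {"A": 0.75, "B": 0.40, "C": 0.30, "D": 0.10, "E": 0.25}

def constraints(mut, path):
    if mut == "E":
        return "A" in path and "B" not in path
    if mut == "D":
        return "C" not in path
    return True

tree = build_supertree(prev, constraints)
for node, (parent, cap) in tree.items():
    print(node, "<-", parent, " residual:", round(cap, 2))
```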
to see this tree equivalence , the internal nodes in the last tree of figure [ pedagogicsample]f have additional leaves attached ( dotted lines ) to obtain figure [ pedagogicsample]a .we thus find that a single dataset can result in several trees that are consistent with the data .however , having a time series of samples means a tree consistent with all days of data is required , which will substantially reduces the solution space .note that the prevalences of the clones at the leaves of the tree results from this recursive process .we thus find that supertree construction is relatively straightforward with the aid of prevalence .however , trees do not always fit the data .this can be due to recombination occurring within segments , or re - assortment occurring between segments . in the next section we construct recombination networks to cater for this , although we will see that they can not be constructed as efficiently as trees . and undergo within segment recombination into , with a crossover site between and .( ciii ) clones and undergo between segment recombination ( reassortment ) into .( d ) recombination networks arising from the haplotype tables in ( b ) .( e ) the prevalence of the four mutations across five days .( f ) phylogenetic network associated with ( a ) . (g ) point and range prevalence estimates .( hi ) a network consistent with the two networks of ( d ) .( hii ) incompatible prevalence conditions associated with ( hi).,width=461 ] in figure [ networkfigure]a we see another simulated evolution based upon the two segments in figure [ networkfigure]ci that accumulate four mutations , , , and .first we have mutations and .then we have the first of two recombination events , , where we have recombination within the first segment as described in figure [ networkfigure]cii .we then have mutations and , followed by the second recombination event in figure [ networkfigure]ciii , a re - assortment between the two segments .this results in the seven clones given at the leaves of figure [ networkfigure]a .the prevalences of the four mutations across five time points are given in figure [ networkfigure]e .note that we no longer have the conservation of prevalence observed in trees .for example , mutations and are on distinct branches extending from the root , yet their total prevalence is in excess of ( on day 5 for example ) .this is due to recombination resulting in the presence of a clone containing both mutations .the use of the prevalence to reconstruct this structure from observable data thus requires more care .now we see in figure [ networkfigure]ci that the four mutations cluster into two groups of mutations each bridged by a set of paired reads , resulting in two tables of read counts in figure [ networkfigure]bi , ii .we would like to reconstruct the evolution in figure [ networkfigure]a from these data .firstly , we need to decide which of the haplotypes in figure [ networkfigure]b are real . 
the haplotypes with consistently low entries are classified as artifact ( in opaque ) .we next use a standard approach ( such as a canonical splits network ) to construct sub - networks from the real haplotypes in each of these tables , such as those given in figure [ networkfigure]dii .we then build super - networks ensuring that all sub - networks are contained as a sub - graph .there does not appear to be an efficient way of doing this ( such as ordering by prevalence which works so well with trees ) so a brute force approach is taken , where we construct all possible networks that contain four mutations and the haplotypes observed in figure [ networkfigure]bi , ii .this results in many candidate super - networks .we now find that the prevalence information can be used to reject many cases .for example , the super - network in figure [ networkfigure]hi contains both sub - networks of figure [ networkfigure]di , ii as subgraphs .note that the root node , representing the entire of the population , has daughter branches containing mutations , and .however , from the prevalences on day 5 we see that has prevalence and and ( which recombine ) have a collective prevalence ( from clones , , , and in figure [ networkfigure]bi ) of .this is in excess of the possible available and the network is rejected .application of filtering by prevalence ( see methods section for full details ) rejects all networks with one recombination event , so we try all networks with two recombination events , resulting in just seven possible recombination networks .these all contain the same set of clones , all of which correspond to the single phylogenetic network in figure [ networkfigure]f .although only one recombination event is present across the subnetworks , all super - networks with one recombination event were filtered out and two recombination events were required .lastly we require estimates of the prevalences of each of the seven clones .we would like to match these to the prevalences in the tables of figure [ networkfigure]b .this is a linear programming problem , the full details of which are given in the methods section .the resulting estimates are given in figure [ networkfigure]g where we see that some clones have point estimates , whereas others have ranges .for example , we see that clone has a point estimate for each day .this is because it is the only clone of the super - network that corresponds to clone of figure [ networkfigure]bi and their prevalences can be matched .conversely , we see ranges for the prevalences of clones and . this is because both clones correspond to clone of figure [ networkfigure]bi and prevalence estimates for each clone can not be uniquely specified. full details of this approach can be found in the methods section . in the next sectionwe describe the results obtained when applying these methods to a time series of influenza samples .the data used in this study were generated from a chain of horse infections with influenza a h3n8 virus ( murcia , unpublished ) .an inoculum was used to infect two horses labeled 2761 and 6652 .these two animals then infected horses labeled 6292 and 9476 . this latter pair then infected 1420 and 6273 .the chain continued and daily samples were collected from the horses resulting in 50 samples in total .for the present study we used 16 samples ; the inoculum and hosts 2761 ( days 2 to 6 ) , 6652 ( days 2 , 3 and 5 ) , 6292 ( days 3 to 6 ) and 1420 ( days 3 , 5 and 6 ) . 
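The "filtering by prevalence" used above to discard candidate super-networks amounts to a linear feasibility check, spelled out in the methods section. A minimal sketch with SciPy's `linprog` is given below; the tolerance and the exact form of the constraints are our own illustrative choices rather than the paper's, and the clone and mutation labels are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def network_feasible(clone_haplotypes, mut_prevalence, tol=0.05):
    """Prevalence feasibility of one candidate network on one day.

    clone_haplotypes : list of sets, the mutations carried by each leaf clone
    mut_prevalence   : dict mutation -> prevalence observed that day (depth proportion)
    Looks for clone prevalences x >= 0 with sum(x) = 1 such that, for every mutation,
    the total prevalence of clones carrying it stays within `tol` of the observation.
    """
    n = len(clone_haplotypes)
    A_ub, b_ub = [], []
    for m, p in mut_prevalence.items():
        row = np.array([1.0 if m in hap else 0.0 for hap in clone_haplotypes])
        A_ub.append(row)
        b_ub.append(p + tol)        # sum over clones containing m  <=  p + tol
        A_ub.append(-row)
        b_ub.append(-(p - tol))     # sum over clones containing m  >=  p - tol
    res = linprog(c=np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n)), b_eq=np.array([1.0]),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.status == 0          # feasible point found; otherwise the network is rejected
```

A candidate network is rejected as soon as this check fails on any sampled day.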
influenza a virus is a member of the family orthomyxoviridae which contains eight segmented , negative - stranded genomic rnas commonly referred to as segments and numbered by their lengths from the longest 2341 to the shortest 890 bps , as summarized in figure [ flusegments]a .daily samples were collected from each host and paired end sequenced was performed with hi - seq and mi - seq machines .the samples sequences were aligned with bowtie2 with default parameters .we obtain for each sample a sam file containing mapping information of all the different reads in the sample .any mapped read whose average phred - quality per base was less than were discarded . in order to identify mutations from real datawe need a reference sequence to compare the read sequences to .consistent differences between the two can then be classified as a mutation .we constructed a majority consensus sequence from the inoculum sample .this consensus sequence was then used as a starting reference for the chain of infected animals .an amplification procedure was used to produce viral dna .this involves pcr , which will result in different levels of amplification and mutations .this in turn is likely to introduce significant differences between the sequencing depth and prevalence . to combat this ,all identical paired end reads ( with equal beginning and endpoints ) were grouped , classified as a single pcr product , deriving from a single molecule and only counted once .the depth of sequencing with these adjusted counts then provided an improved measurement of the prevalence of viral subpopulations .we compared an identical sample that was sequenced separately , the results of which can be seen across two samples in figures[flusegments]b , c i , ii .both the position and prevalence of mutations were reproducible to good accuracy suggesting proportional sequencing depth is a good surrogate for prevalence .we then applied the methods to sets of high prevalence mutations in each of the eight segments individually , and also to a set of three mutations from distinct segments .the main observations are below . for segments 1 , 3 , 5 , 6 , 7 and 8 we obtained tree like evolutions for the segments . in all cases the mutations involved lay on distinct branches and were indicative of mutations arising in independent clones .segment 6 can be seen in figure [ miseqhiseq]a , where we see five mutations on six branches .we also see from the stacked bar chart in figure [ miseqhiseq]b that many of the mutations arose during different periods in the infection chain .however , the evolution structure of mutations within segments did not always appear to be tree like , with segment 2 containing one putative recombination event and seven mutations , and segment 4 containing three putative recombination events and six mutations .this latter case arose because we found three pairs of mutations in putative recombination . 
using nucleotide positions as labels , these were ( 431 , 674 ) , ( 431 , 709 ) and ( 709 , 1401 ) .that is , we found significant counts of all four combinations of mutations , labeled , , and , lying on paired reads .examples of typical counts for three ( out of sixteen ) samples are given in the top table in figure [ templateswitching]b ( see supplementary information for full details ) .if the evolution is tree - like , reads from one of the types , or should only arise as an artifact .note that we have high read counts of all four categories , which is indicative of recombination .however , various studies have shown that there is very little evidence of genuine recombination that occurs within segments of influenza , , , and these kind of observations can arise from template switching across different copies of segments during the rtpcr sequencing cycle .we developed an analytic approach to consider this possibility in more detail .now if the true underlying structure is tree - like , it suggests that one of , or arises purely from template switching ( the wild type is assumed to always occur ) .this gives us the three models ( labeled i - iii in figure [ templateswitching]a , b ) to consider .we let , and be the population proportions of the three real genotypes .we let be the probability that a cycle of rtpcr causes template switching .we then treat template switching as a continuous time three state random process .this allows us to derive probabilities that genotypes , , and arise on paired end reads , as given in figure [ templateswitching]a ( see methods for derivations ) .the counts of the four classes of read then follow a corresponding multinomial distribution .maximum likelihood was used to estimate parameters , obtain log - likelihood scores , and a chi - squared measure of fit was obtained for each of the three models . for the pair ( 431 , 674 ) we found that the best log - likelihood , on all sixteen sampled days , was model ( figure [ templateswitching]i ) , where reads of type are artifacts arising from template switching alone .the parameters obtained provided an almost perfect fit ; the expected counts were almost equal to the observed counts and the goodness of fit significance values were close to .the other two models had substantially lower likelihoods and significantly bad fits .this tells us that if the underlying structure is a tree , it involves the three genotypes , and and mutations 431 and 674 lie on distinct branches . for the pair ( 431 , 709 ) we found that the best log - likelihood , over the sixteen sampled days , was model 3 ( figure [ templateswitching]iii ) , where reads of type are artifacts .the parameters obtained provided an almost perfect fit on most days with goodness of fit significance values close to .a couple of days had relatively poor fits , but were not significant when multiple testing across all sixteen days was considered .the model with as an artifact had very similar likelihoods , but the data exhibited significantly poor fits on multiple days .the model with as an artifact performed very badly .this tells us that if the underlying structure is a tree , it involves the three genotypes , and and mutation 431 is a descendant of 709 . 
for the pair ( 709 , 1401 ) we found that the best log - likelihood , on all sixteen sampled days , was the model 2 ( figure [ templateswitching]ii ) , where reads of type are artifacts .the parameters obtained provided an almost perfect fit on all days with goodness of fit significance values close to . the other two models performed very badly .this tells us that if the underlying structure is a tree , it involves the three genotypes , and and mutation 1401 is a descendant of 709 .thus the three cases where data are indicative of recombination can be explained purely by template switching during rtpcr .this is reinforced somewhat by the fact that the same model emerged across all sampled days for each mutation pair .however , this does not definitively rule out recombination , which could also exhibit these consistent patterns across sampled days , and so care is needed when interpreting data .furthermore , the rates of template switching required to explain the data without recombination were not always consistent . for example , in the sample from host 2761 day 4 , the estimated template switching between mutations ( 431,674 ) was ( c.i - ) . between mutations ( 431,709 )it was ( c.i - ) , giving reasonable agreement . between mutations ( 709 , 1401 ) it was somewhat higher , at ( c.i - ) , although this may be expected due to the greater distance between the mutations .however , in sample 1420 day 3 , the template switching rate for the pair ( 431,674 ) , at ( c.i - ) , was notably higher than both the mutation pair ( 431,709 ) , at ( c.i - ) , and mutation pair ( 709,1401 ) , at ( c.i - ) .although differences between samples ( and so library preparations ) may be expected , differences such as this in the same library are harder to explain without implicating genuine recombination .we thus have two explanations of the data ; genuine recombination or template switching artifacts .we consider both cases and then draw comparisons .firstly we consider segment 4 assuming recombination has taken place .the results can be seen in figure [ miseqhiseq2 ] .the prevalences of six mutations of interest are given in figure [ miseqhiseq2]a .reasonable linkage information was available across the segment , including the two haplotype tables in figure [ miseqhiseq2]c .the first is linkage information between mutations 709 and 1401 , where all four combinations of mutation occur to reasonable depth , implying recombination between the mutations .the second is between mutations 1387 and 1401 , where we see only three haplotypes occur to significant depth , suggesting a tree like evolutionary structure between the two mutations .the full set of tables is in supplementary information .although the sequencing depth in the first table is lower , due to the rarer occurrence of sufficiently large insert sizes , the information gleaned is just as crucial .the most parsimonious evolution found involved three recombination events , resulting in the single cloneset contained in the phylogenetic network given in figure [ miseqhiseq2]d . 
there were possible recombination networks that fit this phylogenetic network , one example of which is given in figure [ miseqhiseq2]e .the relatively complete linkage information resulted in point estimates for the clone prevalences ( rather than ranges ) , as given in figure [ miseqhiseq2]b .if we now assume that the recombination like events are template switching during rtpcr , then from above , we observed that mutations 431 and 674 are on distinct branches , mutation 431 is a descendant of 709 , and 1401 is also a descendant of 709 .this resolves all three reticulation events in the network of figure [ miseqhiseq2]e and we end up with the tree given in figure [ miseqhiseq2]f .however , this structure still has two minor conflicts .firstly , the tree like structure suggests that mutation 431 should have a lower prevalence than 1401 , and on most days it does .however , the sample from host 2761 day 4 has prevalences and for mutations 431 and 1401 , respectively .similarly , the samples from host 1420 day 3 are and , respectively .secondly , the four mutations 674 , 709 , 1013 and 1401 all descend from the root on separate branches and should have a total prevalence that is less than and on fifteen of sixteen samples this is true. however , on sample 6292 day 3 the prevalences are , , and , which combine to .although the conflicts are relatively small , these differences are larger than would be expected from poisson sampling of such deep data . however , this is the most plausible tree structure we found .re - assortments occur when progeny segments from distinct viral parents are partnered into the same viral particle , resulting in a recombinant evolutionary network .now re - assortment is a form of recombination .this is usually possible to detect in diploid species such as human because linkage information is available across a region of interest , such as a chromosome , and recombination can be inferred .furthermore human samples have distinct sequencing samples for each member of the species . inferring re - assortment across distinct viral samplesis more difficult because firstly we do not have linkage information across distinct segments , and secondly , we have mixed populations within each sample . however , we show that re - assortment can still be detected within mixed population viral samples with the aid of information provided by prevalence. consider figure [ reassortment ] .we have three mutations in segments , and , along with their mutation nucleotide positions 2037 , 201 and 709 , respectively .we refer to the mutations as , and accordingly .we see in figure [ reassortment]d that and have prevalences that alternate in magnitude across the 16 days sampled .if we assume a tree like structure , these two mutations can not lie on a single branch , because one prevalence would have to be consistently lower than the other ; they must therefore lie on distinct branches .now mutation can ; i ) be on a distinct third branch , ii ) be a descendant of , iii ) be a descendant of , iv ) be an ancestor of , v ) be an ancestor of , or vi ) be an ancestor of both .we can rule out all of these choices as follows .firstly we note that has a prevalence that is consistently larger than that of or , so can not be a descendant of either mutation , ruling out ii ) and iii ) .we see from sample 6292 day 3 that and have a total prevalence greater than , meaning can not be an ancestor of both mutations , ruling out vi ) . 
in this sample ,the total prevalence of all three mutations is in excess of , ruling out i ) . nowif and lie on distinct branches , we see from 2761 day 4 that their combined prevalence is in excess of , ruling out v ) .finally , if and lie on distinct branches , we see from 6292 day 4 that their combined prevalence is in excess of , ruling out iv ) .no tree structure is possible and we conclude the presence of re - assortment as the most likely explanation . in fact ,application of the full method reveals that two re - assortment events are required to explain the data .this results in 51 possible recombination networks , one such example is given in figure [ reassortment]b .these correspond to the four clonesets given in figure [ reassortment]c , arising from two possible phylogentic networks .the four clonesets have prevalences that could not be uniquely resolved ; their possible ranges are shown in figure [ reassortment]d .although we can not uniquely identify the network or the prevalences , all solutions involved two re - assortments , one involving mutations and , the other involving and .this observation was only possible because of inferences made with the prevalence .we have introduced a methodology to analyze time series viral sequencing data .this has three aims ; to identify the presence of clones in mixed viral populations , to quantify the relative population sizes of the clones , and to describe underlying evolutionary structures , including reticulated evolution .we have demonstrated the applicability of these methods with paired end sequencing from a chain of infections of the h3n8 influenza virus . although we could identify underlying evolutionary structures , some properties of the viruses and the resulting data make interpretation difficult . in particular ,template switching during the rtpcr cycle of sequencing an rna virus is known to occur , and can result in paired reads that imply the presence of recombination .although any underlying tree like evolutions can still be detected , these artifacts confound the signal of any genuine recombination that may be taking place , making it harder to identify .the prevalence of mutations , measured as sequencing depth proportion , offers an alternative source of information that can help resolve these conflicts in theory , although more work is needed to evaluate how robust this metric is in practice .for example , although tree like evolutions were identified in six of the segments , in the two remaining segments the approach found reticulated networks , with three distinct reticulated nodes in the hemagglutinin segments network .although each of these nodes were consistent with template switching artifacts , the resultant tree structure could not quite be fitted to the mutation prevalences .although this conflict implies the original network is correct and recombination has taken place , within segment recombination in influenza is rare , , and other explanations may be required . in particular , we note in figure [ flusegments]b that there are slight differences between the prevalences obtained from independent mi - seq and hi - seq runs . 
although some of this will be due to poisson variation of depth , there could be some biases in pcr over certain mutations , for example .the application of prevalence thus needs to be used with caution , and further studies are needed to fine tune this type of approach .when the approach was applied to mutations in distinct segments , two re - assortment events were inferred .the differences in mutation prevalences were more marked in this case suggesting the inference is more robust and re - assortment more likely to have taken place .this is also biologically more plausible , with events such as this accounting for the emergence of new strains .we note that although re - assortment may have genuinely taken place , only one of the original clones ( containing just mutation 709 on segment 4 ) survived the infection chain and a longitudinal study would not have picked up such transient clonal activity .these methods utilized paired end sequencing data and showed that even when paired reads do not extend the full length of segments , or bridge distinct segments , we can still make useful inferences on the underlying evolutionary structures .the two main sources of information are the linkage offered by two or more mutations lying on the same paired reads , and the prevalence information .it is by utilizing the variability of the prevalence in a time series dataset that we can narrow down the predictions to a useful degree ; application of this method to individuals days will likely result in too many predictions to be useful .furthermore , this has greatest application to mutations of higher prevalence ; this places more restrictions on possible evolutions consistent with the data .subsequently , this kind of variability is most likely to manifest itself under conditions of differing selectional forces .a stable population is less likely to contain mutations moving to fixation under selective forces .lower prevalence mutations will result , meaning less predictive power .simulations also suggest that although clone - sets may be uniquely identified , prediction of the underlying reticulation network is difficult , with many networks explaining the same dataset . as we lower the minimum prevalence of analyzed mutations , their number will increase. the number of networks will likely explode and raise significant challenges .furthermore , single strand rna viruses such as influenza mutate quickly , suggesting a preponderance of low prevalence mutations likely exist .this is further exacerbated by the fact that sequencing uses rt - pcr , introducing point mutations and template switching artifacts that create noise in the data .these processes are likely responsible for the grass - like distribution of low prevalence mutations visible in figure [ flusegments]b , c .thus as we consider lower prevalence mutations we are likely to get a rapidly growing evolution structure of increasingly complex topology .the methods we have introduced , however , can provide useful information at the upper - portions of these structures .the software viralnet is available at www.uea.ac.uk/computing/software .the raw data is available from the ncbi ( project accession number srp044631 ) .more detailed outputs from the algorithm are available in supplementary information .we now consider tree and network construction methods , a template switching model , and validation of the methods . 
the construction of phylogenetic trees is a well established area .trees are frequently constructed from tables of haplotypes of different species .however , we have two properties that change the situation .firstly , if we have a set of mutations linked by reads , we can have up to distinct haplotypes .however , a consistent set of splits from such a table should only have up to distinct haplotypes , in a split - compatible configuration . to construct a phylogenetic tree we thus need to classify the genotypes as real or artifact .secondly , we have prevalence information , in the form of a conserved network flow through the tree .this can help us to both decide which haplotypes to believe and to construct a corresponding tree . to describe the algorithm we first introduce some notation .now , the evolutionary structure is represented by two types of rooted directed tree ; one where each edge represents a mutation , such as in figure [ pedagogicsample]f , and one where all leaves represent clones in the population , such as in figures [ pedagogicsample]a , d .the first is a subtree of the latter .the latter has a conserved flow network .these will be termed the _ compact prevalence tree _ and _ complete prevalence tree _ respectively .now to each edge in the compact prevalence tree , we assign _ prevalence _ .this represents the proportion of population containing the mutation represented by the edge .the single directed edge pointing toward a vertex ( away from the root ) represents a viral population of prevalence , all containing the mutation corresponding to edge , along with its predecessor mutations .the set of daughter edges leading away from node represent populations containing subsequent mutations , each with prevalence .the remaining population from contains just the original mutation set , having a prevalence described by the _ capacity _ .the conservation of prevalence satisfied by each vertex is then represented by the condition : this describes the mutation based trees such as that in figure [ pedagogicsample]f . to obtain a complete tree containing all the clones , we need to extend an edge from each internal node to represent the associated clone ( these are the dashed lines in figure [ pedagogicsample]a ) .the prevalence of the additional edges are equal to the capacities of the parental nodes .we saw in figure [ pedagogicsample]b that mutations can be clustered together , and evolution trees constructed for each cluster .we refer to these as _ subtrees_. 
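The conservation condition referred to as equation (1) below is not displayed in the text above; with our own symbols (an assumption, not the paper's notation), it presumably takes a form like the following.

```latex
% Our own notation (the displayed condition is not reproduced in the text above):
%   \rho_e    prevalence of the edge e entering vertex v
%   \rho_{e'} prevalences of the daughter edges leaving v
%   c_v       capacity of v
\begin{equation}
  \rho_e \;=\; c_v \;+\; \sum_{e' \in \mathrm{daughters}(v)} \rho_{e'},
  \qquad c_v \ge 0 .
\end{equation}
```

In words: whatever prevalence enters a vertex is either passed on to a daughter branch or retained as that vertex's capacity, and this must hold at every time point.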
We then look for a tree that contains all such subtrees as a subset of edges; we refer to such a tree as a _supertree_.

*Step 1* _Subtree construction._ Now, for mutations we have possible haplotypes, with corresponding counts, and a tree with haplotypes to fit. This implies that of those counts are artifacts. For example, in figure [cayley]d we see the simulated counts for haplotypes on mutations. Now Cayley's formula states that there are different labeled trees that can be constructed on n vertices. These are easily constructed with the aid of Prüfer sequences, which are integer sequences of length n-2 with entries drawn from the vertex labels. Consider the vertex list V (figure [cayley]bi), where the root node is treated as the minimum value, along with the Prüfer sequence P (figure [cayley]bii). The smallest element of V not in P is identified, and the corresponding node is joined to the node for the first element of P, as exemplified in figure [cayley]biii. These two elements are removed from V and P and the process is repeated until we are left with two elements in V; in our example, the corresponding two nodes are then joined by an edge. The edges are then directed away from the root, resulting in the prevalence clonal tree in figure [cayley]cii. The corresponding complete prevalence tree is in figure [cayley]ciii.

[Figure caption fragment, kept for reference: (a) labeled trees on , and vertices; (bi) vertex list V for the example marked (*); (bii) Prüfer sequence P; (biii) tree construction; (ci) the graph directed away from the root; (cii) the equivalent compact clonal tree; (ciii) the corresponding complete clonal tree; (d) alignment of trees to haplotype tables.]

Once we have all the possible subtrees constructed, we use maximum likelihood to select the most plausible tree. Consider, for example, the penultimate column of figure [cayley]d, which corresponds to the four haplotypes for the tree in figure [cayley]a-c. Note that the haplotype with a count of 550 is an artifact for this tree. If each mutation artifact arises with some probability, then an artifact read of the doubly mutated type contains two mutant bases and occurs with the square of that probability. We can then construct log-likelihoods (summed across time points) for the artifact counts arising from clones that do not belong to the putative tree being tested. We then assume Poisson distributed counts and construct the following likelihood function for a given putative clonal tree: here indexes the time point, and are the total depth and the error rate, respectively, and the values represent the number of mutants in haplotype . The tree with maximum likelihood is selected.

*Step 2* _Supertree construction._ We next build supertrees of the evolution from the subtrees. As in the example of figure [pedagogicsample]f, this involves ranking the subtree branches by prevalence and adding mutations sequentially, checking pairwise ancestry relationships between mutations (taken from the subtrees) along with the capacity of prevalence available at each node (by checking equation (1) for every time point).

We would like to use data such as figure [networkfigure]b to reconstruct the evolutionary structure. The splits method is used to construct phylogenetic networks such as figure [networkfigure]g. There are many recombination networks that correspond to any given phylogenetic network.
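For completeness, the Prüfer-sequence bookkeeping used in step 1 can be sketched in a few lines. This is the standard textbook decoding plus a brute-force enumeration, written in our own style; it may differ in minor details from the construction described above.

```python
from itertools import product

def prufer_to_edges(seq, vertices):
    """Decode a Prüfer sequence into the edge list of a labeled tree.
    `vertices` is sorted and satisfies len(vertices) == len(seq) + 2."""
    seq, remaining, edges = list(seq), list(vertices), []
    while seq:
        leaf = min(v for v in remaining if v not in seq)
        edges.append((seq[0], leaf))
        remaining.remove(leaf)
        seq.pop(0)
    edges.append((remaining[0], remaining[1]))
    return edges

def all_rooted_trees(labels, root):
    """All labeled trees on `labels` (Cayley: n^(n-2) of them), each returned as a
    parent map with edges directed away from `root`."""
    n = len(labels)
    for seq in product(labels, repeat=n - 2):
        edges = prufer_to_edges(seq, sorted(labels))
        adj = {v: [] for v in labels}
        for a, b in edges:
            adj[a].append(b)
            adj[b].append(a)
        parent, stack = {root: None}, [root]
        while stack:                       # orient edges away from the root
            v = stack.pop()
            for w in adj[v]:
                if w not in parent:
                    parent[w] = v
                    stack.append(w)
        yield parent
```

The Poisson likelihood above is then evaluated for each enumerated tree, and the best-scoring tree is retained as the subtree for that cluster.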
a standard method to identify recombination networks is to look for an optimal path of trees across the recombination sites .these methods generally have the full mutation profile of a set of species of interest to compare .our problem is exacerbated by missing data and the full haplotypes of distinct species ( clones in our case ) are not available .however , we have prevalence information which can help identify structures consistent with the data . we construct recombination networks in five steps ; haplotype classification , super - network construction , super - network filtering , prevalence maximum likelihood estimation , and prevalence range estimation .we describe these steps in detail .* step 1 * _ haplotype classification_. in order to distinguish the real and artifact haplotypes in any table such as figure [ miseqhiseq2]c we do the following . for any count associated with haplotype and time point , we calculate the probability it arises as an error using the poisson distribution .this gives a term of the form , where is the total read depth from that time point , is the number of mutations distinct from the wild type , and is a user selected error rate per base per read .we then take the combined log - likelihood across all time points .all log - likelihoods below a threshold are classified as artifact .the values and were used in implementation .* step 2 * _ super - network construction_. this is a brute force approach where we construct all possible recombination networks using reticulated nodes in turn .any networks that do not contain the real haplotypes of the individual haplotype tables of step 1 are rejected .the value of selected is the smallest value with any valid networks after steps 3 and 4 are implemented .* step 3 * _ filtering_. we need to utilize the prevalence to identify and remove invalid networks .each leaf of the recombination graph represents a single clone of the mixed population .we let denote the prevalences of that clone .we then have the conditions : now we have the estimated prevalence of each mutation from the proportional sequencing depth at the mutations position .if we let denote the set of clones from the super - network that contain mutation , we have conditions of the form : we solve the linear programming problem defined by equations [ simplex ] and [ filter ] with the simplex method .if no solution exists on any day the network is rejected .if a solution is found , it is the input to the ( more precise ) calculation in step 4 .this step generally reduces the number of networks to manageable levels .* step 4 * _ prevalence point estimation _ in reality is an estimate and we have more information than just the depth of mutations . for each cluster of mutationswe have the count for each real genotype ( artifacts are ignored ) and time points in the corresponding table .conditioning on the total count of real genotypes results in a binomial log - likelihood of the following form : here the sum is over the set of clones that contain haplotype .we then sum this over all tables and time points and maximize for estimates of the clone prevalences .we use gradient descent to maximize , projecting each step onto the simplex in equation [ simplex ] .projecting onto the simplex is relatively straightforward , the updated prevalence vector * * just becomes , where negative components are set to zero .* step 5 * _ range estimation_. 
step 4 does not always result in a unique estimate , because there may be ranges of values on the simplex of equation [ simplex ] that yield identical terms .then if are the estimates from the gradient descent , we use the simplex method to maximize subject to equation [ simplex ] and conditions of the form : valid clonesets with the maximum likelihood are then selected .this can be applied to any putative network to either conclude that the network is not feasible , or produce a range of possible prevalences associated with the network . we model template switching during rtpcr as follows .suppose we have two mutations of interest and four possible genotypes , labeled , , and .we have corresponding read depth counts , , and .now , if tree like evolution exists , one of , or is an artifact arising from template switching during rtpcr ( the wild type c00 is assumed to always occur ) .we demonstrate the case where is an artifact ( model 2 in figure [ templateswitching]aii ) .the derivation for the other two models is similar .then we assume that the real clones , and have prevalences of , and , respectively , so that .we model rtpcr as a time continuous three state process , where template switching occurs at a rate , jumping to any of the three templates , or with probabilities , and , respectively .we also refer to the states as , and .we let , and be the probabilities of occupying a copy of the corresponding templates at position . then conditioning over a time interval results in the following expression ( see for typical derivations ) : probabilities for all types , , and can now be defined , which we demonstrate for .derivations for the other terms can be obtained in a similar manner . from figure[ templateswitching]aii we see that to obtain a read of the form , we can start in either state or and end in either state or .this gives us four terms to add : the counts , , and then follow a multinomial distribution , from which log - likelihoods can be derived . a chi - squared goodness of fit can then be obtained .we note that in many cases , solutions for the four terms , , and in terms of , , and can be obtained , resulting in a perfect fit .when this is not possible , one or more of the three models can be rejected if the fit is sufficiently bad . notethat none of these three models necessarily explain the data . in the last column of figure [ templateswitching]d , for example, we have four artificial counts , , and corresponding to genotypes , , and .all three models are a bad fit suggesting recombination is present .however , this relies on small counts for , which were not observed in the real data that was examined .note that template switching has no effect on the prevalence of individual mutations .for example , considering figure [ miseqhiseq]ciii , if we add and , we get , which is precisely the prevalence of mutation .the validation of the method is based upon simulated data .this will give some idea of the reconstruction capabilities of the methods and allow benchmarking with other existing approaches . in particular , we compared our tree construction algorithm to the benchmark software shorah using the same simulation approaches as zagordi et al and astrovskaya et al . . 
to measure the performance of the mixed population estimation, we computed the _ precision _ , the _ recall _ , and the _ accuracy _ of prevalence estimation for the methods of interest .the recall ( or sensitivity ) gives the ratio of correctly reconstructed haplotypes to the total number of true haplotypes , where we have true positives ( ) , false negatives ( ) and false positives ( ) .the precision gives their ratio to the total number of generated haplotypes , .the accuracy measures the ability of the method to recover the true mixture of haplotypes , and was defined as measuring the mean absolute error of the prevalence estimate . where a range estimate is obtained for the prevalence, we calculate the shortest distance from the true value to the range .comparison with shorah was done on simulated deep sequencing data from a 1.5 kb - long region of hiv-1 .simulated reads have been generated by metasim , a meta - genomic simulator which generates collections of reads reproducing the error model of some given technologies such as sanger and 454 roche .it takes as input a set of genome sequences and an abundancy profile and generates a collection of reads sampling the inputted genomic population . for up to haplotypes and reticulations we performed 100 runs as follows .we randomly constructed a network by attaching each new branch to a random selected node .reticulations were also randomized .the prevalences of the resulting clones ( at the leaves ) were randomly selected from a dirichlet distribution .this is repeated for 10 time points of data .we used metasim to generate a collection of 5,000 reads having an average length of 500bp and replicating the error process of roche 454 sequencing .the methods were then applied to the resulting data . shorah output can display mismatches or gaps in the outputted genomes , with increasing frequency at the segment edges .we applied a modification on shorah output by trimming the edge and we then corrected one or two mismatches or gaps on all the genomes before addressing the comparison .figures [ validation]a - c provide the comparison for recall , precision and error indicators .we found slight improvements for recall , especially for tree like evolution . the precision and error also had improved results .we acknowledge that the simulations were based upon evolutionary structures that the models are designed to fit so such improvement might be expected .furthermore , shorah likely have better performance on low prevalence clones .however , these simulations demonstrate that reasonable results can be obtained from the techniques we have introduced .
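For reference, the three indicators can be computed as in the following sketch. The haplotype matching and the averaging convention for the error are our own illustrative choices, not necessarily those used for the figures.

```python
def evaluate_run(true_prev, est_prev):
    """Precision, recall and mean absolute prevalence error for one simulated run.

    true_prev : dict haplotype -> true prevalence
    est_prev  : dict haplotype -> point estimate (float) or (low, high) range
    Haplotypes are matched by their mutation sets; the averaging convention is ours.
    """
    tp = [h for h in est_prev if h in true_prev]
    fp = [h for h in est_prev if h not in true_prev]
    fn = [h for h in true_prev if h not in est_prev]
    recall = len(tp) / (len(tp) + len(fn)) if (tp or fn) else 1.0
    precision = len(tp) / (len(tp) + len(fp)) if (tp or fp) else 1.0

    def gap(truth, estimate):
        if isinstance(estimate, tuple):          # range: shortest distance to the interval
            low, high = estimate
            return max(low - truth, truth - high, 0.0)
        return abs(truth - estimate)             # point estimate

    errors = [gap(true_prev[h], est_prev[h]) for h in tp]
    mae = sum(errors) / len(errors) if errors else 0.0
    return precision, recall, mae
```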
rna virus populations will undergo processes of mutation and selection resulting in a mixed population of viral particles . high throughput sequencing of a viral population subsequently contains a mixed signal of the underlying clones . we would like to identify the underlying evolutionary structures . we utilize two sources of information to attempt this ; within segment linkage information , and mutation prevalence . we demonstrate that clone haplotypes , their prevalence , and maximum parsimony reticulate evolutionary structures can be identified , although the solutions may not be unique , even for complete sets of information . this is applied to a chain of influenza infection , where we infer evolutionary structures , including reassortment , and demonstrate some of the difficulties of interpretation that arise from deep sequencing due to artifacts such as template switching during pcr amplification . [ [ section ] ] rna viruses have evolutionary dynamics characterized by high turnover rates , large population sizes and very high mutation rates , , resulting in a genetically diverse mixed viral population , . subpopulations in these mixtures containing specific sets of mutations are referred to as clones and their corresponding mutation sets as haplotypes . unveiling the diversity , evolution and clonal composition of a viral population will be key to understanding factors such as infectiousness , virulence and drug resistance . high throughput sequencing technologies have resulted in the generation of rapid , cost - effective , large sequencing datasets . when applied to viruses , the set of reads obtained from a deep sequencing experiment represents a sample of the viral population which can be used to infer the underlying structure of that population at an unprecedented level of detail . in this study , we aim to identify the haplotypes of clones and quantify their prevalence within a viral population . the method also constructs evolutionary histories of the process consistent with the data . reconstructing the structure of a mixed viral population from sequencing data is a challenging problem . only a few works address the issue of viral mixed population haplotype reconstruction which infer both the genomes of sub - populations and their prevalence . reviews of the methods and approaches dealing with these issues can be found in , , , and . these works frequently make use of read graphs , which consist of a graph representation of pairs of mutations linked into haplotypes . haplotypes then correspond to paths through these graphs , although not every path will necessarily be realized as a genuine haplotype , which can lead to over - calling haplotypes . different formalizations of this problem has led to different optimization problems in the literature , including minimum - cost flows , minimum sets of paths , , probabilistic and statistical methods , network flow problems , minimum path cover problems , maximizing bandwidth , graph coloring problems or k - mean clustering approaches . after the haplotypes are constructed , in many cases an expectation - maximisation ( em ) algorithm is used to estimate their prevalence in the sampled population . some other works , use a probabilistic approach instead of a graph - based method . in this work we take an integrative approach to address both the genetic diversity and the evolutionary trajectory of the viral population . 
The method presented here is not read-graph based; it constructs evolutionary trees and recombination networks weighted by clone prevalences. This reduces the size of the solution set of haplotypes. The method does not rely exclusively on reads physically linking mutations, so it is applicable to longer segments. It will also be shown to have particular utility with time series data, and it is highlighted on a chain of infections by influenza (H3N8). The question of influenza genome diversity has been addressed in the literature largely between strains or samples from different hosts, considering a single dominant genome for each host. Within-host evolution is a source of genetic diversity whose understanding may lead to the development of models that link different evolutionary scales. Kuroda et al. addressed the question of within-host evolution of influenza for a patient who died of an A/H1N1/2009 infection, but with a focus on the HA segment using a de novo approach. Our approach provides a method to further understand within-host evolution of such viruses. The next section highlights the approach with an overview of the tree and network construction methods on simulated data, followed by an application of the method to a daily sequence of real influenza data. The methods section describes the construction of the trees and networks in more detail.
the experimental study of quantum mechanical systems has made huge progress recently motivated by quantum information science .producing and manipulating many - body quantum mechanical systems have been relatively easier over the last decade .one of the most essential goals in such experiments is to reconstruct quantum states via quantum state tomography ( qst ) .the qst is an experimental process where the system is repeatedly measured with different elements of a positive operator valued measure ( povm ) .most popular methods for estimating the state from such data are : linear inversion , , maximum likelihood , , , , and bayesian inference , , ( we also refer the reader to and references therein ) .recently , different approaches brought up - to - date statistical techniques in this field .the estimators are obtained via minimization of a penalized risk .the penalization will subject the estimator to constraints . in penalty is the von neumann entropy of the state , while , use the penalty , also known as the lasso matrix estimator , under the assumption that the state to be estimated has low rank .these last papers assume that the number of measurements must be minimized in order to recover all the information that we need .the ideas of matrix completion is indeed , that , under the assumptions that the actual number of underlying parameters is small ( which is the case under the low - rank assumption ) only a fraction of all possible measurements will be sufficient to recover these parameters .the choice of the measurements is randomized and , under additional assumptions , the procedure will recover the underlying density matrix as well as with the full amount of measurements ( the rates are within factors slower than the rates when all measurements are performed ) . in this paper, we suppose that a reasonable amount ( e.g. ) of data is available from all possible measurements .we implement a method to recover the whole density matrix and estimate its rank from this huge amount of data .this problem was already considered by gu , kypraios and dryden who propose a maximum likelihood estimator of the state .our method is relatively easy to implement and computationally efficient .its starting point is a linear estimator obtained by the moment method ( also known as the inversion method ) , which is projected on the set of matrices with fixed , known rank .a data - driven procedure will help us select the optimal rank and minimize the estimators risk in frobenius norm .we proceed by minimizing the risk of the linear estimator , penalized by the rank . 
when estimating the density matrix of a -qubits system , our final procedure has the risk ( squared frobenius norm ) bounded by , where between 1 and is the rank of the matrix .the inversion method is known to be computationally easy but less convenient than constrained maximum likelihood estimators as it does not produce a density matrix as an output .we revisit the moment method in our setup and argue that we can still transform the output into a density matrix , with the result that the distance to the true state can only be decreased in the proper norm .we shall indicate how to transform the linear estimator into a physical state with fixed , known rank .finally , we shall select the estimator which fits best to the data in terms of a rank - penalized error .additionally , the rank selected by this procedure is a consistent estimator of the true rank of the density matrix .we shall apply our procedure to the real data issued from experiments on systems of 4 to 8 ions .trapped ion qubits are a promising candidate for building a quantum computer .an ion with a single electron in the valence shell is used .two qubit states are encoded in two energy levels of the valence electrons , see , , .the structure of the paper is as follows .section 2 gives notation and setup of the problem . in section 3we present the moment method .we first change coordinates of the density matrix in the basis of pauli matrices and vectorize the new matrix . we give properties of the linear operator which takes this vector of coefficients to the vector of probabilities .these are the probabilities to get a certain outcome from a given measurement indexed by and that we actually estimate from data at our disposal .we prove the invertibility of the operator , i.e. identifiability of the model ( the information we measure enables us to uniquely determine the underlying parameters ) .section 4 is dedicated to the estimation procedure .the linear estimator will be obtained by inversion of the vector of estimated coefficients .we describe the rank - penalized estimator and study its error bounds .we study the numerical properties of our procedure on example states and apply them to experimental real - data in section 5 .the last section is dedicated to proofs .we have a system of qubits .this system is represented by a density matrix , with coefficients in .this matrix is hermitian , semidefinite positive and has .the objective is to estimate , from measurements of many independent systems , identically prepared in this state . for each system ,the experiment provides random data from separate measurements of pauli matrices on each particle .the collection of measurements which are performed writes where is a vector taking values in which identifies the experiment .the outcome of the experiment will be a vector .it follows from the basic principles of quantum mechanics that the outcome of any experiment indexed by is actually a random variable , say , and that its distribution is given by : where the matrices denote the projectors on the eigenvectors of associated to the eigenvalue , for all from 1 to . 
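As a small numerical illustration of this measurement model, the snippet below builds the eigen-projectors of the chosen Pauli matrices and evaluates the outcome probabilities for a given density matrix. The helper names are ours and the code is only a sketch, not the authors' implementation.

```python
import numpy as np
from itertools import product

PAULI = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def eigenprojectors(sigma):
    """{+1: P_plus, -1: P_minus} for a single-qubit Pauli matrix."""
    vals, vecs = np.linalg.eigh(sigma)
    return {int(round(vals[k])): np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)}

def outcome_distribution(rho, setting):
    """Born-rule probabilities of the 2^n outcomes in {-1,+1}^n for one setting.

    rho is a (2^n, 2^n) density matrix and setting a string such as "xzy"; the
    probability of outcome r is the trace of rho times the tensor product of the
    corresponding eigen-projectors.
    """
    projs = [eigenprojectors(PAULI[axis]) for axis in setting]
    dist = {}
    for r in product((1, -1), repeat=len(setting)):
        proj = np.array([[1.0 + 0j]])
        for single, ri in zip(projs, r):
            proj = np.kron(proj, single[ri])
        dist[r] = float(np.real(np.trace(rho @ proj)))
    return dist

# Sanity check: the 2-qubit maximally mixed state gives probability 1/4 to every outcome.
print(outcome_distribution(np.eye(4) / 4, "xz"))
```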
for the sake of simplicity, we introduce the notation as a consequence we have the shorter writing for : .the tomographic inversion method for reconstructing is based on estimating probabilities by from available data and solving the linear system of equations it is known in statistics as the method of moments .we shall use in the sequel the following notation : denotes the frobenius norm and the operator sup - norm for any hermitian matrix , is the euclidean norm of the vector . in this paper , we give an explicit inversion formula for solving ( [ loi ] ) .then , we apply the inversion procedure to equation ( [ loihat ] ) and this will provide us an unbiased estimator of . finally ,we project this estimator on the subspace of matrices of rank ( between 1 and ) and thus choose , without any a priori assumption , the estimator which best fits the data .this is done by minimizing the penalized risk where the minimum is taken over all hermitian , positive semidefinite matrices .note that the output is not a proper density matrix .our last step will transform the output in a physical state .the previous optimization program has an explicit and easy to implement solution .the procedure will also estimate the rank of the matrix which best fits data .we actually follow here the rank - penalized estimation method proposed in the slightly different problems of matrix regression .this problem recently received a lot of attention in the statistical community and chapter 9 in . here , we follow the computation in . in order to give such explicit inversion formula we first change the coordinates of the matrix into a vector on a convenient basisthe linear inversion also gives information about the quality of each estimator of the coordinates in .thus we shall see that we have to perform all measurements in order to recover ( some ) information on each coordinate of .also , some coordinates are estimated from several measurements and the accuracy of their estimators is thus better . to our knowledge, this is the first time that rank penalized estimation of a quantum state is performed .parallel work of gu _ et al . _ addresses the same issue via the maximum likelihood procedure .other adaptive methods include matrix completion for low - rank matrices and for matrices with small von neumann entropy .note the problem of state tomography with mutually unbiased bases , described in section [ section_notations ] , was considered in refs . . in this section ,we introduce some notation used throughout the paper , and remind some facts that were proved for example in about the identifiability of the model . a model is identifiable if , for different values of the underlying parameters , we get different likelihoods ( probability distributions ) of our sample data .this is a crucial property for establishing the most elementary convergence properties of any estimator .the first step to explicit inversion formula is to express in the -qubit pauli basis .in other words , let us put and .for all , denote similarly to ( [ mes ] ) then , we have the following decomposition : we can plug this last equation into to obtain , for and , finally , elementary computations lead to for any and , while for any , and denotes the kronecker symbol . 
For any , we denote by . The above calculation leads to the following fact, which we will use later. [propprob] For , and , we have . Let us consider, for example, ; then the associated set is empty and is the only probability depending on among the other coefficients. Therefore, only the measurement will bring information on this coefficient. Whereas, if , the set contains 2 points; there are then measurements , ..., that will bring partial information on . This means that a coefficient is estimated with higher accuracy as the size of the set increases. For the sake of brevity, let us put in vector form: our objective is to study the invertibility of the operator . Thanks to fact [propprob], this operator is linear. It can then be represented by a matrix $P = [P_{(\mathbf{r},\mathbf{a}),\,\mathbf{b}}]$, whose rows are indexed by $(\mathbf{r},\mathbf{a}) \in \mathcal{R}^n \times \mathcal{E}^n$ and whose columns are indexed by $\mathbf{b} \in \mathcal{M}^n$. The resulting linear estimator has the following properties: 1. it is unbiased; 2. it has variance bounded as follows ; 3. for any , it satisfies a concentration inequality. Note again that the accuracy for estimating is higher when is large. Indeed, in this case more measurements bring partial information on . The concentration inequality gives a bound on the norm which is valid with high probability. This quantity is related to in a way that will be explained later on. The bound we obtain above depends on , which is expected, as is the total number of parameters of a full-rank system. This factor appears in the Hoeffding inequality that we use in order to prove this bound.

We investigate low-rank estimates of defined in ([mathat]). From now on, we follow closely the results in , which were obtained for a matrix regression model, with some differences as our model is different. Let us, for a positive real value $\nu$, study the estimator
$$\hat{\rho}_{\nu} \in \arg\min_{R}\left[\,\|R-\hat{\rho}\|_F^2 + \nu\,\mathrm{rank}(R)\,\right],$$
where the minimum is taken over all Hermitian matrices. In order to compute the solution of this optimization program, we may write it in a more convenient form, since
$$\min_{R}\left[\,\|R-\hat{\rho}\|_F^2 + \nu\,\mathrm{rank}(R)\,\right] = \min_{k}\;\min_{R:\,\mathrm{rank}(R)=k}\left[\,\|R-\hat{\rho}\|_F^2 + \nu\cdot k\,\right].$$
An efficient algorithm is available to solve the minimization program, namely the spectral-based decomposition algorithm provided in . Let us denote by the matrix such that . Then, from an asymptotic point of view, this corollary means that, if is the rank of the underlying matrix, our procedure is consistent in finding the rank as the number of data per measurement increases. Indeed, as is an upper bound of the norm, it tends to 0 asymptotically, and therefore the assumptions of the previous corollary will be satisfied for . With a finite sample, we deduce from the previous result that actually evaluates the first eigenvalue which lies above a threshold related to the largest eigenvalue of the noise.
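Because the best fixed-rank approximation of a Hermitian matrix is spectral, the penalized criterion above can be minimized by a brute-force search over the candidate rank. The sketch below is our own illustration (with an arbitrary value of the penalty and without the final mapping to a physical state); the next section describes the equivalent but more economical spectral procedure that is actually used.

```python
import numpy as np

def rank_penalized(rho_hat, nu):
    """Minimize ||R - rho_hat||_F^2 + nu * rank(R) over Hermitian R (brute force).

    For each candidate rank k the optimum is the truncated eigendecomposition, so it
    suffices to compare the discarded squared eigenvalues plus nu * k over k.
    Eigenvalues are sorted in decreasing order; the projection onto physical states
    (unit trace, positive semidefinite) is omitted here.
    """
    lam, V = np.linalg.eigh(rho_hat)                 # ascending real eigenvalues
    lam, V = lam[::-1], V[:, ::-1]                   # decreasing order
    crit = [np.sum(lam[k:] ** 2) + nu * k for k in range(len(lam) + 1)]
    k_hat = int(np.argmin(crit))
    R = (V[:, :k_hat] * lam[:k_hat]) @ V[:, :k_hat].conj().T
    return k_hat, R

# Toy check: a rank-2 state plus small Hermitian noise is assigned rank 2 for this nu.
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01, size=(4, 4)) + 1j * rng.normal(scale=0.01, size=(4, 4))
noise = (noise + noise.conj().T) / 2
k_hat, R = rank_penalized(np.diag([0.5, 0.5, 0.0, 0.0]).astype(complex) + noise, nu=0.01)
print(k_hat)   # expected: 2
```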
in this way , we evaluate the performance of the linear estimator and of the rank selector .we then apply the method on real data sets .the algorithm for solving is given in .we adapt it to our context and obtain the simple procedure .* algorithm * : + : the linear estimator and a positive value of the tuning parameter + : an estimation of the rank and an approximation of the state matrix . 1 .compute the eigenvectors $ ] corresponding to the eigenvalues of the matrix sorted in decreasing order .2 . let .3 . for ,let and be the restrictions to their first columns of and , respectively .4 . for ,compute the estimators .compute the final solution , where , for a given positive value , is defined as the minimizer in over of the constant in the above procedure plays the role of the rank and then is the best approximation of with a matrix of rank . as a consequence , this approach provides an estimation of both of the matrix and of its rank by and , respectively .obviously , this solution is strongly related to the value of the tuning parameter . before dealing with how to calibrate this parameter ,let us present a property that should help us to reduce the computational cost of the method .the above algorithm is simple but requires the computation of matrices in step 3 and step 4 .we present here an alternative which makes possible to compute only the matrix that corresponds to , and then reduce the storage requirements .remember that is the value of minimizing the quantity in step 5 of the above algorithm .let be the ordered eigenvalues of . according to ( * ? ? ?* proposition 1 ) , it turns out that is the largest such that the eigenvalue exceeds the threshold : as a consequence , one can compute the eigenvalues of the matrix and set as in .this value is then used to compute the best solution thanks to step 1 to step 4 in the above algorithm , with the major difference that we restrict step 3 and step 4 to only .example data we build artificial density matrices with a given rank in .these matrices are with and 5 .to construct such a matrix , we take as , the diagonal matrix with its first diagonal terms equal , whereas the others equal zero. we aim at testing how often we select the right rank based on the method illustrated in as a function of the rank , and of the number of repetitions of the measurements we have in hand .our algorithm depends on the tuning parameter .we use and compare two different values of the threshold : denote by and the values the parameter provided in theorem [ res ] and corollary [ corbornnudata ] respectively .that is , as established in theorem [ res ] , if the tuning parameter is of order of the parameter , the solution of our algorithm is an accurate estimate of .we emphasize the fact that is nothing but the estimation error of our linear estimator .we study this error below . on the other hand, the parameter is an upper bound of that ensures that the accuracy of estimation remains valid with high probability ( _ cf ._ corollary [ corbornnudata ] ) .the main advantage of is that it is completely known by the practitioner , which is not the case of .* rank estimation . 
*our first goal consists in illustrating the estimation power of our method in selecting the true rank based on the calibrations of given by .we provide some conclusions on the number of repetitions of the measurements needed to recover the right rank as a function of this rank .figure [ figerrorvsrank ] illustrates the evolution of the selection power of our method based on ( blue stars ) on the one hand , and based on ( green squares ) on the other hand .[ figerrorvsrank ] two conclusions can be made .first , the method based on is powerful .it almost always selects the right rank .it outperforms the algorithm based on .this is an interesting observation .indeed , is an upper bound of .it seems that this bound is too large and can be used only for particular settings .note however that in the variable selection literature , the calibration of the tuning parameter is a major issue and is often fixed by cross - validation ( or other well - known methods ) .we have chosen here to illustrate only the result based on our theory and we will provide later an instruction to properly calibrate the tuning parameter .the second conclusion goes in the direction of this instruction . as expected , the selection power of the method ( based on both and ) increases when the number of repetition of the measurements increases . compare the figure for repetitions to the figure for repetitions in figure [ figerrorvsrank ] .moreover , for ranks smaller than some values , the methods always select the good rank . for larger ranks , they perform poorly .for instance with ( a small number of measurements ) , we observe that the algorithm based on performs poorly when the rank , whereas the algorithm based on is still excellent .+ actually , the bad selection when is large does not mean that the methods perform poorly .indeed our definition of the matrix implies that the eigenvalues of the matrix decrease with .they equal to .therefore , if is of the same order as , finding the exact rank becomes difficult since this calibration suggests that the eigenvalues are of the same order of magnitude as the error .hence , in such situation , our method adapts to the context and find the effective rank of .as an example , let consider our study with , and .based on repetitions of the experiment , we obtain a maximal value of equal to .this value is quite close to , the value of the eigenvalues of .this explains the fact that our method based on failed in one iteration ( among ) to find the good rank . in this context is much larger than and then our method does not select the correct rank with this calibration in this setting .+ let us also mention that we explored numerous experiments with other choices of the density matrix .the same conclusion remains valid .when the error of the linear estimator which is given by is close to the square of the smallest eigenvalue of , finding the exact rank is a difficult task .however , the method based on is still good , but fails sometimes .we produced data from physically meaningful states : the ghz - state and the w - state for qubits , as well as a statistical mixture , for and note that the rank of is 4 .[ figopnorm ] * calibration of the tuning parameter . * the quantity seems to be very important to provide a good estimation of the rank ( or more precisely of the effective rank ) .then it is interesting to observe how this quantity behaves .figure [ figopnorm ] ( above and , and middle and ) illustrates how varies when the rank increases . 
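for completeness, the physically meaningful test states mentioned above can be written down in a few lines. the sketch below builds the ghz and w density matrices for a small number of qubits and a generic statistical mixture of them; the weights (and hence the rank) of the mixture actually used in the experiments are not reproduced here, so the 50/50 mixture below is only illustrative.

```python
import numpy as np

def ghz_state(n):
    """|GHZ_n> = (|0...0> + |1...1>) / sqrt(2)."""
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def w_state(n):
    """|W_n> = (|10...0> + |01...0> + ... + |0...01>) / sqrt(n)."""
    psi = np.zeros(2 ** n)
    for i in range(n):
        psi[2 ** i] = 1 / np.sqrt(n)
    return psi

def dm(psi):
    """Density matrix |psi><psi| of a pure state."""
    return np.outer(psi, psi.conj())

n = 4
rho_ghz, rho_w = dm(ghz_state(n)), dm(w_state(n))
rho_mix = 0.5 * rho_ghz + 0.5 * rho_w   # illustrative mixture (not the one of the paper)
print(np.linalg.matrix_rank(rho_mix))   # rank 2 for this particular mixture
```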
except for , it seems that the value of is quite stable .these graphics are obtained with particular values of the parameters and , but similar illustrations can be obtained if these parameters change .+ the main observation according to the parameter is that it decreases with ( see figure [ figopnorm ] - below ) and is actually independent of the rank ( with some strange behavior when ) .this is in accordance with the definition of which is an upper bound of .real - data analysis in the next paragraph , we propose a 2-steps instruction for practitioners to use our method in order to estimate a matrix ( and its rank ) obtained from the data we have in hand with and . *real data algorithm : * + : for any measurement we observe .+ : and , estimations of the rank and respectively .+ the procedure starts with the linear estimator and consists in two steps : _ step a. _ use to simulate repeatedly data with the same parameters and as the original problem .use the data to compute synthetic linear estimators and the mean operator norm of these estimators .they provide an evaluation of the tuning parameter ._ step b. _ find using and construct .we have applied the method to real data sets concerning systems of 4 to 6 ions , which are smolin states further manipulated . in figure [ figeigenvalues ]we plot the eigenvalues of the linear estimator and the threshold given by the penalty . in each case, the method selects a rank equal to 2 .[ figeigenvalues ]we present here a method for reconstructing the quantum state of a system of qubits from all measurements , each repeated times .such an experiment produce a huge amount of data to exploit in efficient way .we revisit the inversion method and write an explicit formula for what is here called the linear estimator .this procedure does not produce a proper quantum state and has other well - known inconvenients .we consider projection of this state on the subspace of matrices with fixed rank and give an algorithm to select from data the rank which best suits the given quantum system .the method is very fast , as it comes down to choosing the eigenvalues larger than some threshold , which also appears in the penalty term .this threshold is of the same order as the error of the linear estimator .its computation is crucial for good selection of the correct rank and it can be time consuming .our algorithm also provides a consistent estimator of the true rank of the quantum system .our theoretical results provide a penalty term which has good asymptotic properties but our numerical results show that it is too large for most examples .therefore we give an idea about how to evaluate closer the threshold by monte - carlo computation .this step can be time consuming but we can still improve on numerical efficiency ( parallel computing , etc . ) . in practice ,the method works very well for large systems of small ranks , with significant eigenvalues . 
indeed , there is a trade - off between the amount of data which will give small estimation error ( and threshold ) and the smallest eigenvalue that can be detected above this threshold .neglecting eigenvalues comes down to reducing the number of parameters to estimate and reducing the variance , whereas large rank will increase the number of parameters and reduce the estimation bias .* acknowledgements : * we are most grateful to mdlin gu and to thomas monz for useful discussion and for providing us the experimental data used in this manuscript .* proof of proposition [ inversion ] * actually , we can compute in case , we have in case , we have either or .if we suppose , indeed , if this is not 0 it means outside the set , that is which contradicts our assumption .if we suppose , we have either on the set and in this case one indicator in the product is bound to be 0 , or we have on the set . in this last case , take in the symmetric difference of sets .then , [ thmtropp ] let , ... , be independent centered self - adjoint random matrices with values in , and let us assume that there are deterministic self - adjoint matrices , ..., such that , for all , is a.s .then , for all , where . we have : note that the , for and , are iid self - adjoint centered random matrices .moreover , we have : this proves that is nonnegative where .so we can apply theorem [ thmtropp ] , we have : and so we put this leads to : * proof of theorem [ res ] * from the definition ( [ pen ] ) of our estimator , we have , for any hermitian , positive semi - definite matrix , we deduce that further on , we have we apply two times the inequality for any real numbers and .we actually use and , respectively , and get by rearranging the previous terms , we get that for any hermitian matrix provided that . by following ,the least possible value for is if the matrices have rank .moreover , this value is obviously attained by the projection of on the space of the eigenvectors associated to the largest eigenvalues .this helps us conclude the proof of the theorem . * proof of corollary [ hatrank ] * recall that is the largest such that .we have now , and .thus , and this is smaller than , by the assumptions of the corollary .
we introduce a new method to reconstruct the density matrix of a system of qubits , and to estimate its rank , from data obtained by quantum state tomography measurements , each repeated several times . the procedure consists in minimizing the risk of a linear estimator of the state penalized by a given rank ( from 1 to the dimension ) , where the linear estimator is obtained beforehand by the moment method . we obtain simultaneously an estimator of the rank and the density matrix associated to this rank . we establish an upper bound ( of explicit order ) for the frobenius - norm error of the penalized estimator , together with consistency of the estimator of the rank . the proposed methodology is computationally efficient and is illustrated on example states and on real experimental data sets .
the study of opinion dynamics has recently started to attract the attention of the control community .this interest is in large part motivated with the bulk of knowledge which has been developed about methods to approximate and stabilize consensus , synchronization , and other coherent states .however , in contrast with many engineering systems , social systems do not typically exhibit a consensus of opinions , but rather a persistence of disagreement , possibly with the formation of opinion parties .it is then essential to understand which features of social systems prevent the formation of consensus . to the authors understanding, scholars have focused on two key reasons : opinion - dependent limitations in the network connectivity and obstinacy of the agents .the first line of research has seen a growth of models involving `` bounded confidence '' between the agents : if the opinions of two agents are too far apart , they do not influence each other .these models typically result into a clusterization of opinions : the agents split into non - communicating groups , and each group reaches an internal consensus .influential models have been defined in , and their understanding has been recently deepened by the control community , which has studied evolutions both in discrete time and in continuous time , possibly including heterogenous agents and randomized updates . although interesting and motivated , these `` bounded confidence '' models do not seem to be sufficient to explain the persistence of disagreement in real societies , in spite of persistent contacts and interactions between agents . instead , a persistent disagreement is more likely a consequence of the agents being unable , or unwilling , to change their opinions , no matter what the other agents opinions are .this observation has been made by social scientists , as in the models introduced in , and more recently by physicists . since this idea has spread to applied mathematics and systems theory , several modelshave already been studied in detail , using techniques from stochastic processes and from game theory . 
following the latter line of research , in this paperwe define a _ gossip dynamics _ such that at each time step a randomly chosen agent updates its opinion to a convex combination of its own opinion , the opinion of one of its neighbors , and its own initial opinion or `` prejudice '' .we show that , although the resulting dynamics persistently oscillates , its average is a stable opinion profile , which is not a consensus .this means that the expected beliefs of an agent will not in general achieve , even asymptotically , an agreement with the other agents in the society .furthermore , we show that the oscillations of opinions are ergodic in nature , so that the averages along sample paths are equivalent to the ensemble averages .our work has been deeply influenced from reading the papers and , which also include agents with prejudices .compared to the former paper , our contribution is a new model of communication between agents and , thus , of opinion evolution : a more precise discussion is given below in section [ sect : relation - gossip - friedkin ] .compared to the latter paper , which also proves an ergodic theorem , we allow the agents to have a continuum of degrees of obstinacy , rather than a dichotomy stubborn / non - stubborn .the qualitative picture , however , shows strong similarities .finally , we point out that we have recently performed a similar analysis of ergodicity for a randomized algorithm , which solves the so - called localization problem for a network of sensors , .we are confident that these techniques may foster the understanding of other randomized algorithms and dynamics , including for instance distributed pagerank computation .sections [ sect : friedkin - dynamics ] and [ sect : gossip - dynamics ] are devoted to present the two models of opinion dynamics which we are interested in : the classical friedkin and johnsen s model and our new gossip algorithm , respectively . for both dynamics, we state a convergence result .section [ sect : analysis ] is then devoted to provide a proof of these statements .a few comments are given in the concluding section .real and nonnegative integer numbers are denoted by and , respectively .we use to denote the cardinality of set , and to denote the euclidean norm .provided is a directed graph with node set and edge set , we define for each node the set of neighbors and the degree .we assume that for all , so that for every .such an edge is said to be a self - loop .we refer the reader to for a broader introduction to graph theory and for related definitions .we consider two models of opinion dynamics : one is the well - known friedkin and johnsen s model , which we describe below , while the other is a related randomized model which we describe in the next section .we consider a set of agents , whose potential interactions are encoded by a directed graph , which we refer to as the _ social network_. each agent is endowed with a state , which evolves in discrete time , and represents its _ belief or opinion_. we denote the vector of beliefs as .an edge means that agent may directly influence the belief of agent . to avoid trivialities , we assume that . 
herewe recall friedkin and johnsen s model and we give a convergence result based on the topology of the underlying social network .let be a nonnegative matrix which defines the strength of the interactions ( if ) and be a diagonal matrix describing how sensitive each agent is to the opinions of the others , based on interpersonal influeneces .we assume that is row - stochastic , i.e. , , where denotes the vector of ones , and we set , where collects the self - weights given by the agents .the dynamics of opinions proposed in is with and .the vector , which corresponds to the individuals preconceived opinions , also appears as an input at every time step .the presence of this input is the main feature of this model , and marks its difference with , for instance , the mentioned models which are based on bounded confidence . as a consequence of , the opinion profile at time is equal to the limit behavior of the opinions is described in the following result .[ prop : convergence - friedkin ] assume that from any node there exists a path from to a node such that .then , the opinions converge and due to the assumption , is a substochastic matrix , that is , a matrix with positive entries which sum to less than or equal to one along each row. then , is substochastic also , and schur stable by lemma [ lemma : substoch_stab ] ( proved in section [ sect : analysis ] ) .thus , the dynamics in ( 1 ) with the constant input term is convergent to .the assumption of the proposition implies that each agent is influenced by at least one stubborn agent . as shown in the proof ,this is sufficient to guarantee the stability of the opinion dynamics . in practice, we expect that in a social network most agents will have some level of obstinacy , thus a positive .let , which is referred to as the total effects matrix in .since is stochastic , we observe that under the assumption of proposition [ prop : convergence - friedkin ] also is stochastic : this means that the limit opinion of each agent is a convex combination of the preconceived opinions of the group .however , we note that is not a `` consensus '' , but a more general opinion profile such that .note that instead a consensus is reached if has zero diagonal ( _ i.e. _ , ) and the graph is aperiodic and has a globally reachable node ; see for instance . here, we briefly describe an example from to illustrate how the opinion dynamics arise in the context of social networks .we consider a group of agents and study how opinions are formed through interactions .the model is an abstraction of experiments conducted and reported in the reference .the general flow of the experiments is as follows : 1 .the agents are presented with an issue ( related to sports , surgery , school , etc . )on which opinions can range , say , from 1 to 100 .each agent forms an initial opinion on the issue .3 . the agents can communicate over phone with other agents ( predetermined by the experiment organizer ) individually to discuss the issue .4 . after several rounds of discussion, they settle on final opinions that may or may not be in agreement .they are also asked to provide estimates of the relative interpersonal influences of other group members on their final opinions . 
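as a quick numerical illustration of the model just recalled, the sketch below iterates the friedkin and johnsen update in its standard form x(k+1) = Λ W x(k) + (I − Λ) u, with x(0) = u, and compares the result with the closed-form limit (I − Λ W)^{-1} (I − Λ) u. the 4-agent matrices are hypothetical placeholders (with agent 3 totally stubborn); they are not the experimental data of the example that follows, so the printed limit is not the reported final opinion vector.

```python
import numpy as np

# hypothetical 4-agent influence data (NOT the experimental matrices of the text)
W = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.2, 0.4, 0.3, 0.1],
              [0.0, 0.0, 1.0, 0.0],    # agent 3 is totally stubborn
              [0.1, 0.2, 0.3, 0.4]])   # row-stochastic interaction weights
Lambda = np.diag(1.0 - np.diag(W))     # susceptibilities: Lambda = I - diag(W)
u = np.array([25.0, 25.0, 75.0, 75.0]) # preconceived opinions, x(0) = u

# Friedkin-Johnsen iteration x(k+1) = Lambda W x(k) + (I - Lambda) u
x = u.copy()
for _ in range(200):
    x = Lambda @ W @ x + (np.eye(4) - Lambda) @ u

# closed-form limit (I - Lambda W)^{-1} (I - Lambda) u
x_lim = np.linalg.solve(np.eye(4) - Lambda @ W, (np.eye(4) - Lambda) @ u)
print(np.round(x, 3), np.round(x_lim, 3))   # the two agree; agent 3 stays at 75
```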
as a simple example, we describe a case with four agents .let the initial and final opinion vectors be ^{\top},\\ x ' & = [ 60~ 60~ 75~ 75]^{\top}.\end{aligned}\ ] ] the matrix which determines the influence network for this group is given by slightly differs from that on page six of because the rows of the latter do not sum exactly to one , due to rounding errors . ] where the entries represent the distribution of relative interpersonal influences on the issue .note that agent 3 in this example is `` totally stubborn '' , meaning that it does not change its opinion at all during the evolution .this matrix is obtained from the experiment data , , and the estimate of the relative interpersonal influences .we take ; the entries represent the agents susceptibilities to interpersonal influence .the off - diagonal entries of are the weights of the influence by the others .for example , shows that the direct relative influence of agent 2 on agent 1 is .120 .the matrix is this matrix indicates the influence of each agent on every other agent in the final opinions through the flow of direct and indirect interactions .for example , shows that almost 55% of the final opinion of agent 2 is determined by agent 3 .the evolution of the opinions is illustrated by the simulations in figure [ fig : fjmodel ] , which respectively plot the state and the corresponding limit point ( marked by blue circles ) . .the opinions converge to the limit ( marked by blue circles ) . ]we propose here a class of randomized opinion dynamics , which translates the idea of friedkin and johnsen s model into a `` gossip '' communication model .the agents beliefs evolve according to the following stochastic update process .each agent starts with an initial belief . at each time a directed link is randomly sampled from a uniform distribution over .if the edge is selected at time , agent meets agent and updates its belief to a convex combination of its previous belief , the belief of , and its initial belief .namely , where the weighting coefficients and satisfy the following assumption .[ assmp : coefficients]let the diagonal matrix be defined by and the matrix defined by .we assume that ( i ) ] , and since if then with . 
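to make the randomized update concrete, the sketch below simulates it on a small complete graph. since the displayed update formula was lost above, the code uses one natural parameterization that is consistent with the verbal description (agent i mixes its current opinion, the opinion of the sampled neighbor j, and its prejudice u_i, with weights governed by h_i and γ); it also tracks the running time average, which the ergodicity result proved below predicts to stabilize at a non-consensus profile.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
edges = [(i, j) for i in range(n) for j in range(n) if i != j]  # complete digraph
u = rng.uniform(0, 100, size=n)      # prejudices / initial opinions
h = np.full(n, 0.8)                  # susceptibilities (h_i < 1: some obstinacy)
gamma = 0.5                          # weight given to the sampled neighbor

x = u.copy()
running_sum = np.zeros(n)
T = 20000
for k in range(T):
    i, j = edges[rng.integers(len(edges))]   # uniformly sampled edge
    # one plausible form of the gossip update: convex combination of x_i, x_j and u_i
    x[i] = h[i] * ((1 - gamma) * x[i] + gamma * x[j]) + (1 - h[i]) * u[i]
    running_sum += x

time_avg = running_sum / T
print(np.round(time_avg, 2))   # stabilizes, but (for generic u) is not a consensus
```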
if is such that , then and from which we observe we deduce that all the entries of are nonnegative .let us compute now \\ & = \frac{1}{h_i}\left[\lambda_{ii}+1-\lambda_{ii}+h_i-1\right]=1.\end{aligned}\ ] ] we conclude that is row - stochastic and has all entries in the interval [ 0,1 ] .the thesis is then obtained by noticing that the expressions in and imply and in words , we may say that , under the assumption that and are chosen as in proposition [ prop : relations - with - friedkin ] , then the expected dynamics is a `` lazy '' ( slowed down ) version of the friedkin and johnsen s dynamics associated to the matrix .hence , theorem [ thm : gossip - opinions ] shows that the average dynamics ] and =\frac1{{|{\mathcal{e}}|}}$ ] , for all .the expected dynamics of is = { \mathds{e}}{[a(k)]}{\mathds{e}}[x(k ) ] + { \mathds{e}}{[b(k)]}u,\ ] ] and the two average matrices can be explicitly computed as follows , provided assumption [ assmp : coefficients ] holds .}&=\frac{1}{|{\mathcal{e}}|}\sum_{(i , j)\in { \mathcal{e}}}a^{(i , j)}\\ & = \frac{1}{|{\mathcal{e}}|}\sum_{i\in \mathcal{v}}\sum_{j\in { \mathcal{n}}_i}\big[i-{\mathrm{e}}_i{\mathrm{e}}_i^\top(i - h)+\gamma_{ij}\left({\mathrm{e}}_i{\mathrm{e}}_j^\top-{\mathrm{e}}_i{\mathrm{e}}_i^\top\right ) \\ & \qquad-(i - h)\gamma_{ij}{\mathrm{e}}_i{\mathrm{e}}_i^\top\left({\mathrm{e}}_i{\mathrm{e}}_j^\top-{\mathrm{e}}_i{\mathrm{e}}_i^\top\right)\big]\\ & = \frac{1}{|{\mathcal{e}}|}\sum_{i\in \mathcal{v}}\sum_{j\in { \mathcal{n}}_i}\big[i-{\mathrm{e}}_i{\mathrm{e}}_i^\top(i - h)+\gamma_{ij}\left({\mathrm{e}}_i{\mathrm{e}}_j^\top-{\mathrm{e}}_i{\mathrm{e}}_i^\top\right ) \\ & \qquad -(i - a)\gamma_{ij}\left({\mathrm{e}}_i{\mathrm{e}}_j^\top-{\mathrm{e}}_i{\mathrm{e}}_i^\top\right)\big]\\ & = \frac{1}{|{\mathcal{e}}|}\sum_{i\in \mathcal{v}}\sum_{j\in { \mathcal{n}}_i}\left[i-{\mathrm{e}}_i{\mathrm{e}}_i^\top(i - h)+h\gamma_{ij}\left({\mathrm{e}}_i{\mathrm{e}}_j^\top-{\mathrm{e}}_i{\mathrm{e}}_i^\top\right)\right]\\ & = i-\frac{1}{|{\mathcal{e}}|}\left[d(i - h)-h\gamma+h\right].\end{aligned}\ ] ] similarly , } & = \frac{1}{|{\mathcal{e}}|}\sum_{(i , j)\in { \mathcal{e}}}b^{(i , j)}\\ & = \frac{1}{|{\mathcal{e}}|}\sum_{i\in \mathcal{v}}\sum_{j\in { \mathcal{n}}_i}{\mathrm{e}}_i{\mathrm{e}}_i^\top(i - h)u\\ & = \frac{1}{|{\mathcal{e}}|}\sum_{i\in \mathcal{v}}| { \mathcal{n}}_i|{\mathrm{e}}_i{\mathrm{e}}_i^\top(i - h)u\\&=\frac{1}{|{\mathcal{e}}|}d(i - h)u.\end{aligned}\ ] ] before showing the stability of the expected dynamics , which is studied in the proposition [ stability ] , we present a technical lemma .although the result is already known , we prefer to include a short proof for completeness . in order to state the lemma, we need some terminology .the graph _ associated _ to a given square matrix is the graph with node set and an edge if and only if .we recall that a matrix is said to be substochastic if it is nonnegative and the entries on each of its rows sum up to no more than one .moreover , every node corresponding to a row which sums to less than one is said to be a _ deficiency _ node .[ lemma : substoch_stab ] consider a substochastic matrix . if in the graph associated to there is a path from every node to a deficiency node , then is schur stable .first note that is substochastic for all .more precisely , if we let to be the set of deficiency nodes of , then for every positive integer . moreover , there exists such that , that is all nodes for are deficiency nodes. 
we can then define .given any , we can write with and integer , and notice that ( provided inequalities are understood componentwise ) .the last inequality implies that converges to as . [ stability ] under assumptions [ assmp : coefficients ] and [ assmp : hm ] , the matrix is schur stable ( _ i.e. _ , it has all eigenvalues in the open unit disk ) .note that if , and . from these formulas , we observe that all entries of are nonnegative .next , we compute note that by the presence of self - loops : consequently if . hence , under assumption [ assmp : hm ] , we have that is a _ substochastic _ matrix corresponding to a graph with a path from any node to a node whose row sums up to less than one . by lemma [ lemma : substoch_stab ]such a matrix is schur stable . as a consequence of this result, we deduce that the matrix is invertible , and then \\=&(i-{\mathds{e}}{[a(k)]})^{-1}{\mathds{e}}{[b(k)]}u\\=&(d(i - h)+h(i-\gamma))^{-1}d(i - h)u . \end{aligned}\ ] ] we have by now completed the proof of the first claim of theorem [ thm : gossip - opinions ] .we are now ready to complete the proof of theorem [ thm : gossip - opinions ] , by showing the ergodicity property .our argument follows the same lines of the convergence results of and .preliminarily , we observe that by the definition of , the opinions are bounded , as they satisfy for all and . in particular , all moments of are uniformly bounded .let now be the error from the limit average , and observe that we thus have + 2 \sum_{\ell=0}^{k } \sum_{\ell = r}^{k-\ell } { \mathds{e}}\left[e(\ell)^{\top } e(\ell+r)\right].\end{aligned}\ ] ] in view of , there exists such that \leq\eta\qquad \forall k.\ ] ] next , we note that & = { \mathds{e}}\left [ { \mathds{e}}\left[e(\ell)^\top e(\ell+r)|x(\ell)\right]\right]\\ & = { \mathds{e}}\left [ e(\ell)^\top { \mathds{e}}\left[e(\ell+r)|x(\ell)\right]\right ] \label{eq : x_l}\\ \nonumber & = { \mathds{e}}\left [ e(\ell)^\top \left ( { \mathds{e}}\left[x(\ell+r ) |x(\ell)\right]-x^{\star } \right ) \right].\end{aligned}\ ] ] by repeated conditioning on we obtain ={\mathds{e}}\left[a(k)\right]^{r } x(\ell)+ \sum_{s=0}^{r-1 } { \mathds{e}}[a(k)]^{s } { \mathds{e}}[b ] u\label{eq : x_l},\end{aligned}\ ] ] and by recalling that is a fixed point for the expected dynamics we get ^{r } x^{\star}+ \sum_{s=0}^{r-1 } { \mathds{e}}[a(k)]^{s } { \mathds{e}}[b(k ) ] u.\ ] ] from equations and we obtain &= { \mathds{e}}\left[e(\ell)^\top { \mathds{e}}\left[a(k)\right]^{r } e(\ell)\right]\\ & \leq{\eta}\rho^r,\end{aligned}\ ] ] where , by lemma [ stability ] , . finally , we have &\leq\frac{\eta}{(k+1)^2}\left(k+1 + 2\sum_{\ell=0}^{k-1}\sum_{r=0}^{k-\ell}\rho^r\right)\\ & \leq\frac{\eta}{(k+1)}\left(1+\frac{2}{1-\rho}\right),\end{aligned}\ ] ] from which we obtain the thesis .in this paper , we have defined a new model of opinion dynamics , within the framework of randomized gossip dynamics .we have shown that , for suitable choices of the update parameters , the well - known friedkin and johnsen s dynamics is equivalent to the average behavior of the dynamics .significantly , the average has a very practical meaning , as the gossip dynamics is ergodic , so that local averages ( computed along time ) match the expectation .we note that recent related works on opinion dynamics with stubborn agents have given intuitive characterizations of the ( average ) limit opinion profile , in terms of harmonic functions or potentials .we leave similar studies on our model as a topic of future research .blondel , v.d . 
hendrickx , j.m . , and tsitsiklis , j.n . continuous - time average - preserving opinion dynamics with opinion - dependent communications . _ siam journal on control and optimization _ , 48(8) , 5214 - 5240 . ravazzi , c. , frasca , p. , tempo , r. , and ishii , h. ( 2013 ) . almost sure convergence of a randomized algorithm for relative localization in sensor networks . in _ ieee conference on decision and control _ . submitted .
in this paper we study a novel model of opinion dynamics in social networks , which has two main features . first , agents asynchronously interact in pairs , and these pairs are chosen according to a random process . we refer to this communication model as `` gossiping '' . second , agents are not completely open - minded , but instead take into account their initial opinions , which may be thought of as their `` prejudices '' . in the literature , such agents are often called `` stubborn '' . we show that the opinions of the agents fail to converge , but persistently undergo ergodic oscillations , which asymptotically concentrate around a mean distribution of opinions . this mean value is exactly the limit of the synchronous dynamics of the expected opinions .
understanding the degree to which rational agents will participate in and contribute to joint projects is critical in many areas of society . with the advent of the internet and the consideration of rationality in the design of multi - agent and peer - to - peer systems, these aspects are becoming of interest to computer scientists and subject to analytical computer science research .not surprisingly , the study of contribution incentives has been an area of vital research interest in economics and related areas with seminal contributions to the topic over the last decades . a prominent example from experimental economicsis the _ minimum effort coordination game _ , in which a number of participants contribute to a joint project , and the outcome depends solely on the minimum contribution of any agent . while the nash equilibria in this game exhibit a quite simple structure , behavior in laboratory experiments led to sometimes surprising patterns see , e.g. , for recent examples . on the analytical sidethis game was studied , for instance , with respect to logit - response dynamics and stochastic potential in . in this paperwe propose and study a simple framework of _ network contribution games _ for contribution , collaboration , and coordination of actors embedded in networks .the game contains the minimum effort coordination game as a special case and is closely related to many other games from the economics literature .in such a game each player is a vertex in a graph , and the edges represent bilateral relationships that he can engage in .each player has a budget of effort that he can contribute to different edges .budgets and contributions are non - negative numbers , and we use them as an abstraction for the different ways and degrees by which actors can contribute to a relationship , e.g. , by allocating time , money , and personal energy to maintaining a friendship or a collaboration . depending on the contribution of the involved actors a relationship will flourish or drown , and to measure the success we use a reward function for each relationship .finally , each player strives to maximize the total success of all relationships he is involved in .a major issue that we address in our games is the impact of collaboration .an incentive for collaboration evolves naturally when agents are embedded in ( social ) networks and engage in relationships .we are interested in the way that a limited collaboration between agents influences properties of equilibria in contribution games like existence , computational complexity , the convergence of natural dynamics , as well as measures of inefficiency . in particular , in addition to unilateral strategy changes we will allow pairs of players to change their strategies in a coordinated manner .states that are resilient against such bilateral deviations are termed _2-strong _ or _ pairwise equilibria _this adjustment raises a number of interesting questions .what is the structure of pairwise equilibria , and what are conditions under which they exist ?can we compute pairwise equilibria efficiently or at least efficiently decide their existence ?are there natural improvement dynamics that allow players to reach a pairwise equilibrium ( quickly ) ?what are the prices of anarchy and stability , i.e. , the ratios of the social welfare of the best possible state over the worst and best welfare of an equilibrium , respectively ?these are the main questions that motivate our study . 
before describing our results, we proceed with a formal introduction of the model .we consider _ network contribution games _ as models for the contribution to relationships in networked environments . in our gameswe are given a simple and undirected graph with nodes and edges .every node is a _ player _ , and every edge represents a _ relationship _ ( collaboration , friendship , etc . ) .a player has a given _ budget _ of the total amount of _ effort _ that it is able to apply to all of its relationships ( i.e. , edges incident to ) .budgets are called _ uniform _ if for any . in this case , unless stated otherwise , we assume that for all and scale reward functions accordingly .we denote by the set of edges incident to . a _ strategy _ for player is a function that satisfies and specifies the amount of effort that puts into relationship .a _ state _ of the game is simply a vector .the success of a relationship is measured by a _ reward function _ , for which and non - decreasing in .the _ utility _ or _welfare _ of a player is simply the total success of all its relationships , i.e. , , so both endpoints benefit equally from the undirected edge . in addition , we will assume that reward functions are symmetric , so for all , and for ease of presentation we will assume they are continuous and differentiable , although most of our results can be obtained without these assumptions .we are interested in the existence and computational complexity of stable states , their performance in terms of social welfare , and the convergence of natural dynamics to equilibrium .the central concept of stability in strategic games is the _ ( pure ) nash equilibrium _ , which is resilient against unilateral deviations , i.e. , a state such that for each and all possible strategies .for the _ social welfare _ of a state we use the natural utilitarian approach and define .a _ social optimum _ is a state with for every possible state of the game .note that we restrict attention to states without randomization and consider only pure nash equilibria .in particular , the term `` nash equilibrium '' will only refer to the pure variant throughout the paper . in games such as ours, it makes sense to consider multilateral deviations , as well as unilateral ones .nash equilibria have shortcomings in this context , for instance for a pair of adjacent nodes who would although being unilaterally unable to increase their utility benefit from cooperating and increasing the effort jointly .the prediction of nash equilibrium that such a state is stable is quite unreasonable .in fact , it is easy to show that when considering pure nash equilibria , the function is an exact potential function for our games .this means that is an optimal nash equilibrium , the price of stability for nash equilibria is always 1 , and iterative better response dynamics converge to an equilibrium .additionally , for many natural reward functions , the price of anarchy for nash equilibria remains unbounded . and and , for some large number . ]following the reasoning in , for example , , we instead consider pairwise equilibrium , and focus on the more interesting case of _ bilateral deviations_. 
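to fix ideas before introducing bilateral deviations, the following minimal sketch encodes a state of a network contribution game and evaluates player utilities and the social welfare. the graph, budgets, reward functions and the particular state are illustrative placeholders, not an example from the text.

```python
# illustrative instance: a path u - v - w with unit budgets
budgets = {"u": 1.0, "v": 1.0, "w": 1.0}
# reward functions f_e(x, y): symmetric, nondecreasing, f_e(0, 0) = 0
rewards = {("u", "v"): lambda x, y: x * y,
           ("v", "w"): lambda x, y: 2 * min(x, y)}

def incident(player):
    return [e for e in rewards if player in e]

def utility(player, state):
    """w_v(s): sum of the rewards of the edges incident to the player."""
    return sum(rewards[e](state[e[0]][e], state[e[1]][e]) for e in incident(player))

def social_welfare(state):
    """utilitarian welfare: sum of all utilities (each edge reward counted twice)."""
    return sum(utility(p, state) for p in budgets)

# a state: how much effort each player puts on each of its incident edges
state = {"u": {("u", "v"): 1.0},
         "v": {("u", "v"): 0.5, ("v", "w"): 0.5},
         "w": {("v", "w"): 1.0}}
print(utility("v", state), social_welfare(state))   # 1.5 and 3.0 for this state
```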
an improving bilateral deviation in a state is a pair of strategies such that and .a state is a _ pairwise equilibrium _ if it is a nash equilibrium and additionally there are no improving bilateral deviations .notice that we are actually using a stronger notion of pairwise stability than described in , since any pair of players can change their strategies in an arbitrary manner , instead of changing their contributions on just a single edge .in particular , in a state a coalition has a _ coalitional deviation _ if the reward of every player in is strictly greater when all players in switch from strategies to . is a _ strong equilibrium _ if no coalition has a coalitional deviation .our notion of pairwise equilibrium is exactly the notion of _ 2-strong equilibrium _ , the restriction of strong equilibrium to deviations of coalitions of size at most 2 .we evaluate the performance of stable states using prices of anarchy and stability , respectively .the _ price of anarchy ( stability ) _ for pairwise equilibria in a game is the worst - case ratio of for the worst ( best ) pairwise equilibrium in this game . for a class of games ( e.g. , with certain convex reward functions ) that have pairwise equilibria ,the price of anarchy ( stability ) for pairwise equilibria is simply the worst price of anarchy ( stability ) for pairwise equilibria of any game in the class .if we consider classes of games , in which existence is not guaranteed , the prices are defined as the worst prices of any game in the class that has pairwise equilibria . note that unless stated otherwise , the terms price of anarchy and stability refer to pairwise equilibria throughout the paper .we already observed above that in every game there always exist pure nash equilibria .in addition , iterative better response dynamics converge to a pure nash equilibrium , and the price of stability for nash equilibria is 1 . the price of anarchy for nash equilibria , however , can be arbitrarily large , even for very simple reward functions .if we allow bilateral deviations , the conditions become much more interesting .consider the effort expended by player on an edge .the fact that is monotonic nondecreasing tells us that increases in .depending on the application being considered , however , the utility could possess the property of `` diminishing returns '' , or on the contrary , could increase at a faster rate as puts more effort on . in other words , for a fixed effort amount , as a function of could be a concave or a convex function , and we will distinguish the treatment of the framework based on these properties .[ tab:1 ] .summary of some of our results for various types of reward functions . for the cases where equilibrium always exists, we also give algorithms to compute it , as well as convergence results .all of our poa upper bounds are tight .( * ) if , , np - hard otherwise .( * * ) if budgets are uniform , np - hard otherwise .[ cols="^,^,^",options="header " , ] in section [ sec : convex ] we consider the case of convex reward functions .for a large class of convex functions defined below ( definition [ def : classc ] ) we can show a tight bound for the price of anarchy of 2 ( theorem [ thm.classc ] ) .however , for games with functions from pairwise equilibria might not exist .in fact , we show that it is np - hard to decide their existence , even when the edges have simple reward functions of either the form or for constants ( theorem [ thm : generalhardness ] ) . 
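checking the definitions above is conceptually simple, although it involves a search over continuous strategy spaces. the brute-force sketch below discretizes each budget on a grid and looks for an improving unilateral or bilateral deviation; it is only meant to illustrate the notion of pairwise equilibrium on a toy instance (the same placeholder instance as above), not to be an efficient procedure.

```python
import itertools

budgets = {"u": 1.0, "v": 1.0, "w": 1.0}
rewards = {("u", "v"): lambda x, y: x * y,
           ("v", "w"): lambda x, y: 2 * min(x, y)}

def incident(p):
    return [e for e in rewards if p in e]

def utility(p, state):
    return sum(rewards[e](state[e[0]][e], state[e[1]][e]) for e in incident(p))

def grid_strategies(p, steps=4):
    """All ways to split the budget of p over its incident edges, on a grid."""
    ed = incident(p)
    levels = [i * budgets[p] / steps for i in range(steps + 1)]
    for split in itertools.product(levels, repeat=len(ed)):
        if sum(split) <= budgets[p] + 1e-9:
            yield dict(zip(ed, split))

def is_pairwise_stable(state, steps=4):
    # no improving unilateral deviation (Nash condition, on the grid)
    for p in budgets:
        for s_p in grid_strategies(p, steps):
            if utility(p, {**state, p: s_p}) > utility(p, state) + 1e-9:
                return False
    # no improving bilateral deviation of a pair of adjacent players
    for (a, b) in rewards:
        for s_a in grid_strategies(a, steps):
            for s_b in grid_strategies(b, steps):
                new = {**state, a: s_a, b: s_b}
                if (utility(a, new) > utility(a, state) + 1e-9 and
                        utility(b, new) > utility(b, state) + 1e-9):
                    return False
    return True

state = {"u": {("u", "v"): 1.0},
         "v": {("u", "v"): 0.0, ("v", "w"): 1.0},
         "w": {("v", "w"): 1.0}}
print(is_pairwise_stable(state))   # True for this toy instance
```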
if , however , _ all _ functions are of the form , then existence and efficient computation are guaranteed .we show this existence result for a substantially larger class of functions that may not even be convex , although it includes the class of all convex functions with ( theorem [ thm.convex ] ) . our procedure to construct a pairwise equilibrium in this case actually results in a strong equilibrium , i.e. , the derived states are resilient to deviations of every possible subset of players .as the prices of anarchy and stability for pairwise equilibria are exactly 2 , they extend to strong equilibria simply by restriction . as an interesting special case, we prove that if all functions are , it is possible to determine efficiently if pairwise equilibria exist and to compute them in polynomial time in the cases they exist ( theorem [ thm : convexexists ] ) .in section [ sec : concave ] we consider pairwise equilibria for concave reward functions .in this case , pairwise equilibria may also not exist . nevertheless ,in the cases when they exist , we can show tight bounds of 2 on prices of anarchy and stability ( theorem [ thm : poaconcave ] ) .sections [ sec : min ] and [ sec : max ] treat different special cases of particular interest . in section [ sec : min ] we study the important case of minimum effort games with reward functions .if functions are convex , pairwise equilibria do not necessarily exist , and it is np - hard to decide the existence for a given game ( theorem [ thm : minhardness ] ) . perhaps surprisingly , if budgets are uniform , i.e. , if for all , then pairwise equilibria exist for all convex functions ( theorem [ thm : convexminexists ] ) , and the prices of anarchy and stability for pairwise equilibria are exactly 2 ( theorem [ thm : poaconvexminuniform ] ) .if functions are concave , we can always guarantee existence ( theorem [ thm : concaveminexists ] ) .our bounds for concave functions in section [ sec : concave ] imply tight bounds on the prices of anarchy and stability of 2 .most results in this section extend to strong equilibria .in fact , the arguments in all the existence proofs can be adapted to show existence of strong equilibria , and tight bounds on prices of anarchy and stability follow simply by restriction . in section [ sec : max ]we briefly consider maximum effort games with reward functions .for these games bilateral deviations essentially reduce to unilateral ones . hence ,pairwise equilibria exist , they can be found by iterative better response using unilateral deviations , and the price of stability is 1 ( theorem [ thm : maxexist ] ) .in addition , we can show that the price of anarchy is exactly 2 , and this is tight ( theorem [ thm : maximum ] ) .sections [ sec : approx ] to [ sec : convergence ] treat additional aspects of pairwise equilibria . in section [ sec : approx ] we consider approximate equilibria and show that a social optimum is always a 2-approximate equilibrium ( theorem [ thm.approxeq ] ) .in section [ sec : convergence ] we consider sequential and concurrent best response dynamics .we show that for general convex functions and minimum effort games with concave functions the dynamics converge to pairwise equilibria ( theorems [ thm : convexconverge ] and [ thm : concaveconverge ] ) . 
for the former we can even provide a polynomial upper bound on the convergence times .note that allmost all of our results on the price of anarchy for pairwise equilibria result in a ( tight ) bound of 2 .this bound of 2 is essentially due to the dyadic nature of relationships , i.e. , the fact that edges are incident to at most two players .the case when edges are projects among arbitrary subsets of actors is termed _ general contribution game _ and treated in section [ sec : general ] .here we consider setwise equilibria , which allow deviations by subsets of players that are linked via a joint project .for some classes of such games we show similar results for setwise equilibria as for pairwise equilibria in network contributions games . in particular , we extend the results on existence and price of anarchy for general convex functions and minimum effort games with convex functions .the price of anarchy for setwise equilibria becomes essentially , where is the cardinality of the largest project .however , many of the aspects of this general case remain open , and we conclude the paper in section [ sec : conclude ] with this and other interesting avenues for further research .the model most related to ours is the co - author model .the motivation of this model is very similar to ours , although there are many important differences .for example , in the usual co - author model , the nodes can not choose how to split their effort between their relationships , only which relationship to participate in .moreover , we consider general reward functions , and as described above , our notion of pairwise stability is stronger than in .our games are potential games with respect to unilateral deviations and can thus be embedded in the framework of congestion games . the social quality of nash equilibrium in non - splittable atomic congestion games , where the quality is measured by social welfare instead of social cost , has been studied in .our games allow players to split their effort arbitrarily between incident edges ( i.e. , they are atomic _ splittable _congestion games ) , and we focus on coalitional equilibrium notions like pairwise stability , not nash equilibrium .in addition , the reward functions ( e.g. , in minimum effort games ) are much more general and quite different from delay functions usually treated in the congestion game literature .in , bramoull and kranton consider an extremely general model of network games designed to model public goods .nevertheless , our game is not a special case of this model , since in the strategy of a node is simply a level of effort it contributes , not how much effort it contributes _ to each relationship ._ there are many extensions to this model , e.g. , corbo et al . consider similar models in the context of contributions in peer - to - peer networks .their work closely connects to the seminal paper on contribution games by ballester et al . , which has prompted numerous similar follow - up studies .the literature on games played in networks is too diverse to survey here we will address only the most relevant lines of research . in the last few years , there have been several fascinating papers on network bargaining games ( e.g. , ) , and in general on games played in networks where every edge represents a two - player game ( e.g. , ) .all these games either require that every node plays the same strategy on all neighboring edges , or leaves the node free to play any strategy on any edge . 
while every edge in our game can be considered to be a ( very simple ) two - player game , the strategies / contributions that a node puts on every edge are neither the same nor arbitrarily different : specifically they are constrained by a budget on the total effort that a node can contribute to all neighboring edges in total . to the best of our knowledge, there have been no contributions ( other than the ones mentioned below ) to the study of games of this type .our game bears some resemblance to network formation games where players attempt to maximize different forms of network centrality , although our utility functions and equilibrium structure are very different .minimum effort coordination games as proposed by van huyck et al . represent a special case of our general model .they are a vital research topic in experimental economics , see the papers mentioned above and for a recent survey .we study a generalized and networked variant in section [ sec : min ] .slightly different adjustments to networks have recently appeared in .our work complements this body of work with provable guarantees on the efficiency of equilibria and the convergence times of dynamics .some of the special cases we consider are similar to stable matching , and in fact correlated variants of stable matching can be considered an `` integral '' version of our game .our results generalize existence and convergence results for correlated stable matching ( as , e.g. , in ) , and our price of anarchy results greatly generalize the results of .it is worth mentioning the connection of our reward functions with the `` combinatorial agency '' framework ( see , e.g. , ) . in this framework, many people work together on one project , and the success of this project depends in a complex ( usually probabilistic ) manner on whether the people involved choose a high level of effort .it is an interesting open problem to extend our results to the case in which every project of a game is an instance of the combinatorial agency problem .a related but much more coordinated framework is studied in charity auctions , which can be used to obtain contributions of rational agents for charitable projects .this idea has been first explored by conitzer and sandholm , and mechanisms for a social network setting are presented by ghosh and mahdian .in this section we start by considering a class of reward functions that guarantee a small price of anarchy .we first introduce the notions of a _ coordinate - convex _ and _ coordinate - concave _ function .[ def.coordinate ] a function is * _ coordinate - convex _ if for all of its arguments , we have that .a function is _ strictly coordinate - convex _ if all these are strict inequalities . 
* _ coordinate - concave _ if for all of its arguments , we have that .a function is _ strictly coordinate - concave _ if all these are strict inequalities .note that _ every convex function is coordinate - convex _ , and similarly every concave function is coordinate - concave .however , coordinate - convexity / concavity is necessary but not sufficient for convexity / concavity .for instance , the function is coordinate - concave , but not concave indeed , it is convex if ] must be the interval of smallest increase , and the interval $ ] is the interval of largest increase .this implies that a contradiction with inequality [ eqn : convexexistence.1 ] .therefore , we only need to address bilateral deviations .suppose that a node is willing to deviate by switching some amount of its effort from edge to edge as part of a bilateral deviation with .we can assume w.l.o.g .that was the first edge of that was added to , and so . for to be willing to deviate , it must be that inequality [ eqn : convexexistence.1 ] is satisfied .the rest of the argument proceeds as before .theorem [ thm.convex ] establishes existence and efficient computation of equilibria for many functions from class .in particular , it shows existence for all convex functions that are 0-valued when one of its arguments is 0 , as well as for many non - convex ones , such as the weighted product function .in fact , when considering deviations of arbitrary coalitions of players , then it is easy to verify that the player of the coalition incident to the edge with maximum possible reward ( of all edges incident to the players in the coalition ) does not make a strict improvement in the deviation . thus , as a corollary we get existence of strong equilibria .[ cor.convex.strong ] a strong equilibrium always exists and can be computed efficiently when for all . in general , we can show that deciding existence for pairwise equilibria for a given game is np - hard , even for very simple reward functions from such as and with constants .[ thm : generalhardness ] it is np - hard to decide if a network contribution game admits a pairwise equilibrium even if all functions are either or .( 0,0)(200,90 ) ( 30,60)x ( 30,60)4a ( 38,60) ( 50,85)4b ( 50,93) ( 70,60) ( 70,60)4c ( 62,60) ( 100,35) ( 100,35)4d ( 100,27) ( 130,60)x ( 130,60)4e ( 138,60) ( 150,85)4f ( 150,93) ( 170,60) ( 170,60)4 g ( 162,60) ( 80,10) ( 80,10)4h ( 72,10) ( 120,10) ( 120,10)4i ( 128,10) ( 100,39)(100,55 ) ( 70,56)(70,45 ) ( 67,57)(60,50 ) ( 30,56)(30,45 ) ( 27,57)(20,50 ) ( 130,56)(130,45 ) ( 170,56)(170,45 ) ( 167,57)(160,50 ) ( 173,57)(180,50 ) we reduce from 3sat as follows .we consider a 3sat formula with variables and clauses .for each clause we insert the game of example [ exm : noeq ] . 
for each variablewe introduce three players as follows .one is a _ decision player _ that has budget 1 .he is connected to two _ assignment players _, one true player and one false player .both the true and the false player have a budget of .the edge between decision and assignment players has .finally , each assignment player is connected via an edge with to the player of every clause triangle , for which the corresponding clause has an occurrence of the corresponding variable in the corresponding form ( non - negated / negated ) .suppose the 3sat instance has a satisfying assignment .we construct a pairwise equilibrium as follows .if the variable in the assignment is set true ( false ) , we make the decision player contribute all his budget to the edge to the false ( true ) assignment player .this assignment player will contribute his full budget to , because has steeper slope than , which is the maximum slope attainable on the edges to the triangle gadgets .it is clear that none of these players has an incentive to deviate ( alone or with a neighbor ) .the remaining set of assignment players can now contribute their complete budget towards the triangle gadgets .as the assignment is satisfying , every triangle player of the triangle gadgets has at least one neighboring assignment player in .we now create a maximum bipartite matching between players in and the players of the triangles .we then extend this and connect the remaining ( if any ) triangle players arbitrarily to assignment players from .this creates a one - to - many matching of triangle players to players in , with every triangle player being matched to exactly one player in , and some players in possibly unmatched .we set each triangle player to contribute all of his budget towards his edge in the matching .each assignment player splits his effort evenly between the incident edges in the matching ; if the assignment player is unmatched his strategy can be arbitrary . in thismatching , each matched assignment player can get up to matching edges .as the triangle players contribute all their budget to their matching edge , then each edge in the matching yields a reward of , with being the contribution of the assignment player . by splitting his budget evenly, the assignment player contributes at least to each matching edge .also he receives reward exactly , which is the maximum achievable for a player in ( given that the decision player does not contribute to the incident edge ) .thus , every matched assignment player in is stable and will not join a bilateral deviation . consider a triangle player . as the assignment playerhe is matched to contributes at least ( we assume w.l.o.g . ) , the reward function on the matching edge grows at least as quickly as , i.e. 
, with a larger slope than the maximum slope achievable on the triangle edges .in addition , the reward for by contributing all budget to the matching edge is at least 9 .note that the maximum payoff that he can obtain by contributing only to triangle edges is 8 , and therefore he has no incentive to join other triangle players in a deviation .note that could potentially achieve higher revenue by deviating with a different assignment player in .however , as noted above no matched assignment player has an incentive to deviate jointly with .hence , can only join an unmatched assignment player .this is only a profitable deviation if currently shares his current assignment player with at least one other triangle player .however , the possibility that could deviate to such an unmatched assignment player contradicts the fact that we created a maximum matching between assignment and triangle players .thus , will also stick to his strategy choice , and has no incentive to participate in bilateral or unilateral deviations .we can stabilize the remaining pairs of triangle players by assigning an effort of 1 towards their joint edge .finally , the unmatched assignment players in are stable since their reward is always 0 : no player adjacent to puts any effort on edges incident to , and no player adjacent to is willing to participate in a bilateral deviation due to the arguments above. now suppose there is a pairwise equilibrium .note first that the decision player will always contribute his full budget , and there is always a positive contribution of at least one assignment player towards the decision player edge otherwise there is a joint deviation that yields higher reward for both players .in particular , the decision player contributes only to edges , where the maximum contribution of the assignment players is located .as the decision player contributes his full budget , there is at least one incident edge that grows at least as quickly as in the contribution of the assignment player .hence , at least one assignment player will be motivated to remove all contributions from the edges to the triangle players , as these edges grow at most by in the contribution of the assignment player .he will instead invest all of his budget towards the decision player .this implies that every pairwise equilibrium must result in a decision for the variable , i.e. , if the ( false ) true assignment players contributes all of his budget towards the decision player , the variable is set ( true ) false .if both players do this , the variable can be chosen freely .as there is a stable state , the contributions of the remaining assignment players must stabilize all triangle gadgets .in particular , this means that for each clause triangle there must be at least one neighboring assignment player that does not contribute all of his budget towards his decision player .this implies a satisfying assignment for the 3sat instance .finally , let us focus on an interesting special case .the hardness in the previous theorem comes from the interplay of reward functions that tend to a clustering of effort and that create cycles .we observed above that if all functions are , then equilibria exist and can be computed efficiently .here we show that for the case that for all , we can decide efficiently if a pairwise equilibrium exists . 
furthermore ,if an equilibrium exists , we can compute it in polynomial time .[ thm : convexexists ] there is an efficient algorithm to decide the existence of a pairwise equilibrium , and to compute one if one exists , when all reward functions are of the form for arbitrary constants .moreover , the price of anarchy is 1 in this case .let be the set of socially optimum solutions .these are exactly the solutions where every player puts effort only on edges with maximum . are exactly the solutions that are stable against unilateral deviations , which immediately tells us that if a pairwise equilibrium exists , then the price of anarchy is 1 .not all solutions in are stable against bilateral deviations , however .denote .let be the set of edges incident to with value . in any unilaterally stable solution, a node must put all of its effort on edges in .we first show how to determine if a pairwise stable solution exists if the only edges in the graph are .consider an edge such that but ( call such an edge `` type 1 '' ) . then in any pairwise stable solution, the node must contribute _ all _ of its effort to edge .otherwise and could deviate by adding some amount to , and adding to .this is possible for small enough , and would improve the reward for both ( by ) and ( by ) .therefore , for every edge of this type , we can fix the contributions of node , since they will be the same in any stable solution . if the same node has two or more such incident edges , then by the above argument we immediately know that there does not exist any pairwise equilibrium .now consider edges which are in , which implies that . for any such edge , either or in any pairwise equilibrium .if this were not the case , then both and could add some amount of effort to and benefit from this deviation by amount .consider a connected component consisting of such edges .we can use simple flow or matching arguments to find if there exists an assignment of nodes to edges such that every edge has at least one adjacent node assigned to it .we then set if node is assigned to edge .we also make sure not to assign a node that already used its budget on a type 1 edge to _ any _ edge in this phase .as argued above , if such an assignment does not exist , then there is no pairwise equilibrium .conversely , any such assignment yields a pairwise equilibrium , since for every edge , at least one of the endpoints of this edge is using all of its effort on this edge .thus we are able to determine exactly when pairwise equilibria exist on the set of edges .call the set of such solutions .all that is left to check is if one of these solutions is stable with respect to bilateral deviations on edges .if one solution in is a pairwise equilibrium in the entire graph , then all of them are , since when moving effort onto an edge , a node does not care which edge it removes the effort from : all the edges with positive effort have the same slope . 
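the assignment step described above is a bipartite matching problem: every edge of the component must receive one of its endpoints committed to it, each available node can be assigned to at most one edge, and nodes that already spent their budget on a type 1 edge are unavailable. the following is a minimal sketch of such a check in python; the instance data, the `blocked` set, and the function name are illustrative assumptions, not part of the construction above.

```python
# a minimal sketch of the flow/matching check referred to above: decide whether
# every listed edge can be assigned its own available endpoint. the instance,
# the "blocked" set (nodes already committed to a type 1 edge) and the function
# name are illustrative assumptions.

def assignable(edges, blocked=frozenset()):
    """edges: list of (u, v) pairs; returns an edge -> node assignment that
    covers every edge with a distinct node, or None if no such assignment
    (and hence no pairwise equilibrium on these edges) exists."""
    owner = {}                            # node -> index of the edge it covers

    def cover(i, seen):
        for v in edges[i]:
            if v in blocked or v in seen:
                continue
            seen.add(v)
            # v is free, or the edge currently owning v can be re-covered elsewhere
            if v not in owner or cover(owner[v], seen):
                owner[v] = i
                return True
        return False

    for i in range(len(edges)):
        if not cover(i, set()):
            return None
    return {edges[i]: v for v, i in owner.items()}

triangle = [("a", "b"), ("b", "c"), ("a", "c")]
print(assignable(triangle))                  # every edge gets its own endpoint
print(assignable(triangle, blocked={"c"}))   # None: no feasible assignment
```

for larger instances one would use hopcroft-karp or a flow computation, as suggested in the proof; the augmenting-path search above implements the same idea in its simplest form.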
to verify that a pairwise equilibrium exists ,we simply consider every edge with reward function , and check if .we claim that a pairwise equilibrium exists iff this is true for all edges .consider a bilateral deviation onto edge where contributes effort and contributes effort .this would be an improving deviation exactly when and .fix to be some arbitrarily small value ; then there exists satisfying the above conditions exactly when , which is true exactly when , as desired .this section is devoted to proving the following theorem .[ thm.classc ] for the class of network contribution games with reward functions for all that have a pairwise equilibrium , the prices of anarchy and stability for pairwise equilibria are exactly 2. we will refer to an edge as being _ slack _ if and , _ half - slack _ if but , and_ tight _ if and .we will call a solution _ tight _ if it has only tight edges .[ claim.optintegral ] if all reward functions belong to class , then there always exists a tight optimum solution .if all reward functions belong to class , then all optimum solutions are tight .let be a solution with maximum social welfare , and let node be a node that uses non - zero effort on two adjacent edges : and . for simplicity, we will denote by and by .furthermore , we denote by , by , by , by .the fact that switching an amount of effort from to or from to does not increase the social welfare means that : and that we know from coordinate - convexity that .therefore , we have that this is not possible if is in , giving us a contradiction , and completing the proof for . for ,this tells us that moving its effort from one of these edges to the other will not change the social welfare , and so we can create an optimum solution with one less half - slack edge by setting . we can continue this process to end up with a tight optimum solution , as desired .theorem [ thm.classc ] let be a pairwise stable solution , and an optimum solution . by claim [ claim.optintegral ]we can assume that is tight .define to be the reward of edge in , and to be the reward of in .recall that for a node , the utility of is .let be an arbitrary tight edge in . if and , then consider the bilateral deviation from where both and put all their effort on edge .since is pairwise stable , there must be some node ( wlog node ) such that .make this node a witness for edge .if instead and , then consider the unilateral deviation from where puts all its effort on edge . since is stable against unilateral deviation , then .make this node a witness for edge .notice that every node can be a witness for at most one edge , since for a node to be a witness to edge , it must be that .therefore , we know that . since the total social welfare in is exactly , we know that the price of anarchy for pairwise equilibria is at most 2 . finally , let us establish tightness of this bound .consider a path of four nodes with uniform budgets and edges , and .the reward functions are and . and achieve their maximum reward by contributing their full budget to , hence they will apply this strategy in every pairwise equilibrium .this leaves no reward for and and gives a total welfare of .if the players contribute only to and , the total welfare is .hence , the price of stability for pairwise equilibria is at least 2 , which matches the upper bound on the price of anarchy . 
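to make the tightness example concrete, the following is one possible instantiation in python. since the exact reward functions are not reproduced here, the chosen functions ( ( 1 + eps ) times the product of the two efforts on the inner edge, the plain product on the outer edges ) and the unit budgets are assumptions picked to be consistent with the argument above.

```python
# an assumed instantiation of the path lower bound: nodes a - b - c - d with
# unit budgets, inner edge reward (1 + eps) * x_b * x_c, and outer edge
# rewards x_a * x_b and x_c * x_d.
eps = 0.01

def welfare(inner_b, inner_c):
    """social welfare (sum of player utilities) when b and c put inner_b and
    inner_c of their unit budget on the inner edge; the leaves a and d always
    spend their whole budget on their only edge."""
    inner = (1 + eps) * inner_b * inner_c
    outer = 1.0 * (1.0 - inner_b) + 1.0 * (1.0 - inner_c)
    return 2 * (inner + outer)     # each edge reward counts for both endpoints

print(welfare(1.0, 1.0))   # ~2.02: welfare when b and c commit to the inner edge
print(welfare(0.0, 0.0))   # 4.0 : welfare when all effort goes to the outer edges
```

under these assumed functions, b and c committing to the inner edge is the only stable outcome, and as eps tends to zero the ratio of the two printed welfare values approaches 2, matching the tightness claim.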
for completeness, we also present a result similar to claim [ claim.optintegral ] for pairwise equilibrium solutions. [ claim.stableintegral ] if all reward functions belong to class , with every reward function having the property that , then for every pairwise equilibrium there exists a pairwise equilibrium of the same welfare without slack edges. let be a pairwise stable solution, and suppose it contains a slack edge . this means that nodes and have other adjacent edges where they are contributing non-zero effort. let those edges be and . for simplicity, we will denote by , by , and by . furthermore, we denote by , by , by , by , and for and accordingly. for any value , it must be that cannot unilaterally deviate by moving effort from to , or from to . therefore, we know that , and that . since and are coordinate-convex, however, we know that ( and similarly for ), which implies that the above inequalities hold with equality. specifically, it implies that for both and , increasing s effort by causes the same difference in utility as decreasing it by . this simply quantifies the fact that for a node to put effort on more than one edge in a stable solution, it should be indifferent between those two edges. for any , consider the pairwise deviation where and both move amount of effort to from and . suppose w.l.o.g. that node is not willing to deviate in this manner because it does not increase its utility. this means that by the above argument about s unilateral deviations, we know that , and so we have that . since , this also implies that . we now create solution by having node move effort from edge to . we will prove below that is also a pairwise stable solution of the same welfare as . however, it has strictly more effort on edge . we can continue applying the same arguments for edge until is no longer a slack edge. this process only decreases the number of slack edges, since we only remove effort from edges that are slack or half-slack, and in the latter case we remove the effort from the `` slack '' direction, so the edge remains half-slack afterwards. all that is left to prove is that is pairwise stable and of the same welfare as . to see that the welfare is the same, notice that we can use for node the same arguments for unilateral deviations that we applied to . therefore, we know that . therefore, the reward of all edges in , and thus the utility of all nodes, is the same as in , so the social welfare of both is the same. we must now prove that is pairwise stable. any possibly improving deviation would have to include one of the edges or , since for all other edges the effort levels are the same in and . first consider deviations ( unilateral or bilateral ) including node . any such deviation would also be a valid deviation in , since only the strategy of node has changed. therefore, none of these deviations can be improving. next consider any unilateral deviation by where adds some effort to edge . for this to be a strictly improving deviation, it must be that . consider instead a bilateral deviation from where plays the new strategy ( i.e., adds to ), while deviates by moving from to .
in this deviation , strictly benefits since it ends in the same configuration as above , and strictly benefits since it loses utility and gains utility .therefore , this contradicts being pairwise stable .next consider a deviation from where removes some amount from .for this deviation to be profitable in , but not profitable in , it must be that this contradicts the fact that .finally , consider deviations by node .if deviates and removes some amount from , then this is profitable only if recall , however , that so the above implies that , which is impossible since is nondecreasing in both its arguments .if adds effort to in its deviation , then it can not possibly be more profitable than the same deviation in , since is using less effort on in than in , and the utility of edge is the same in both .this finishes the proof .if all reward functions belong to class , then all pairwise equilibria have only tight edges .in the proof of claim [ claim.stableintegral ] , we saw that if a node is putting non - zero effort on two edges , then it must be that for some edge and values , we have that .this is not possible for , since is strictly convex in each of its arguments .if all reward functions belong to class , then all strict pairwise equilibria ( where every player has a unique unilateral best response ) have only tight edges . in the proof of claim [ claim.stableintegral ], we saw that if a node is putting non - zero effort on two edges , then its utility does not change by moving some amount of effort from one of these edges to the other .this is not possible in a strict pairwise equilibrium , since then this node would have a deviation that does not change its utility .in this section we consider the case when reward functions are concave .it is simple to observe that a pairwise equilibrium may not exist . consider a triangle graph with three players , uniform budgets , and for all edges .every player has an incentive to invest his full budget due to monotonic increasing functions .due to concavity each player will even out the contributions according to the derivatives .thus , the only candidate for a pairwise equilibrium is when all players put 0.5 on each incident edge .it is , however , easy to see that this state is no pairwise equilibrium .although we might have no pairwise equilibrium , we obtain the following general result for games with concave rewards that have a pairwise equilibrium .[ thm : poaconcave ] for the class of network contribution games with concave reward functions for all that have a pairwise equilibrium , the price of anarchy for pairwise equilibria is at most 2 .consider a social optimum , and the effort used by node on edge in this solution .let be a pairwise equilibrium , with the effort used by on edge in .for an edge , let be its reward in , and be its reward in . we will now attempt to charge to . for any node ,define to be the set of edges incident to where contributes strictly more in than in , i.e. , where .similarly , define to be the set of edges where .let be the set of edges with strictly higher reward in than in ( ) , and be the rest of the edges in the graph , with reward in at least as high as in .furthermore , define and .in other words , is the set of edges with higher reward in where both players contribute more in than in , and is the set of edges where only one player does so .similarly , define and .since reward functions are monotone , every edge must appear in exactly one of , , , or . 
in the following proof, we will first show that any edge in can be _ assigned _ to one of its endpoints ( say ) such that would never gain in deviating from by removing effort from edges in and making its contribution to equal to , even if did the same. this means that the utility node loses from setting its contribution to instead of on all edges of is at least as much as the difference in utility in versus in on all the edges of assigned to . we then sum up these inequalities, which lets us bound the reward on edges where is better than by the reward on edges where is better than . we now proceed with the proof as described. let be an arbitrary edge of , so . since is nondecreasing, this implies that or , i.e., at least one of or has strictly lower effort on in than in . consider the deviation from to another state where and increase their contributions to to the same level as in , i.e., a state yields and . this may be either a bilateral or a unilateral deviation, depending on whether one of or holds. note that there is actually an entire set of such states, as we did not specify from where players and potentially remove effort to be able to achieve the increase. observe, however, that no other player changes his strategy, i.e., . since is an equilibrium, it must be that for at least one of or the deviation to _ every possible _ such state is unprofitable. without loss of generality, say that this player is , so for every state , and we say that we _ assign _ edge to node . note that this implies that , i.e., , since otherwise all edges incident to would have the same reward in every as in , except for the edge , which would have reward in , strictly greater than . since , there is an increase in due to of at least . however, as every deviation to a state is unprofitable for , it must be that removing effort in _ any arbitrary way _ from other edges incident to and adding it to would not increase s utility. therefore, we know that, in particular, removing effort from edges decreases the reward of those edges by at least . denote this amount by , i.e., is the minimum amount that would decrease if in state player removed any amount of effort from edges in . we have now proven that for any , we can assign it to one of its endpoints ( say ), such that . we can now sum these inequalities for every edge. consider the sum of just the inequalities corresponding to the edges assigned to a fixed node ( call this set of edges ). then we have that $\leq \sum_{e\in a(v)}\chi_v(\delta_v(e))$. how does compare to ? since all the functions are concave, it is easy to see that removing effort from edges will decrease the reward of these edges by at least as much as the sum of and . therefore, we know that . since is the extra effort of on edge in compared to , the sum of for the edges equals . thus, is at most the utility lost by if, starting at state , would set its contribution to instead of on all edges of . for an edge , this is at most , since even after lowering s contribution to , the reward of this edge is at least . for an edge or , this is still at most . noticing that an edge of cannot be in , we now have that $+ \sum_{e\in s^v\cap(s_1\cup o_1)}w_e(s)$. putting this all together, we obtain that $\leq \sum_{e\in s^v\cap s_2}[w_e(s)-w_e(s^*)] + \sum_{e\in s^v\cap(s_1\cup o_1)}w_e(s)$. summing up these inequalities for all nodes, we obtain a way to bound the reward on edges where is better than by the reward on edges where is better than .
since the same edge could be in both and , it may be used in the above sum twice. notice, however, that any edge in or will only appear in this sum once, since it will belong to of exactly one node. thus, we obtain that $\leq 2\sum_{e\in s_2}[w_e(s)-w_e(s^*)] + \sum_{e\in s_1\cup o_1}w_e(s)$. adding in the edges of , and recalling that all edges are in exactly one of , , , or , gives us the desired bound : looking carefully at the proof of the previous theorem yields the following result ( c.f. definition [ def.coordinate ] ). [ cor : poaconcave ] for the class of network contribution games with coordinate-concave reward functions for all that have a pairwise equilibrium, the price of anarchy for pairwise equilibria is at most 2. in this section we consider the interesting case ( studied for example in ) when all reward functions are of the form . in other words, the reward of an edge depends only on the minimum effort of its two endpoints. in our treatment we again distinguish between the case of increasing marginal returns ( convex functions ) and diminishing marginal returns ( concave functions ). note that in this case bilateral deviations are in many ways essential to make the game meaningful, as there is almost always an infinite number of nash equilibria. in addition, we can assume w.l.o.g. that in every pairwise equilibrium there is a unique value for each such that . the same can be assumed for optima. we begin by showing a simple yet elegant proof based on linear programming duality that shows a price of anarchy of 2 when all functions are linear with slope . we include this proof to highlight that duality is also used in theorem [ thm : poaconvexminuniform ] for convex functions and uniform budgets. [ thm : minlinear ] the prices of anarchy and stability for pairwise equilibria in games with all functions of the form are exactly 2. we use linear programming duality to obtain the result. consider an arbitrary pairwise equilibrium and an optimum . note that the problem of finding can be formulated as the following linear program, with variables representing the minimum contribution to edge : the lp-dual of this program is . now consider the pairwise equilibrium and a candidate dual solution composed of if a player contributes all of his budget in ; this is the average payoff per unit of effort. note that is a feasible primal, and , but is not a feasible dual solution. now suppose that for an edge both incident players and have . then both incident players can either move effort from an edge with below-average payoff to , or invest some of their remaining budget on . this increases both their payoffs and contradicts that is stable. thus, for every edge there is a player with . thus, by setting we obtain a feasible dual solution with profit of twice the profit of . the upper bound follows by standard duality arguments. it is straightforward to derive a tight lower bound on the price of stability using a path of length 3 and functions and , in a similar fashion as presented in theorem [ thm.classc ] previously. in this section we consider reward functions with convex functions . this case bears some similarities with our treatment of the class in section [ sec : convex ]. in fact, we can show existence of pairwise equilibria in games with uniform budgets. we call an equilibrium _ integral _ if for all . [ thm : convexminexists ] a pairwise equilibrium always exists in games with uniform budgets and when all are convex.
if all are strictly convex, all pairwise equilibria are integral. we first show how to construct a pairwise equilibrium. the proof is basically again an adaptation of the `` greedy matching '' argument that was used to show existence for general convex functions in theorem [ thm : convexexists ]. in the beginning all players are asleep. we iteratively wake up the pair of sleeping players that achieves the highest revenue on a joint edge and assign them to contribute their total budget towards this edge. the algorithm stops when there is no pair of incident sleeping players. suppose for contradiction that the resulting assignment is not a pairwise equilibrium. first consider a bilateral deviation, where a pair of players can profit from re-assigning some budget to an edge . by our algorithm at least one of the players incident to is awake. consider the incident player that was woken up earlier. if it is profitable for him to remove some portion of effort from an edge to , this implies . however, our choices imply . convexity yields and , and results in a contradiction. this implies that the algorithm computes a stable state with respect to bilateral deviations. as for unilateral deviations, no player would ever add any effort to an edge where the other endpoint is putting in zero effort. however, if a player unilaterally re-assigns some budget to an edge from edge , with still being asleep at the end of the algorithm, then this implies that and that . this gives a contradiction by the same argument as above. if all functions are strictly convex, then for all . in this case we show that every stable state is integral, i.e., we have . suppose to the contrary that there is an equilibrium with for . let be an edge with the largest value such that . for player , let with be other incident edges of such that . then, because of strict convexity, we have . this means has an incentive to move all of his effort to if does the same. by the same argument, also has an incentive to move all its effort to . thus, the bilateral deviation of and moving their effort to is an improving deviation for both and , so we have a contradiction to being stable. [ thm : poaconvexminuniform ] the prices of anarchy and stability for pairwise equilibria in network contribution games are exactly 2 when all reward functions with convex , and budgets are uniform. consider a stable solution and an optimum solution . for a vertex we consider the profit and denote this by . for every edge , consider the case when both players invest the full effort. due to convexity . suppose for both players . then there is a profitable switch by allocating all effort to . this implies that and thus . thus, we can bound . on the other hand . hence , and the price of anarchy is 2. note that this is tight for functions that are arbitrarily convex. the example is a path of length 3 similar to theorem [ thm.classc ] and theorem [ thm : minlinear ]. we use for the inner edge and for the outer edges, and the price of stability becomes arbitrarily close to 2. for the case of arbitrary budgets and convex functions, however, we can again find an example that does not allow a pairwise equilibrium. [ exm : minnoeq ] our example game consists of a path of length 3. we denote the vertices along this path with , , , . all players have budget 2, except for player that has budget 1. the profit functions are , , and .
observe that this game allows no pairwise equilibrium : if , then player has an incentive to increase the effort towards .if , then player has an incentive to increase effort towards .if , both and can jointly increase their profits by contributing on . using this examplewe can construct games in which deciding existence of pairwise equilibria is hard .[ thm : minhardness ] it is np - hard to decide if a network contribution game admits a pairwise equilibrium if budgets are arbitrary and all functions are with convex .we reduce from 3sat and use a similar reduction to the one given in theorem [ thm : generalhardness ] .an instance of 3sat is given by variables and clauses .for each clause we construct a simple game of example [ exm : minnoeq ] that has no stable state .for each variable we introduce three players as follows .one is a _ decision player _ that has budget .he is connected to two _ assignment players _, one true player and one false player .both these players have also a budget of .the edge between decision and assignment players has .finally , each assignment player is connected via an edge with to the node of every clause path , for which the corresponding clause has an occurrence of the corresponding variable in the corresponding form ( non - negated / negated ) .note that the connecting player is the only player with budget 1 in the clause path .suppose the 3sat instance has a satisfying assignment .we construct a stable state as follows .if the variable is set true ( false ) , we make the decision player contribute all his budget to the edge to the false ( true ) assignment player .both assignment player and decision player are motivated to contribute their full budget to , because is the maximum profit that they will ever be able to obtain .clearly , none of these players has an incentive to deviate ( alone or with a neighbor ) .the remaining set of assignment players can now contribute their complete budget towards the clause gadgets .as the assignment is satisfying , every node of the clause gadgets has at least one neighboring assignment player in .we create a maximum bipartite matching of clause players to players in and match the remaining clause players ( if any ) to players from arbitrarily .each clause player contributes all of his budget towards his edge in this one - to - many matching .each assignment player splits his effort evenly between the incident edges in the matching .note that the players from the clause gadgets now receive profit 7 , which is the maximum achievable .thus , they have no incentive to deviate .hence , no player in has a profitable unilateral or a possible bilateral deviation .finally , we obtain a stable state in the clause gadgets by assigning all players and to contribute 2 to .now suppose there is a stable state .note first that the decision player and one incident assignment player can and will obtain their maximum profit by contributing their full budgets towards a joint edge otherwise there is a joint deviation that yields higher profit for both players .hence , this assignment player will not contribute to edges to the clause gadget players .this implies a decision for the variable , i.e. 
, if the ( false ) true assignment player contributes all of his budget towards the decision player, the variable is set ( true ) false. as there is a stable state, the contributions of the remaining assignment players must stabilize all clause gadgets. in particular, this means that for each clause triangle there must be at least one neighboring assignment player that does not contribute towards his decision player. this implies that the assignment decisions made by the decision players must be satisfying for the 3sat instance. the construction of example [ exm : minnoeq ] and the previous proof can be extended to show hardness for games with uniform budgets in which functions are either concave or convex. in games with uniform budgets and functions with monotonic increasing it is np-hard to determine if a pairwise equilibrium exists. we use the same approach as in the previous proof ; however, we assign each player a budget of . for each of the players , , , and in a clause gadget we introduce players , , and . is only connected to , only to , and similarly for and . the edges , and have profit function for and otherwise. similarly, we use for and otherwise for . it is easy to observe that in every pairwise equilibrium players and will contribute towards their joint edge. this holds accordingly for every other pair of players , and . the remaining budgets of the players are the budgets used in example [ exm : minnoeq ] above and lead to the same arguments in the reduction outlined above. finally, we observe that the existence result in theorem [ thm : convexminexists ] extends to strong equilibria. in particular, whenever we consider a deviation by a coalition of players, the rewards of the players incident to the highest reward edge do not strictly improve under the deviation. in addition, the prices of anarchy and stability are 2 because our lower bound examples continue to hold for strong equilibria, while the upper bounds follow by restriction. [ cor : convexminstrong ] a strong equilibrium always exists in games with uniform budgets and when all are convex. if all are strictly convex, all strong equilibria are integral. the prices of anarchy and stability for strong equilibria in these games are exactly 2. in this section we consider the case of diminishing returns, i.e., when all are concave functions. note that in this case the function is coordinate-concave. therefore, the results from section [ sec : concave ] show that the price of anarchy is at most 2. however, for general coordinate-concave functions it is not possible to establish the existence of pairwise equilibria, which we do for concave below. in fact, if the functions are strictly concave, we can show that the equilibrium is unique. [ thm : concaveminexists ] a pairwise equilibrium always exists in games with when all are continuous, piecewise differentiable, and concave. it is possible to compute pairwise equilibria efficiently within any desired precision. moreover, if all are strictly concave, then this equilibrium is unique. first, notice that we can assume without loss of generality that for every edge , the function is constant for values greater than . this is because it will never be able to reach those values in any solution. we create a pairwise equilibrium in an iterative manner. for any solution and set of nodes , define as the set of best responses for node _ if it can control the strategies of nodes . _ we begin by computing independently for each player ( is the set of all nodes ).
in particular, this simulates that is the player that always creates the minimum of every edge, and we pick such that it maximizes . this is a concave maximization problem ( or equivalently a convex minimization problem ), for which it is possible to find a solution by standard methods in time polynomial in the size of , the encoding of the budgets, and the number of bits of precision desired for representing the solution. for background on efficient algorithms for convex minimization see, e.g., . let be the derivative of in the positive direction, and be the derivative of in the negative direction. we have the property that for calculated as above, for every edge with it holds that for every edge incident to . define as the minimum value of for all edges incident to with . our algorithm proceeds as follows. at the start all players are asleep, and in each iteration we pick one player to wake up. let denote the set of sleeping players in iteration , and the set of awake players ; in the beginning . we will call edges with both endpoints asleep _ sleeping _ edges, and all other edges _ awake _ edges. in each iteration, we pick one player to wake up and fix its contributions on all of its adjacent edges. in particular, we choose a node with the currently highest derivative value ( see below for the tie-breaking rule ). we set s contribution to an edge to , where . define as the set of best responses in for which for all awake edges . for , player exactly matches the contributions of the awake nodes on all awake edges between and . by lemma [ lem : matchingcontribution ] below, is non-empty, and our algorithm sets the contributions of to . moreover, we set the contribution of other sleeping players to be on the sleeping edges , so we assume fully matches s contribution on edge . by lemma [ lem : matchingcontribution ], will not change its contributions on these edges when it is woken up. thus, in the final solution output by the algorithm will receive exactly the reward of . now that node is awake, we compute for all sleeping , as well as new values , and iterate. note that values in later iterations are defined as the minimum derivative values on all the _ sleeping _ edges neighboring , not on all edges. to summarize, each iteration of the algorithm proceeds as follows :
* for every , compute .
* for every , set to be the minimum value of for all _ sleeping _ edges incident to with .
* choose a node with maximum ( using the tie-breaking rule below ), fix s strategy to be , and set .
to fully specify the algorithm, we need to define a tie-breaking rule for choosing a node to wake up when there are several nodes with equal values. let that we compute . our goal is that for every edge with , we choose node such that . we claim that we can always find a node such that this is true with respect to all its neighbors. suppose a node has two edges and with and but . lemma [ lem : samederivative ] below implies that the functions on and are linear in this range. specifically, lemma [ lem : samederivative ] implies that because , with the inequality being true because is concave. hence, can move some amount of effort from to and still form a best response.
continuing in this manner, we can find another best response in for such that has contributions that are either more than both its neighbors, or less than both its neighbors. this implies that there exists with such that for all neighbors , and therefore our tie-breaking is possible. [ lem : samederivative ] consider two nodes and and an edge , and let and be the best responses computed in our algorithm. suppose that . then it must be that either , or . if edge is the edge which achieves the minimum value , then we are done, since then . therefore, we can assume that another edge with achieves this value, so . the fact that we cannot increase s reward by assigning more effort to edge means that . since is concave, we know that , which is at least by its definition. this proves that . if this is a strict inequality, then we are done. the only possible way that is if , as desired. first we will prove that our algorithm forms a feasible solution, i.e., that the budget constraints are never violated. to do this, we must show that when the node is woken up and sets its contribution on a newly awake edge , the other sleeping player must have enough available budget to match . in that our algorithm computes , let be the available budget of node , that is, the budget minus the requested contributions on awake edges. this is the maximum amount that node could assign to . for contradiction, assume that , so our assignment is infeasible. then it must be that , since is concave. by definition of , we know that , and so . now let be the edge that achieves the value , i.e., . if , then , so cannot be a best response, since could earn more reward by switching some amount of effort from to . therefore, we know that . if this is a strict inequality, then we have a contradiction, since would have been woken up before . therefore, it must be that . but this contradicts our tie-breaking rule : we would choose before because it puts less effort onto edge in our choice from than does in . therefore, our algorithm creates a feasible solution. [ lem : matchingcontribution ] for every node and all until node is woken up, there is a best response in that exactly matches the contributions of the awake nodes. in other words, is non-empty. we prove this by induction on ; this is trivially true for . suppose this is true for , and let be the node that is woken up in the iteration, with an existing edge , so that . let be s best response, which exists by the inductive hypothesis. first, we claim that . to see this, notice that if , then by lemma [ lem : samederivative ] we know that . if this is a strict inequality, then we immediately get a contradiction, since we picked to wake up because it had the highest value. if , this contradicts our tie-breaking rule, since would be woken up first for contributing less to edge . consider the computation of from s point of view. is deciding how to allocate its budget among incident edges in order to maximize its reward. by putting effort onto an edge with , will obtain reward , since can control the strategy of , and so will make it match the contribution of on edge . if instead , then by putting effort onto , will only obtain reward , since the strategy of is already fixed and cannot change it. then is simply the set of budget allocations of that maximizes the sum of the above reward functions. now consider the computation of and compare it to . the only difference is that cannot control the node when computing , i.e.
, by putting effort onto edge , node will only obtain reward , instead of . if , then as well as in , since the computations of and only differ in the reward function of edge , and can not gain any utility by putting more than effort onto edge in . matches all the contributions of nodes in ( including ) , and so is non - empty .suppose instead that .now , let be a strategy of created from as follows .remove effort from edge by setting , and add effort to the other edges of in the optimum way to maximize s utility in .it is easy to see that this is a best response in , since a best response in is simply obtained by repeatedly adding effort to the edges with highest derivative .moreover , matches the contributions on all edges to , so once again we know that is non - empty. re - number the nodes in the order that we wake them .we need to prove that the algorithm computes a pairwise equilibrium . by lemma [ lem : matchingcontribution ] , we know that all the contributions in the final solution are symmetric , and that node gets exactly the reward in the final solution . to prove that the above algorithm computes a pairwise equilibrium , we show by induction on that node will never have incentive to deviate , either unilaterally or bilaterally .this is clearly true for , since it obtains the maximum possible reward that it could have in _ any _ solution , which proves the base case .we now assume that this is true for all nodes earlier than , and prove it for as well .it is clear that would not deviate unilaterally , since it is getting the reward of .this is at least as good as any best response when it can not control the strategies of any nodes except itself . by the inductive hypothesis , would not deviate bilaterally with a node such that . would also not deviate bilaterally with a node such that , since when forming node can set the strategy of node .so in node achieves a reward better than any deviation possible with nodes from .this completes the proof that our algorithm always finds a pairwise equilibrium .now we will consider the case when all are _ strictly _ concave , and prove that there is a unique pairwise equilibrium .consider the algorithm described above .it is greatly simplified for this case : since all are strictly concave , then consists of only a single strategy , and by lemma [ lem : matchingcontribution ] , this strategy is also in .we claim that when this algorithm assigns a strategy to a node , then must have this strategy in _ every _ pairwise equilibrium .we will prove this by induction , so suppose this is true for all nodes earlier than , but there is some pairwise equilibrium where does not use the strategy . since , then there must be some edge such that . if is a node considered earlier than , then by the inductive hypothesis , we know that . is the unique best response of if it were able to control the strategies of nodes in .this means that the gain that could obtain by moving some small amount of effort to edge is greater than the loss that it would obtain from removing effort from any edge to a node of , and so would have a unilateral deviation in . if instead , then the only way that it would not benefit to move some effort onto is if .since was chosen by the algorithm before , we know that it would always benefit to move some effort onto edge in this case , since the derivative it would encounter there is higher than encounters on any other edge . thus , there exists a bilateral deviation where both and move some effort onto edge . 
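the iterative construction from the proof above can be sketched in code. the discretized python version below replaces the exact concave maximization by a greedy allocation in small steps; the three-node instance, the square-root reward functions, and the step size are illustrative assumptions, and the feasibility of letting sleeping neighbors match relies on the argument given in the proof.

```python
import math

# a discretized sketch of the iterative "wake-up" construction, for rewards
# w_e = f_e(min(x_u(e), x_v(e))) with concave f_e. instance data, step size
# and the greedy inner loop are illustrative assumptions.
budget = {"a": 1.0, "b": 1.0, "c": 1.0}
f = {("a", "b"): lambda t: 2.0 * math.sqrt(t),
     ("a", "c"): lambda t: 1.5 * math.sqrt(t),
     ("b", "c"): lambda t: 1.0 * math.sqrt(t)}
STEP = 1e-3

def incident(v):
    return [e for e in f if v in e]

def other(e, v):
    return e[0] if e[1] == v else e[1]

def greedy_best_response(v, cap):
    """approximate best response of v: repeatedly put a small chunk of budget
    on the incident edge with the largest marginal reward; cap[e] is the effort
    an awake neighbour already offers on e (infinite if the neighbour is still
    asleep and would match v)."""
    alloc = {e: 0.0 for e in incident(v)}
    left = budget[v]
    while left > 1e-9:
        step = min(STEP, left)
        gains = {e: f[e](min(alloc[e] + step, cap[e])) - f[e](min(alloc[e], cap[e]))
                 for e in incident(v)}
        e_star = max(gains, key=gains.get)
        if gains[e_star] <= 0:
            break
        alloc[e_star] += step
        left -= step
    return alloc

asleep, effort = set(budget), {e: {u: 0.0 for u in e} for e in f}
while asleep:
    plans, score = {}, {}
    for v in asleep:
        cap = {e: (math.inf if other(e, v) in asleep else effort[e][other(e, v)])
               for e in incident(v)}
        plans[v] = greedy_best_response(v, cap)
        # d_v: smallest marginal slope among sleeping edges with positive effort
        slopes = [(f[e](plans[v][e]) - f[e](plans[v][e] - STEP)) / STEP
                  for e in incident(v)
                  if other(e, v) in asleep and plans[v][e] > STEP]
        score[v] = min(slopes) if slopes else -math.inf
    w = max(asleep, key=score.get)          # wake the node with the largest d_v
    for e, x in plans[w].items():           # freeze w's contributions ...
        effort[e][w] = x
        if other(e, w) in asleep:           # ... and let sleeping neighbours match
            effort[e][other(e, w)] = x
    asleep.remove(w)

print({e: round(min(x.values()), 3) for e, x in effort.items()})
```

an exact implementation would solve each concave maximization directly by standard convex optimization methods, as noted in the proof; the greedy water-filling loop is only a simple stand-in for that step.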
for the case of strong equilibria, we observe that the arguments for existence can be adapted, while the upper bounds of 2 on the price of anarchy translate by restriction. in particular, consider a pairwise equilibrium as described in the proof of theorem [ thm : concaveminexists ]. resilience to coalitional deviations can be established in exactly the same way as above, i.e., the player from the coalition that was the first to be woken up has no incentive to deviate. [ cor : concaveminstrong ] a strong equilibrium always exists in games with when all are continuous, piecewise differentiable, and concave. it is possible to compute strong equilibria efficiently within any desired precision. moreover, if all are strictly concave, then this equilibrium is unique. in this section we briefly consider for arbitrary monotonic increasing functions . our results rely on the following structural observation. [ lem : maxlemma ] if there is a bilateral deviation that is strictly profitable for both players, then there is at least one player that has a profitable unilateral deviation. suppose the bilateral deviation decreases the maximum effort on the joint edge. in this case, both players must receive more profit from other edges. this increase, however, can obviously also be realized by each player himself. suppose the bilateral deviation increases the maximum effort on the joint edge. then the player setting the maximum effort on the edge can obviously also do the corresponding strategy switch by himself, which yields the same outcome for him. it follows that a stable solution always exists, because the absence of unilateral deviations implies that the state is also a pairwise equilibrium. furthermore, the total profit of all players is a potential function of the game with respect to unilateral better responses. [ figure : a path example with on the outer edges and on the inner edge . the inner players have budget 1, the outer players budget 0. if both inner players contribute to the outer edges, their utility is 2 ; if they both move all their effort to the inner edge, their utility becomes 3. note, however, that the social welfare decreases from 8 to 6. ] this implies that the social optimum is a stable state and the price of stability is 1. [ thm : maxexist ] a pairwise equilibrium always exists in games with and arbitrary monotonic increasing functions. the price of stability for pairwise equilibria is 1. we can also easily derive a tight result on the price of anarchy for arbitrary functions. [ thm : maximum ] the price of anarchy for pairwise equilibria in network contribution games with and arbitrary monotonic increasing functions is at most 2. this bound is tight for arbitrary convex functions. for an upper bound on the social welfare of the social optimum , consider each player and suppose that he optimizes his effort independently. this yields a reward . clearly, . to see this, notice that in we can assume that every edge has contribution from only one direction. let be the edges to which contributes in . in this case, s reward from these edges is at most . the reward of the other nodes because of these edges is also at most . therefore, in total . on the other hand, in any pairwise equilibrium player will not accept less profit than , because by a unilateral deviation he can always achieve ( at least ) the maxima used to optimize . thus, , and we have that . tightness follows from the following simple example.
the graph is a path with four nodes , for .the interior players have budget 1 , the leaf players have budget 0 .the edges and have an arbitrary convex functions , the remaining edge has , for an arbitrarily small .a pairwise equilibrium evolves when player spends his effort on and on .this yields a total profit of .the optimum evolves if contributes on and on with total profit of .we showed above several classes of functions for which pairwise equilibrium exists , and the price of anarchy is small .if we consider _ approximate _ equilibria , however , the following theorem says that this is always the case . by an -approximate equilibrium , we will mean a solution where nodes may gain utility by deviating ( either unilaterally or bilaterally ) , but they will not gain more than a factor of utility because of this deviation .[ thm.approxeq ] in network contribution games an optimum solution is a 2-approximate equilibrium for any class of nonnegative reward functions . first , notice that is always stable against unilateral deviations .this is because when a node changes the effort it allocates to its adjacent edges unilaterally , then the only nodes affected are neighbors of . if is the change in node s reward because of its unilateral deviation , then the total change in social welfare is exactly .therefore , no node can improve their reward in using unilateral deviations .now consider bilateral deviations , and assume for contradiction that nodes and have a bilateral deviation by adding some amounts and to edge , which increases their rewards by more than a factor of 2 .let and be the rewards of and in not counting edge .we denote by and the same rewards after and deviate by adding effort to , and therefore possibly taking effort away from other adjacent edges .in other words , the reward of before the deviation is , and after the deviation it is .note that this change can not increase over , therefore , we know that on the other hand , since both and must improve their reward by more than a factor of 2 , we know that and adding the last two inequalities together , we obtain that which implies that , a contradiction .in this section we consider the convergence of round - based improvement dynamics to pairwise equilibrium .perhaps the most prominent variant is best response , in which we deterministically and sequentially pick one particular player or a pair of adjacent players in each round and allow them to play a specific unilateral or bilateral deviation . while convergence of such dynamics is desirable , a drawback is that convergence could rely on the specific deterministic sequence of deviations . herewe will consider less demanding processes that allow players or pairs of players to be chosen at random to make deviations , and we even allow concurrent deviations of more than one player or pair .we consider random best response , where we randomly pick either a single player or one pair of adjacent players in each round and allow them to play a unilateral or bilateral deviation . in each round, we make this choice uniformly at random , i.e , a specific pair of players gets the possibility to make a bilateral deviation with probability . 
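as an illustration, the sketch below simulates the random best response process just described on a small assumed instance with rewards equal to the product of the two efforts scaled by an edge weight; the instance, the fixed number of rounds, and the equal probability of a unilateral versus a bilateral round are assumptions made only for this example.

```python
import random

# illustrative simulation of random best response on an assumed instance with
# rewards c_e * x_u(e) * x_v(e) and uniform budgets.
random.seed(0)
budget = {v: 1.0 for v in "abcd"}
c = {("a", "b"): 1.0, ("b", "c"): 3.0, ("c", "d"): 1.0, ("a", "d"): 2.0}
effort = {e: {v: 0.0 for v in e} for e in c}          # x_v(e)

def incident(v): return [e for e in c if v in e]
def other(e, v): return e[0] if e[1] == v else e[1]
def reward(e): return c[e] * effort[e][e[0]] * effort[e][e[1]]
def utility(v): return sum(reward(e) for e in incident(v))

def unilateral_best_response(v):
    # with multiplicative rewards a best response concentrates the whole
    # budget on the incident edge with the largest c_e * (neighbour's effort)
    best = max(incident(v), key=lambda e: c[e] * effort[e][other(e, v)])
    for e in incident(v):
        effort[e][v] = budget[v] if e == best else 0.0

def bilateral_deviation(e):
    u, v = e
    joint = c[e] * budget[u] * budget[v]              # reward if both commit fully to e
    if joint > utility(u) and joint > utility(v):     # both must strictly improve
        for x in (u, v):
            for g in incident(x):
                effort[g][x] = budget[x] if g == e else 0.0

for _ in range(2000):                                 # random rounds
    if random.random() < 0.5:
        unilateral_best_response(random.choice(sorted(budget)))
    else:
        bilateral_deviation(random.choice(sorted(c)))

print({e: round(reward(e), 2) for e in c})   # b, c pair up on their weight-3 edge, a, d on the weight-2 edge
```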
in concurrent best response, each player decides independently whether he wants to deviate unilaterally or picks a neighbor for a bilateral deviation. obviously, a bilateral deviation can be played if and only if both players decide to pick each other. hence, in a given round a player decides to play a unilateral deviation with probability , where deg is the degree of . a pair of players makes a bilateral deviation with probability . note that in both dynamics, in expectation, after a polynomial number of rounds each single player or pair of players gets the chance to play a unilateral or bilateral deviation. the name `` best response '' in our dynamics needs some more explanation for bilateral deviations, because for a pair of players a particular joint deviation might result in the best reward for but not for . in fact, there might be no joint deviation that is simultaneously optimal for both players. in this case the players should agree on one of the pareto-optimal alternatives. in this section we consider special kinds of dynamics, which resolve this issue in an intuitive way. the intuition is that if two players decide to play a bilateral deviation, then these strategies should also be unilateral best responses. we assume that players do not pick bilateral deviations in which they would change the strategies unilaterally. more formally, we capture this intuition by the following definition. a _ bilateral best response _ for a pair of players in a state is a pair of strategies that is
* a profitable bilateral deviation, i.e., , and , and
* a pair of mutual best responses, i.e., for every strategy of player , and similarly for .
note that, in principle, there might be states that allow a bilateral deviation but admit no bilateral best response. the set of states resilient to unilateral and bilateral best responses is a superset of pairwise equilibria. hence, it might not even be obvious that dynamics using only bilateral best responses converge to pairwise equilibria. our results below, however, show that the latter is true in many of the games for which we showed existence of pairwise equilibria above.

general convex functions

for games with strictly coordinate-convex functions, the concept of bilateral best response reduces to a simple choice rule. in this case, a unilateral best response of every player places the entire player budget on a single edge. this implies that there is no bilateral best response where players split their efforts. thus, bilateral best responses come in three different forms, in which ( 1 ) both players, ( 2 ) only one of them, or ( 3 ) none of them allocate their effort towards the joint edge. consider two incident players and in state , connected by edge . to compute a bilateral best response of one of the forms mentioned above, we proceed in two phases.
in the first phase , we try forms ( 2 ) and ( 3 ) and remove all contributions from .then player independently picks a unilateral best response under the assumption that .note that in case of equal reward a player always prefers to put the effort on , because this might attract the other player to put effort on as well and , by convexity , increase their own reward even further .similarly , we do this for player .this yields a pair of `` virtual '' best responses under the condition that the other player does not contribute to .now we have to check whether this is a bilateral best response .in particular , if only one of the players puts effort on , by convexity it might become a unilateral best response for the other player to put his effort on as well .if this is the case , the computed state is not a pair of mutual best responses , thus the most profitable candidate for a bilateral best response is of form ( 1 ) .if this is not the case , then by convexity one player is not willing to contribute to at all .hence , his virtual best response is a unilateral best response even though the other player contributes to .for the other player , this means that the assumption made for the virtual best response are satisfied , hence , we have found a set of mutual best responses of the form ( 2 ) or ( 3 ) .however , in this case , this set might not be a bilateral best response because the players do not improve over their current reward .we thus also check , whether a state of form ( 1 ) is a better set of mutual best responses .hence , we consider the state of form ( 1 ) and the resulting reward for each player. each reward must be at least as large as that from the virtual best responses , because otherwise the state does not represent a set of mutual best responses .if this is the case , we accept and as our candidate for the bilateral best response .otherwise , we use the pair of virtual best responses . note that our algorithm computes in each case the most profitable candidate for a bilateral best response , and always finds a bilateral best response if one exists .this can be verified directly for each of the cases , in which there is a bilateral best response of forms ( 1 ) , ( 2 ) , or ( 3 ) .[ thm : convexconverge ] random and concurrent best response dynamics converge to a pairwise equilibrium in a polynomial number of rounds when all reward functions are strictly coordinate - convex and for all and .let us consider the edges in classes of their ( c.f .proof of theorem [ thm.convex ] ) .in particular , our analysis proceeds in phases . in phase 1 ,we restrict our attention to the first class of edges with the highest and the subgraph induced by these edges .consider one such edge and suppose both players contribute their complete budgets to .they are never again willing to participate in a bilateral deviation ( not only a bilateral best response ) , because by strict convexity they achieve the maximum possible revenue .we will call such players _ stabilized_. consider a first class edge where strategies and . 
in this case, strict convexity, when , and maximality of imply that is a unilateral best response, independently of the current . if both players have and , then the same argument implies that both players allocating their full budget is a bilateral best response, again independently of what the current strategies of the players are. note that each bilateral best response of two destabilized players enlarges the set of stabilized players. phase 1 ends when there are no adjacent destabilized players with respect to first class edges, and this obviously takes only an expected number of time steps that is polynomial in . after phase 1 has ended, we know that the stabilized players are never going to change their strategy again. hence, for the purpose of our analysis, we drop the edges between stabilized players from consideration. the same can be done for all edges incident to exactly one stabilized player , by artificially reducing s budget to and noting that . if there are remaining destabilized players, phase 2 begins, and we consider only the remaining players and the edges among them. in this graph, we again consider only the subgraph induced by edges with highest . again, we have the property that any pair of players contributing their full budget to such an edge is stabilized. additionally, the same arguments show that for destabilized players there are always unilateral and/or bilateral best responses that result in investing the full budget, irrespective of the current strategy. hence, after expected time polynomial in , phase 2 ends and expands the set of stabilized players by at least 2. repeated application of this argument shows that after expected time polynomial in , either all players are stabilized or the remaining subgraph of destabilized players is empty. in this case, a pairwise equilibrium is reached. in particular, using unilateral and bilateral best responses suffices to stabilize all but an independent set of players. it is easy to observe that stabilized players have no profitable unilateral or bilateral deviations. possibly remaining destabilized players in the end are only adjacent to stabilized players and therefore have no profitable unilateral or bilateral deviations either. thus, our dynamics converge to a pairwise equilibrium in expected polynomial time. this proves the theorem.

minimum effort games and convex functions

in this section we show that there are games with infinite convergence time of random and concurrent best response dynamics, although in each step bilateral best responses are unique and can be found easily. there are minimum effort games that have convex functions, uniform budgets, and starting states from which no dynamics using only bilateral best responses converges to a stable state. [ figure fig : noconverge : two paths of length 4 on the nodes a, b, c, d and e, f, g, h, together with an additional player i . ] we consider two paths of length 4 as in the games of example [ exm : minnoeq ] and introduce a new player as shown in figure [ fig : noconverge ]. all players have budget 2. in our starting state we assign all incident players to contribute 1 to the edges and . this yields a maximum revenue of 2000 for . as long as this remains the case, will never participate in a bilateral deviation.
in turn, in every unilateral best response players and will match the contribution of towards their joint edges. note that this essentially creates the budget restriction for the -players that is necessary to show non-existence of a pairwise equilibrium in example [ exm : minnoeq ]. it remains to show that we can implement the cycling of the dynamics in terms of bilateral best responses. for this, note that for player it is always a unilateral best response to match any contribution of on their joint edge ( similarly for and ). the same is true for ; he will match the contribution of up to an effort of 1. finally, the joint deviations of players and are bilateral best responses as well. this implies that the cycling dynamics outlined above in example [ exm : minnoeq ] remain present when we restrict to bilateral best responses. thus, no stable state can be reached. observe that in this game there are sequences of bilateral deviations that converge to a pairwise equilibrium, but these bilateral deviations are not bilateral best responses. consider an arbitrary cycling sequence of bilateral deviations from our starting state, and assume w.l.o.g. that the cycling dynamics happen on the upper path in figure [ fig : noconverge ]. then at some point we will see a bilateral deviation of and in which, on their joint edge, contributes 1 and increases his effort. this creates a strict improvement of utility for both of them. note that a bilateral deviation allows both and to change their strategies in an arbitrary manner. thus, while increasing his contribution towards , can also simultaneously decrease his contribution towards . if the decrease is sufficiently small, the increase in reward on the edge to outweighs the decrease in reward on the edge to . in this way, the deviation still generates a strict improvement of utility for . hence, both and make a strict improvement although decreases the contribution towards by a tiny amount, so this represents a profitable bilateral deviation ( but obviously not a best response ). afterwards, the balance for is broken, and and have a bilateral deviation to put all effort on their joint edge. this quickly leads to a pairwise equilibrium. naturally, the argument works symmetrically for . however, such an evolution is quite unreasonable, as it is always in the interest of the -players to keep their contribution towards as high as possible.

minimum effort games and concave functions

for concave functions , we can use the following simple rule to find a bilateral best response. consider two incident players and in state , connected by edge . in the first phase we consider each player independently and compute unilateral best responses and under the assumption that the other player would match his contribution on . then we fix the strategy of the player for which . in the state , player is perfectly happy with his choice and would not participate in any bilateral deviation. however, player might be willing to deviate, so we recalculate a unilateral best response for under the condition that . note that, due to concavity of the functions, has a unilateral best response that matches . this yields a pair of mutual best responses : has the best possible utility ( even if it were able to control s strategy ), and has the best possible utility given s strategy.
as usual , the players switch to if and only if it is a profitable bilateral deviation . [thm : concaveconverge ] random and concurrent best response dynamics converge to a pairwise equilibrium when all reward functions are with differentiable and strictly concave .we measure progress in terms of the derivatives of the edges . for a state consider an edge with highest derivative , where .obviously , for any other edge , so in any unilateral or bilateral best response the incident players will not try to remove effort from this edge once .edge is _ stabilized _ if there is a player with for every edge and spending all his budget , i.e. , . in this case, no player will remove effort from , but at least one player has no interest in increasing effort on .we now consider the dynamics starting in a state and the set of non - stabilized edges with maximum derivative among non - stabilized edges .suppose that a bilateral deviation results in a reduction of the minimum effort on any edge to a value with .this is a contradiction to being the currently highest derivative value and the deviation being composed of mutual unilateral best responses .hence , the value will never increase over the run of the dynamics . as an edge with highest derivativeis not stabilized , both incident players have other incident edges with strictly smaller derivative . hence , if they play a bilateral best response , they strictly increase effort on while strictly decreasing effort on other edges . by strict concavitythis implies that after the step .in addition , both players picking a best response means that the derivative of all edges that were previously lower than now remain at most .this means that no new edge with derivative value is created , but is removed .thus , in each such step we either increase the number of stabilized edges , or we decrease the number of edges of highest derivative among non - stabilized edges .as such a step is played after a finite number of steps in expectation , this argument proves convergence .it remains to show that the resulting state , in which all edges are stabilized , is resilient to all unilateral and bilateral deviations and not only against the type of bilateral best responses we used to converge to it .here we can apply an inductive argument similar to theorem [ thm : concaveminexists ] that no profitable bilateral deviation exists and the state is indeed a pairwise equilibrium .note that the argument simplifies quite drastically for the case of _ strictly _ concave functions .in particular , we consider the edge with maximum derivative . 
for at least one incident player ,all edges with positive minimum effort have the same derivative , hence this player will never change his strategy .in addition , the other adjacent players have an incentive to keep their efforts on the edges with .thus , we can remove , reduce the budgets of incident players and iterate .this proves the theorem .the previous proof shows that convergence is achieved in the limit , but the decrease of the maximum derivative value is not bounded .if the state is close to a pairwise equilibrium , the changes could become arbitrarily small , and the convergence time until reaching the exact equilibrium could well be infinite .in this section we generalize some of our results to general contribution games .however , a detailed study of such general games remains as an open problem .a general contribution game can be represented by a hypergraph .the set of nodes is the set of players , and each edge is a hyperedge and represents a joint project of a subset of players .reward functions and player utilities are defined as before . in particular, using the notation we get reward functions with . in this case, we extend our stability concept to _ setwise equilibrium _ that is resilient against all deviations of all player sets that are a subset of any hyperedge . in a setwise equilibrium no ( sub-)set of players incident to the same hyperedge has an improving move , i.e. , a combination of strategies for the players such that every player of the subset strictly improves .more formally , a _ setwise equilibrium _ is a state such that for every edge and every player subset we have that for every possible deviation there is at least one player that does not strictly improve , where .note that this definition includes all unilateral deviations as a special case .the most central parameter in this context will be the size of the largest project .we note in passing that nash equilibria without resilience to multilateral improving moves are again always guaranteed for all reward functions .it is easy to observe that is an exact potential function for the game .note that this is equal to and thus equivalent to only for uniform hypergraphs , in which all hyperedges have the same cardinality . in these casesthe price of stability for nash equilibria is always 1 . otherwise , it is easy to construct simple examples , in which all nash equilibria ( and therefore all setwise equilibria ) must be suboptimal ., , , and and edges and . budgets and for .rewards are given by a convex function for edge and a function for .note that in any nash equilibrium . for small gives , whereas we have higher welfare when . 
][ [ convex - functions ] ] convex functions + + + + + + + + + + + + + + + + for general convex functions we extend the functions of class to multiple dimensions .in particular , the functions are coordinate - convex and have non - negative mixed partial second derivatives for any pair of dimensions .we first observe that the proof of theorem [ thm.convex ] can be adjusted easily to general games if functions are coordinate - convex and whenever for at least one .[ cor.convex ] a setwise equilibrium always exists and can be computed efficiently when all reward functions are coordinate - convex and whenever for at least one .note that we can also adjust the proof of claim [ claim.optintegral ] for optimum solutions in a straightforward way .in particular , to obtain the social welfare for projects , and we simply multiply each occurrence of the functions , and in the formulas by the corresponding cardinalities of their edges . this does not change the reasoning and proves an analogous statement of claim [ claim.optintegral ] also for general games .the actual proof of theorem [ thm.classc ] then is a simple accounting argument that relies on the cardinality of the projects .the observation that the difference between and is bounded by yields the following corollary . as previously, these results directly extend to strong equilibria , as well .[ cor.classc ] for the class of general contribution games with reward functions in class for all that have a setwise equilibrium , the price of anarchy for setwise equilibria is at most . [[ minimum - effort - games ] ] minimum effort games + + + + + + + + + + + + + + + + + + + + for minimum effort games some of our arguments translate directly to the treatment of general games . for existence with convex functions and uniform budgets, we can apply the same `` greedy matching '' argument and wake up players until every hyperedge is incident to at least one awake player . the argument that this creates a setwise equilibrium is almost identical to the one given in theorem [ thm : convexminexists ] for pairwise equilibria .this yields the following corollary .[ cor : convexminexists ] a setwise equilibrium always exists in games with uniform budgets and when all are convex . if all are strictly convex , all setwise equilibria are integral .the duality analysis for the price of anarchy in theorem [ thm : poaconvexminuniform ] can be carried out as well . in this case, however , the crucial inequality reads .this results in a price of anarchy of .[ cor.poaconvexminuniform ] the price of anarchy for setwise equilibria in general contribution games is at most when all reward functions with convex , and budgets are uniform .again , both corollaries extend also to strong equilibria .[ [ maximum - effort - games ] ] maximum effort games + + + + + + + + + + + + + + + + + + + + for maximum effort games it is not possible to extend the main insight in lemma [ lem : maxlemma ] to general games .there are general maximum effort games without setwise equilibria .this holds even for pairwise equilibria in network contribution games , in which the graph is not simple , i.e. 
, if we allow multiple edges between agents .we consider a simple game that in essence implements a prisoner s dilemma .there is a path of four players , , and , with edges , and .in addition , there is a second edge between and .the budgets are and .the reward functions are , .note that for and it is a unilateral dominant strategy to put all effort on edges and , respectively .however , in that case and can jointly increase their reward by allocating all effort to and , respectively . for general maximum effort games characterizing the existence and computational complexity of pairwise , setwise , and strong equilibriais an interesting open problem .in this paper we have proposed and studied a simple model of contribution games , in which agents can invest a fixed budget into different relationships .our results show that collaboration between pairs of players can lead to instabilities and non - existence of pairwise equilibria . for certain classes of functions ,the existence of pairwise equilibria is even np - hard to decide .this implies that it is impossible to decide efficiently if a set of players in a game can reach a pairwise equilibrium . for many interesting classes of games , however , we are able to show existence and bound the price of anarchy to 2 .this includes , for instance , a class of games with general convex functions , or minimum effort games with concave functions .here we are also able to show that best response dynamics converge to pairwise equilibria .there is a large variety of open problems that stem from our work .the obvious open problem is to adjust our results for the network case to general set systems and general contribution games . while some of our proofs can be extended in a straightforward way , many open problems , most prominently for concave functions , remain .another obvious direction is to identify other relevant classes of games within our model and prove existence and tight bounds on the price of anarchy .another interesting aspect is , for instance , the effect of capacity constraints , i.e. , restrictions on the effort that a player can invest into a particular project .more generally , instead of a total budget a player might have a function that characterizes how much he has to `` pay '' for the total effort that he invests in all projects .such `` price '' functions are often assumed to be linear or convex ( e.g. , in ) .finally , an intriguing adjustment that we outlined in the introduction is to view the projects as instances of the combinatorial agency framework and to examine equilibria in this more extended model .the authors would like to thank ramamohan paturi for interesting discussions about the model .
we consider _ network contribution games _ , where each agent in a network has a budget of effort that he can contribute to different collaborative projects or relationships . depending on the contribution of the involved agents , a relationship will flourish or drown , and to measure its success we use a reward function for each relationship . every agent tries to maximize the reward from all relationships that he is involved in . we consider pairwise equilibria of this game , and characterize the existence , computational complexity , and quality of equilibrium based on the types of reward functions involved . when all reward functions are concave , we prove that the price of anarchy is at most 2 . for convex functions the same holds only under some special but very natural conditions . another special case treated extensively is that of minimum effort games , where the reward of a relationship depends only on the minimum effort of any of the participants . in these games , we can show existence of pairwise equilibria and a price of anarchy of 2 for concave functions and special classes of games with convex functions . finally , we show tight bounds for approximate equilibria and convergence of dynamics in these games .
cilia and flagella play a crucial role in the survival , development , cell feeding and reproduction of microorganisms .these lash - like appendages follow regular beating patterns which enable cell swimming in inertialess fluids . bending deformations of the flagellumare driven by the collective action of atp - powered dynein motor proteins , which generate sliding forces within the flagellar cytoskeleton , named axoneme .this structure has a characteristic 9 + 2 composition across several eukaryotic organisms , corresponding to 9 peripheral microtubule doublets in a cylindrical arrangement surrounding a central pair of microtubules .additional proteins , such as the radial spokes and nexin crosslinkers , connect the central to the peripheral microtubules and resist free sliding between the microtubule doublets , respectively .each doublet consists of an a - microtubule in which dyneins are anchored at regular intervals along the length of the doublets , and a b - microtubule , where dynein heads bind in the neighbouring doublet . in the presence of atp ,dyneins drive the sliding of neighbouring microtubule doublets , generating forces that can slide doublets apart if crosslinkers are removed . in the presence of crosslinkers , sliding is transformed into bending .remarkably , this process seems to be carried out in a highly coordinated manner , in such a way that when one team of dyneins in the axoneme is active , the other team remains inactive .this mechanism leads to the propagation of bending undulations along the flagellum , as commonly observed during the movement of spermatozoa .+ many questions still remain unanswered on how dynein - driven sliding causes the oscillatory bending of cilia and flagella . over the last half a century, intensive experimental and theoretical work has been done to understand the underlying mechanisms of dynein coordination in axonemal beating .different mathematical models have been proposed to explain how sliding forces shape the flagellar beat .coordinated beating has been hypothesised considering different mechanisms such as dynein s activity regulation through local axonemal curvature , due to the presence of a transverse force ( t - force ) acting on the axoneme and by shear displacements .other studies also examined the dynamics of flagellar beating by prescribing its internal activity or by considering a self - organized mechanism independent of the specific molecular details underlying the collective action of dyneins .in particular , the latter approach , although general from a physics perspective , it does not explicitly incorporate dynein kinetics along the flagellum , which has been shown to be crucial in order to understand experimental observations on sperm flagella .load - accelerated dissociation of dynein motors was proposed as a mechanism for axonemal sliding control , and was successfully used to infer the mechanical properties of motors from bull sperm flagella .in contrast , dynamic curvature regulation has been recently proposed to account for _ chlamydomonas _ flagellar beating . 
in the previous studies ,linearized solutions of the models were fit to experimental data ; however , it is unclear that such results still hold at the nonlinear level .recent studies also investigated the emergence and saturation of unstable modes for different dynein control models ; however , saturation of such unstable modes was not self - regulated , but achieved via the addition of a nonlinear elastic contribution in the flagellum constitutive relation. nevertheless , predictions on how dynein activity influences the selection of the beating frequency , amplitude and shape of the flagellum remain elusive . here , we provide a microscopic bottom - up approach and consider the intrinsic nonlinearities arising from the coupling between dynein activity and flagellar shape , regarding the eukaryotic flagellum as a generalized euler - elastica filament bundle .this allows a close inspection on the onset of the flagellar bending wave instability , its transient dynamics and later saturation of unstable modes , which is solely driven by the nonlinear interplay between the flagellar shape and dynein kinetics .+ we first derive the governing nonlinear equations using a load - accelerated feedback mechanism for dynein along the flagellum . the linear stability analysis is presented , and eigenmode solutions are obtained similarly to refs . , to allow analytical progress and pedagogical understanding .the nonlinear dynamics far from the hopf bifurcation is studied numerically and the resulting flagellar shapes are further analyzed using principal component analysis .finally , bending initiation and transient dynamics are studied subject to different initial conditions .we consider a filament bundle composed of two polar filaments subjected to planar deformations .each filament is modeled as an inextensible , unshearable , homogeneous elastic rod , for which the bending moment is proportional to the curvature and the young modulus is .the filaments are of length and separated by a constant gap of size , where ( fig . [ fig1]c ) .we define a material curve describing the shape of the filament bundle centerline as .the positions of each polar filament forming the bundle read , with the orientation of the cross - section at distance along its length defined by the normal vector to the centerline , being the angle between the tangent vector and the direction ( taken along the _ x _ axis ) .the subscripts ( + ) and ( - ) refer to the upper and lower filaments , respectively ( fig . [ fig1]c ) .the shape of the bundle is given at any time by the expression : the geometrical constraint of the filament bundle , induces an arclength mismatch , denoted as sliding displacement : where . for simplicity, we have set any arclength incongruity between the two filaments at the base to zero and we will consider the filaments clamped at the base . a similar approach can be used to include basal compliance and other types of boundary conditions at the base ( e.g. pivoting or free swimming head ) . herewe centre our study on the nonlinear action of motors along the flagellum .we aim to study the active and passive forces generated at each point along the arclength of the filament bundle .we define as the total internal force density generated at at time on the plus - filament due to the action of active and passive forces ( see fig .[ fig1 ] ) . 
by virtue of the action - reaction law ,the minus - filament will experience a force density at the same point .next , consider that dyneins are anchored at each polar filament in a region around , where is much smaller than the length of the flagellum and much larger than the length of the regular intervals dyneins are attached to along the microtubule doublets .we shall call the tug - of - war length .we define as the number of bound dyneins in a region of size around at time which are anchored in the plus- or minus - filament respectively . andtangent angle of the flagellum are parametrized by the arclength parameter .b ) passive ( springs ) and active ( dynein motors ) internal structures in the axoneme .dyneins in the ( + ) and ( - ) filaments compete in a tug - of - war and bind / unbind from filaments with rates and respectively .c ) the flagellum as a two - filament bundle : two polar filaments are separated by a small gap of size .the presence of nexin crosslinkers and dynein motors generates a total force density along the bundle . ]we consider a tug - of - war at each point along the flagellum with two antagonistic groups of dyneins . the elastic sliding resistance between the two polar filaments exerted by nexin crosslinkers is assumed to be hookean with an elastic modulus .thus , the internal force density reads : where is the density of tug - of - war units along the flagellum and is the load per motor each group of dyneins experiences due to the action of the antagonistic group .the stresses on the filament bundle are given by a resultant contact force and resultant contact moment acting at the point .the internal force density only contributes to the internal moment of the bundle , such that reads : where , provided that for bundles characterized by .the combined bending stiffness of the filament bundle is given by , where is the second moment of the area of the external rods .+ dynein kinetics is modeled by using a minimal two - state mechanochemical model with states , corresponding to microtubule bound or unbound dyneins , respectively .since the sum of bound and unbound motors at remains constant at all times , we only study the plus and minus bound motor distributions .dyneins bind with rates and unbind with rates ( fig .[ fig1]b ) .the corresponding bound motor population dynamics reads : the binding / unbinding rates are given by , where and are constant rates and is the characteristic unbinding force . herewe assume an exponential dependence of the unbinding force on the resulting load . by considering that dyneins fulfill a linear velocity - force relationship with stall force and velocity at zero load , the loads are given by .substituting the different definitions , the internal force density reads : where and . for simplicity, we will derive the equations governing the tangent angle in the limit of small curvature ( but possibly large amplitudes ) such that tangential forces can be neglected .the derivation for arbitrary large curvature is also presented in the electronic supplementary text . using resistive force theory in the limit of small curvature , we only consider normal forces along the flagellum obtaining , where is the normal drag coefficient . 
combining the last expression with eq .[ eq4 ] we have : hereinafter we switch to dimensionless quantities while keeping the same notation .we non - dimensionalize the arclength with respect to the length scale , time with respect to the correlation time of the system , motor number with respect to , internal force density with respect to and sliding displacement with respect to .the correlation time defines how fast the motors will respond to a change in load .we also define , , and .the sperm number characterizes the relative importance of bending forces to viscous drag .the parameter measures the relative importance of the sliding resistance compared with the bending stiffness . on the other hand ,the parameter denotes the activity of dyneins , measuring the relative importance of motor force generation compared with the bending stiffness of the bundle .finally denotes the ratio of the bundle diameter and the characteristic shear induced by the motors .the dimensionless sperm equation in the limit of small curvature reads : where in our case the dimensionless internal force density takes the form : and .since the flagellar base is clamped , without loss of generality , we set . combining eqs .[ eq8 ] and [ eq9 ] we obtain the nonlinear dynamics for the tangent angle : \label{eq9_2}\ ] ] in the absence of dynein activity , the last expression reduces to the dynamics of an elastic filament bundle with sliding resistance forces .notice that this expression is obtained considering the sliding mechanism and a linear velocity - force relationship for dyneins , but it is independent of dynein kinetics . on the other hand ,the dimensionless form of the bound motor population dynamics reads : \label{eq10}\ ] ] where is the duty ratio of the motors and dictates the sensitivity of the unbinding rate on the load .the nonmoving state of the system is characterized by and .this means that the flagellum is aligned with respect to the -axis and the number of plus and minus bound motors is constant in space and time . for the linear stability analysis, we consider the perturbed variables around the base state as and .introducing the modulation around and considering we obtain : \label{eq11b}\end{aligned}\ ] ] where .we use the ansatzs and , where is a complex eigenvalue and accounts for complex conjugate . from eq .[ eq11a ] we get , where is a complex response function : using eq . [ eq9 ] and considering , we obtain , where is a second complex response function : - \frac{\mu}{\mu_a } \label{eq13}\ ] ] the latter response functions generalize the work in ref . for a complex eigenvalue and are equivalent to results presented in ref . . with the ansatz in eq .[ eq11b ] , we obtain the characteristic equation , where , , being . solving the characteristic equationwe obtain four possible roots , and the eigenfunctions read : where . once is known , where and .therefore , the evolution of is the same as for except for a phase shift and an overall change on the amplitude , which depends on .this result indicates the presence of a time delay between the action of motors and the response of the flagellum .time delays commonly arise in systems where molecular motors work collectively .indeed , the regulation of active forces by the time delay of the curvature was proposed as a mechanism to generate travelling bending waves . 
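before turning to the marginal - stability results , it is useful to see the bound - motor kinetics in isolation . the sketch below integrates the two - state model for a single tug - of - war unit driven by a prescribed sliding velocity , following the assumptions stated above ( constant binding rate , exponential load dependence of the unbinding rate , linear force - velocity relation ) ; the explicit rate expressions , the sign convention for the load , and all parameter values are illustrative choices rather than the ones used in the paper .

```python
import numpy as np

# Illustrative parameters (orders of magnitude only; the paper's actual values
# are given in its supplementary text).
PI0  = 10.0    # binding rate, 1/s
EPS0 = 10.0    # bare unbinding rate, 1/s
F0   = 2.0     # stall force per motor, pN
FC   = 0.5     # characteristic unbinding force, pN
V0   = 5.0     # motor velocity at zero load, um/s

def motor_rhs(n_plus, n_minus, v_slide):
    """Two-state kinetics for one tug-of-war unit.  Assumed forms: constant
    binding rate, unbinding rate EPS0*exp(f/FC), and a linear force-velocity
    relation so that the load per motor is F0*(1 -+ v/V0) for the plus/minus
    team (the sign convention is an assumption)."""
    f_plus  = F0 * (1.0 - v_slide / V0)
    f_minus = F0 * (1.0 + v_slide / V0)
    dn_plus  = PI0 * (1.0 - n_plus)  - EPS0 * np.exp(f_plus  / FC) * n_plus
    dn_minus = PI0 * (1.0 - n_minus) - EPS0 * np.exp(f_minus / FC) * n_minus
    return dn_plus, dn_minus

def integrate(T=2.0, dt=1e-4, freq=10.0, amp=1.0):
    """Explicit Euler integration with a prescribed sinusoidal sliding
    velocity of the given frequency (Hz) and amplitude (um/s)."""
    n_p = n_m = 0.5
    out = []
    for k in range(int(T / dt)):
        v = amp * np.sin(2.0 * np.pi * freq * k * dt)
        dp, dm = motor_rhs(n_p, n_m, v)
        n_p, n_m = n_p + dt * dp, n_m + dt * dm
        out.append((k * dt, n_p, n_m))
    return np.array(out)

if __name__ == "__main__":
    traj = integrate()
    print("bound fractions at the end of the run:", traj[-1, 1], traj[-1, 2])
```

driving the unit with an oscillatory sliding velocity produces the anti - phase response of the plus and minus teams that underlies the tug - of - war picture used in the full model .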
) for .the beating cycles are divided in 10 frames .( lower panels ) tangent angle kymographs where is the period .amplitudes and angles are shown in arbitrary units .c ) marginal stability curve for the clamped condition .points ( a ) and ( b ) in parameter space correspond to the profiles in a ) and b ) .d ) curvature modulations as a function of the arclength for ( dashed line ) and ( solid line ) .inset : wavenumber defined as as a function of . , , , ] in order to find , we need to impose the four boundary conditions , obtaining a linear system of equations for , ( see supplementary text ) . by setting the determinant of the system to zero , we find the set of complex eigenvalues , with the corresponding growth rates ] , which satisfy the boundary conditions , where .we order the set of different eigenvalues according to its growth rate , such that the first one has the largest growth rate . defining ,the general solution of the system reads : where are free amplitude parameters . for , ,solutions decay exponentially to the nonmoving state . on the other hand ,when becomes positive the system undergoes a hopf bifurcation and solutions follow an exponential growth , oscillating with frequency .next , we study the marginal stable solutions , i.e. when the maximum growth rate equals zero ( ) . for this, we define the critical frequency of oscillation as .the range of parameters studied is chosen according to experimental studies of sperm flagella and the biflagellate green algae _ chlamydomonas _( see supplementary text ) .travelling waves propagate from tip to base , a feature already reported for the clamped type boundary condition . in fig .[ fig2]c , the marginal stability curve in phase space is shown .intuitively , as is increased the travelling instability occurs for higher motor activity and the critical frequency of oscillation follows a non - monotonic decrease ( see supplementary figure s1 ) . for low viscosity ( )the wave propagation velocity is slightly oscillatory whereas for high viscosity ( ) it becomes more uniform ( fig .[ fig2]a and b , lower panels ) .these results are in agreement with studies on migrating human sperm , where in the limit of high viscosity waves propagated approximately at constant speed . for high viscosity, curvature tends to increase from base to tip , finally dropping to zero due to the zero curvature boundary condition at the tail ( see fig .[ fig2]d and supplementary text ) .this modulation is consistent with experimental studies on human sperm , which show viscosity modulation of the bending amplitude . in the latter study ; however , the effect is more pronounced possibly due to external elastic reinforcing structures found along the flagellum of mammalian species , as well as other nonlinear viscoelastic effects . defining as the characteristic wavenumber, we obtain that it increases almost linearly with ( fig .[ fig2]d , inset ) .similar results can be obtained using other definitions for , for example using the covariance matrix ( see principal component analysis section ) .in this section we study the nonlinear dynamics of the flagellum in the limit of small curvature by numerically solving eqs .[ eq9_2 ] and [ eq10 ] using a second - order accurate implicit - explicit numerical scheme ( see supplementary text ) .the unstable modes presented in section [ sec:3 ] follow an initial exponential growth and eventually saturate at the steady state due to the nonlinearities in the system . 
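the time - stepping used here can be sketched as follows . the paper employs a second - order accurate implicit - explicit scheme ; for brevity the snippet below shows a first - order variant ( backward euler for the stiff bending term , forward euler for the rest ) , with crudely simplified boundary handling and a placeholder standing in for the motor and sliding - resistance terms of eq . [ eq9_2 ] ( which in the full model are coupled to the motor kinetics of eq . [ eq10 ] ) , so it should be read as a structural sketch rather than the scheme actually used .

```python
import numpy as np

def fourth_derivative_matrix(n, ds):
    """Dense finite-difference approximation of d^4/ds^4 on a uniform grid.
    Boundary rows are left as zeros, a crude stand-in for the clamped and
    force-free conditions imposed in the paper."""
    D = np.zeros((n, n))
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) / ds**4
    for i in range(2, n - 2):
        D[i, i - 2:i + 3] = stencil
    return D

def run(n=200, length=1.0, sp=6.0, dt=1e-4, steps=2000):
    """IMEX integration of psi_t = -(1/Sp^4) psi_ssss + forcing: the stiff
    bending operator is treated implicitly (one linear solve per step), the
    forcing term explicitly.  The forcing is a placeholder set to zero here;
    in the full model it collects the motor and sliding-resistance terms."""
    ds = length / (n - 1)
    A = fourth_derivative_matrix(n, ds) / sp**4
    M = np.eye(n) + dt * A                     # implicit operator, built once
    s = np.linspace(0.0, length, n)
    psi = 1e-3 * np.sin(np.pi * s)             # small initial perturbation
    for _ in range(steps):
        forcing = np.zeros(n)                  # placeholder: active forcing
        psi = np.linalg.solve(M, psi + dt * forcing)
    return s, psi

if __name__ == "__main__":
    s, psi = run()
    print("max |psi| after relaxation:", float(np.abs(psi).max()))
```

treating the fourth - derivative term implicitly is what allows time steps that would be unstable for a fully explicit scheme , since the discretized bending operator scales as 1/ds^4 .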
in fig .[ fig3 ] two different saturated amplitude solutions are shown . fig .[ fig3]a ( left ) corresponds to a case where the system is found close to the hopf bifurcation , whereas fig .[ fig3]a ( right ) corresponds to a regime far from the bifurcation .we notice that the marginal solution obtained in the linear stability analysis ( fig .[ fig2]b , upper panel ) gives a very good estimate of the nonlinear profile close to the bifurcation point , although it does not provide the magnitude of or .frequencies are hz and maximum amplitudes are found to be small , around of the total flagellum length. however , the oscillation amplitude for high motor activity is more than double in respect to the case of low activity ( fig . [ fig3]a ) .the colour code in fig .[ fig3]a indicates the value of the semi - difference of plus- and minus - bound motors .plus - bound motors are predominant in regions of positive curvature ( ) along the flagellum and vice - versa .( left ) and ( right ) , considering the respective eigenmodes as initial conditions .the beating cycles are divided in 10 frames as in fig .[ fig2]a , b ( upper panels ) .b ) ( solid lines ) and ( dashed lines ) evaluated at for the profiles in ( a ) respectively .c ) maximum absolute tangent angle evaluated at and dimensionless frequency as a function of the relative distance to the bifurcation . , , , , , m and ms . ] despite the low duty ratio of dynein motors , bound dyneins along the flagellum are sufficient to produce micrometer - sized amplitude oscillations . the full flagella dynamics corresponding to fig .[ fig3]a are provided in the supplementary movies 1 and 2 .[ fig3]b , the time evolution of and is shown at for the cases in fig .[ fig3]a , respectively . as mentioned in section [ sec:3 ] , the tangent angle delayed respect to , and the time delay is not considerably affected by motor activity .close to the instability threshold , both signals are very similar since the system is found near the linear regime ; however , far from threshold , both signals greatly differ . for high motor activity , both the tangent angle and the fraction of bound dyneins at certain points along the flagellum exhibit cusp - like oscillations ( fig .[ fig3]b , right ) .this behaviour is typical of molecular motor assemblies working far from the instability threshold . despite the signals in fig .[ fig3]b ( right ) are nonlinear , they conserve the symmetry , being the period of the signal .this is a consequence of both plus and minus motor populations being identical , a property also found in spontaneous oscillations of motor assemblies . finally , in fig .[ fig3]c we study how the amplitude and frequency of the oscillations vary with the relative distance from the bifurcation point . for small , the maximum absolute value of the tangent angle seems to follow a square root dependence , characteristic of supercritical hopf bifurcation ; however , in the strongly nonlinear regime the curve deviates from this trend . on the other hand , the beating frequency decreases for increasing activitythis fact can be understood in simple terms since the activity is proportional to ; hence , the larger the activity , the larger each dynein team becomes .consequently , the necessary time to unbind a sufficient number of dynein motors to drive the instability increases , leading to a lower beating frequency . 
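the amplitude and frequency curves of fig . [ fig3]c can be extracted from a single tangent - angle time series with a few lines of post - processing ; the sketch below uses a synthetic saturating signal in place of the simulation output .

```python
import numpy as np

def amplitude_and_frequency(signal, dt, discard_fraction=0.5):
    """Estimate the steady-state oscillation amplitude and frequency.
    The first part of the series (transient growth) is discarded; the
    amplitude is taken as max|signal| over the remainder and the frequency
    as the location of the largest peak of the power spectrum."""
    tail = signal[int(len(signal) * discard_fraction):]
    tail = tail - tail.mean()
    amplitude = np.abs(tail).max()
    freqs = np.fft.rfftfreq(len(tail), d=dt)
    power = np.abs(np.fft.rfft(tail))**2
    frequency = freqs[1:][np.argmax(power[1:])]   # skip the zero-frequency bin
    return amplitude, frequency

if __name__ == "__main__":
    # synthetic test: exponential saturation towards a 12 Hz oscillation
    dt = 1e-3
    t = np.arange(0.0, 10.0, dt)
    psi = 0.2 * np.tanh(t / 2.0) * np.sin(2.0 * np.pi * 12.0 * t)
    print(amplitude_and_frequency(psi, dt))       # roughly (0.2, 12.0)
```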
in this section ,we study the obtained nonlinear solutions using principal component analysis .this technique treats the flagellar shapes as multi - feature data sets , which can be projected to a lower dimensional space characterized by principal shape modes . herewe will analyze the numerically resolved data following ref . to study sperm flagella .we discretize our flagella data with time - points , and intervals corresponding to points , along the flagellum , with .we construct a measurement matrix of size for the tangent angle where .this matrix represents a kymograph of the flagellar beat .we define the covariance matrix as , where ] , being the mean tangent angle at .the covariance matrix is shown for ( fig .[ fig4]a , left ) and ( fig .[ fig4]a , right ) . in fig .[ fig4]a ( left ) , we find negative correlation between tangent angles that are a distance apart . hence , a characteristic wavelength can be identified in the system , which manifests as a long - range correlation in the matrix . on the other hand ,strong positive correlations around the main diagonals correspond to short - ranged correlations mainly due to the bending stiffness of the bundle .the number of local maxima along the diagonals in decreases from to , and at the same time .hence , an increase in motor activity slightly increases the characteristic wavelength while decreasing the number of local maxima , which is related to the characteristic wavenumber . employing an eigenvalue decomposition of the covariance matrix ,we can obtain the eigenvectors and their corresponding eigenvalues , such that . without loss of generality, we can sort the eigenvalues in descending order .we find that the first two eigenvalues capture variance of the data .this fact indicates that our flagellar waves can be suitably described in a two - dimensional shape space , since they can be regarded as single - frequency oscillators .each flagellar shape $ ] can be expressed now as a linear combination of the eigenvectors : where are the shape scores computed by a linear least - square fit . in fig .[ fig4]b ( left ) , the two first eigenvectors are shown for . in fig .[ fig4]b ( right ) , the flagellar shape at a certain time ( thick solid line ) is reconstructed ( white line ) by using a superposition of the two principal shape modes ( solid and dashed lines , respectively ) and fitting the scores . finally , in fig .[ fig4]c we show the shape space trajectories beginning with small amplitude eigenmode solutions . while close to the bifurcation the limit cycle is elliptic ( fig .[ fig4]c , left ) , far from the bifurcation the limit cycle becomes distorted ( fig .[ fig4]c , right ) .elliptic limit cycles were also found experimentally for bull sperm flagella .hence , as found in section [ sec:4 ] , motor activity in the nonlinear regime significantly affects the shape of the flagellum when compared with the linear solutions , which only provide good estimates sufficiently close to the hopf bifurcation .( left ) and ( right ) .we can identify characteristic wavelengths from negative long - range correlations in .notice and the number of local maxima decreases when is increased .b ) ( left ) two principal shape modes ( solid and dashed lines , respectively ) , corresponding to the two maximum eigenvalues of the covariance matrix in fig [ fig4]a ( left ) . 
( right )the flagellar shape at time ms ( thick solid line ) is reconstructed ( white line ) by a superposition of the two principal shape modes in fig .[ fig4]b ( left ) fitting the scores .c ) flagellar dynamics in a reduced two - dimensional shape space for ( left ) and ( right ) .elliptic limit cycles are rescaled to better appreciate the distortion due to the nonlinear terms . , , , . ] finally , we study bending initiation and transient dynamics for two different initial conditions , in order to understand the selection of the unstable modes . in fig .[ fig5]a , b the spatiotemporal transient dynamics are shown for the case of an initial eigenmode solution corresponding to the maximum eigenvalue ( fig .[ fig5]a ) and an initial sine perturbation in , with equal constant bound motor densities ( fig .[ fig5]b ) . in case( b ) travelling waves initially propagate in both directions and interfere at ( fig .[ fig5]b , d and supplementary movie 3 ) . however , in the steady state both the eigenmode and sine cases reach the same steady state solution , despite the sinusoidal initial condition being a superposition of eigenmodes .this result provides a strong evidence that the fastest growing mode is the one that takes over and saturates in the steady state . in fig .[ fig5]c the transient dynamics for case ( b ) are shown for plus and minus - bound dynein populations close to the tail ( ) . .c ) bound motor time evolution for the plus ( solid line ) and minus ( dashed line ) dynein populations at for the case of a sinusoidal initial condition .inset : flagella profiles at different times in ms .d ) snapshots of the flagellar shape for the sinusoidal initial condition up to ( white dashed line in fig . [ fig5]b ) at equal time intervals ( ms ) . at ,wave interference changes the direction of wave propagation .the full movie can be seen in supplementary movie 3 . , , , , , m and ms .arrows indicate the direction of wave propagation . ] both populations decay exponentially with characteristic time to and begin oscillating in anti - phase around this value , in a tug - of - war competition .in this work , we presented a theoretical framework for planar axonemal beating by formulating a full set of nonlinear equations to test how flagellar amplitude and shape vary with dynein activity . we have shown how the nonlinear coupling of flagellar shape and dynein kinetics in a sliding - controlled model provides a novel mechanism for the saturation of unstable modes in flagellar beating .our study advances understanding of the nonlinear nature of the axoneme , typically studied at the linear level .+ the origin of the bending wave instability can be understood as a consequence of the antagonistic action of dyneins competing along the flagellum .the instability is then further stabilized by the nonlinear coupling between dynein activity and flagellum shape , without the need to invoke a nonlinear axonemal response to account for the saturation of the unstable modes , in contrast to previous studies .moreover , the governing equations ( eqs . [ eq9_2 ] and [ eq10 ] ) contain all the nonlinearities in the limit of small curvature , and they are not the result of a power expansion to leading nonlinear order .far from the hopf bifurcation , linearized solutions fail to describe the flagellar shape and nonlinear effects arise in the system solely due to motor activity . 
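before turning to the discussion , we note that the shape - mode analysis of fig . [ fig4 ] reduces to a few lines of linear algebra ; the sketch below is applied to a synthetic travelling - wave kymograph rather than to the simulated data .

```python
import numpy as np

def principal_shape_modes(kymograph, n_modes=2):
    """kymograph: array of shape (N_t, N_s) with the tangent angle psi(t, s).
    Returns the leading eigenvectors of the covariance matrix, the fraction
    of variance they capture, and the time series of shape scores obtained
    by least-squares projection, as described in the text."""
    mean_shape = kymograph.mean(axis=0)
    fluct = kymograph - mean_shape                     # remove the mean shape
    cov = fluct.T @ fluct / fluct.shape[0]             # N_s x N_s covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                   # descending eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    modes = eigvec[:, :n_modes]
    captured = eigval[:n_modes].sum() / eigval.sum()
    scores, *_ = np.linalg.lstsq(modes, fluct.T, rcond=None)
    return modes, captured, scores.T                   # scores: (N_t, n_modes)

if __name__ == "__main__":
    # synthetic single-frequency travelling wave as a stand-in for the data
    s = np.linspace(0.0, 1.0, 100)
    t = np.linspace(0.0, 1.0, 400)
    kymo = 0.1 * np.sin(2.0 * np.pi * (2.0 * s[None, :] - 5.0 * t[:, None]))
    modes, captured, scores = principal_shape_modes(kymo)
    print("variance captured by two modes: %.3f" % captured)   # close to 1
```

for a single - frequency travelling wave the first two modes capture essentially all of the variance , which is why the dynamics can be represented in the two - dimensional shape space used above .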
at the nonlinear level , both the tangent angle and dynein population dynamics exhibit relaxation or cusp - like oscillations at some regions along the flagellum .similar cusp - like shapes for the curvature have also been reported in sea urchin sperm .this phenomenology is characteristic of motor assemblies working in far - from - equilibrium conditions and has been found in other biological systems such as in the spontaneous oscillations of myofibrils .interestingly , despite the low duty ratio of axonemal dynein , a fraction of % bound dyneins along the flagellum is sufficient to drive micrometer - sized amplitude oscillations .angular deflections are found to be rad in the experimentally relevant activity range and the order of magnitude seems not to be crucially affected by viscosity nor other parameters in the system .hence , despite of the fact that our description provides an intrinsic mechanism for amplitude saturation , it is only able to generate small deflections , typically an order of magnitude smaller than the ones reported for bull sperm flagella .other structural constraints , such as line tension , are likely to influence the amplitude saturation , due to the elastohydrodynamic coupling with motor activity ( see supplementary text ) .the presence of tension on the self - organized beating of flagella was previously investigated at leading nonlinear order ; however , deflections were also found to be small .hence , we conclude that a ` sliding - controlled ' mechanism may not be sufficient to generate large deflections .our work adds to other recent studies were the ` sliding - controlled ' hypothesis seems to lose support as the main mechanism responsible for flagellar beating .basal dynamics and elasticity are also likely to influence the amplitude saturation , and substantial further research is still needed to infer whether sliding - controlled regulation is the responsible mechanism behind flagellar wave generation .+ principal component analysis allowed us to reduce the nonlinear dynamics of the flagellum in a two - dimensional shape space , regarding the flagellum as a single - frequency biological oscillator .notice that a two - dimensional description would not hold for multifrequency oscillations , where an additional dimension is required .interestingly , we found that as activity increases , the characteristic wavenumber of the system slightly decreases .thus , dynein activity has an opposite effect on wavenumber selection when compared with the medium viscosity ( see fig .[ fig2]d , inset ) .we also showed that the steady state amplitude is selected by the fastest growing mode under the influence of competing unstable modes , provided that the initial mode amplitudes are sufficiently small .+ an important aspect which is not studied explicitly in this work is the direction of wave propagation . 
for simplicity , we used clamped boundary conditions at the head which are known to induce travelling waves which propagate from tip to base in the sliding - controlled model .it is beyond the scope of our study to assess the effects of different boundary conditions and the role of basal compliance at the head of the flagellum , which are known to crucially affect wave propagation .the present work also restricts to the case of small curvatures ; however , the full nonlinear equations including the presence of tension could in principle be numerically solved as in previous studies ( see supplementary text ) .finally , real flagella is subject to chemical noise due to the stochastic binding and unbinding of dynein motors .recent studies have provided insights on this problem by investigating a noisy oscillator driven by molecular motors . however ,their approach was not spatially extended .our approach could be suitably extended to include chemical noise in the system through eq .[ eq10 ] by considering a chemical langevin equation for the bound dynein populations including multiplicative noise .in particular , it can be easily deduced from our study that by considering a force - independent unbinding rate , fluctuations of bound motors around the base state have mean and variance , in agreement with the results in ref . where a different model was considered .+ the possibility to experimentally probe the activity of dyneins inside the axoneme is one of the most exciting future challenges in the study of cilia and flagella .these studies will be of vital importance to validate mathematical models of axonemal beating and the underlying mechanisms coordinating dynein activity and flagellar beating .we have no competing interests .d.o . carried out analytical work , numerical simulations , data analysis and drafted the manuscript .h.g . and j.c .conceived of the study , coordinated the study and helped draft the manuscript .all authors gave final approval for publication .j.c . and d.o .acknowledge financial support from the ministerio de economa y competitividad under projects fis2010 - 21924-c02 - 02 and fis2013 - 41144-p , and the generalitat de catalunya under projects 2009 sgr 14 and 2014 sgr 878 .d.o . also acknowledges a fpu grant from the spanish government with award number ap-2010 - 2503 and an embo short term fellowship with astf number 314 - 2014 .h.g . acknowledges support by the hooke fellowship , university of oxford .gaffney e.a . , gadlha h. , smith d.j . , blake j.r . and kirkman - brown j.c .2011 mammalian sperm motility : observation and theory .fluid mech ._ , * 43 * , 501 - 528 .( doi : 10.1146/annurev - fluid-121108 - 145442 ) summers k.e . and gibbons i.r .1971 adenosine triphosphate - induced sliding of tubules in trypsin - treated flagella of sea - urchin sperm ._ , * 68 * , 3092 - 3096 .( doi : 10.1073/pnas.68.12.3092 ) hines m. and blum j.j .1979 bend propagation in flagella .ii . incorporation of dynein cross - bridge kinetics into the equations of motion ._ biophys .j. _ , * 25 * , 421 - 441 .( doi : 10.1016/s0006 - 3495(79)85313 - 8 ) sartori p. , geyer v. f. , scholich a. , jlicher f. , howard j. 2016 dynamic curvature regulation accounts for the symmetric and asymmetric beats of _ chlamydomonas _ flagella ._ elife _ * 5*:e13258 ( doi:10.7554/elife.13258 ) gadlha h. , gaffney e.a . , smith d.j . and kirkman - brown j.c .2010 nonlinear instability in flagellar dynamics : a novel modulation mechanism in sperm migration ? _j. r. soc . 
interface _ ,* 7 * , 1689 - 1697 .( doi : 10.1098/rsif.2010.0136 ) brokaw c.j .1999 computer simulation of flagellar movement .conventional but functionally different cross - bridge models for inner and outer arm dyneins can explain the effects of outer arm dynein removal. _ cell motil .cytoskeleton _ , * 42 * , 134 - 148 .( doi : 10.1002/(sici)1097-0169(1999)42:2::aid-cm5.0.co;2-b ) brokaw c.j .2014 computer simulation of flagellar movement .x : doublet pair splitting and bend propagation modeled using stochastic dynein kinetics ._ cytoskeleton _ , * 71 * , 273 - 284 .( doi : 10.1002/cm.21168 ) bayly p.v . and wilson k.s .2014 equations of interdoublet separation during flagella motion reveal mechanisms of wave propagation and instability ._ biophys .j. _ , * 107 * , 1756 - 1772 .( doi : 10.1016/j.bpj.2014.07.064 ) gadlha h. , gaffney e.a . and goriely a. 2013 the counterbend phenomenon in flagellar axonemes and cross - linked filament bundles ._ , * 110 * , 12180 - 12185 .( doi : 10.1073/pnas.1302113110 ) werner s. , rink j.c ., riedel - kruse i.h . and friederich b.m .2014 shape mode analysis exposes movement patterns in biology : flagella and flatworms as case studies . _plos one _ , * 9*:e113083 .( doi : 10.1371/journal.pone.0113083 ) smith d.j ., gaffney e.a . , gadlha h. , kapur n. and kirkman - brown j.c .2009 bend propagation in the flagella of migrating human sperm , and its modulation by viscosity ._ cell motil .cytoskeleton _ , * 66 * , 220 - 236 .( doi : 10.1002/cm.20345 ) ma r. , klindt g.s ., riedel - kruse i.h . , jlicher f. and friederich b.m .2014 active phase and amplitude fluctuations of flagellar beating ._ , * 113 * , 048101 .( doi : 10.1103/physrevlett.113.048101 ) yasuda k. , shindo y. and ishiwata s. 1996 synchronous behavior of spontaneous oscillations of sarcomeres in skeletal myofibrils under isotonic conditions _ biophys .j. _ , * 70 * , 1823 - 1829 .( doi : 10.1016/s0006 - 3495(96)79747 - 3 )
the physical basis of flagellar and ciliary beating is a major problem in biology that is still far from being completely understood . the fundamental cytoskeletal structure of cilia and flagella is the axoneme , a cylindrical array of microtubule doublets connected by passive crosslinkers and dynein motor proteins . the complex interplay of these elements leads to the generation of self - organized bending waves . although many mathematical models have been proposed to understand this process , few attempts have been made to assess the role of dyneins in the nonlinear nature of the axoneme . here , we investigate the nonlinear dynamics of flagella by considering an axonemal sliding control mechanism for dynein activity . this approach unveils the nonlinear selection of the oscillation amplitudes , which are typically either missed or prescribed in mathematical models . the explicit set of nonlinear equations is derived and solved numerically . our analysis reveals the spatiotemporal dynamics of dynein populations and flagellum shape for different regimes of motor activity , medium viscosity and flagellum elasticity . unstable modes saturate via the coupling of dynein kinetics and flagellum shape without the need to invoke a nonlinear axonemal response . hence , our work reveals a novel mechanism for the saturation of unstable modes in axonemal beating . + * keywords * : flagellar beating , dynein , spermatozoa , self - organization .
phone call activity patterns are a manifestation of our complex social dynamics .several aspects of our social behavior are reflected in these communication patterns , like day - night cycles , high activity at the end of working hours , or even our mobility patterns .mobile phone data provides an excellent ground to study several interesting social processes such as , for instance , the spreading of news and rumors , which is the focus of this work .we start out by asking ourselves whether such phenomenon really occurs through the mobile phone network .a phone call certainly involves information exchange between two individuals , but is there information propagation involving more than a single phone call ?is it possible to answer this question without having access to the content of conversations ?mobile phone log data consists in who calls whom and when , see fig .[ fig : sketch]a . a natural way of representing this data is through the use of directed edges .for example , let us use to represent that user has called user . in addition , we have to associate to each directed edge a time series that symbolizes when ( and how many times ) this action took place .this procedure provides us with a representation of log data in terms of a directed network . due to privacy issues we can not know which informationis exchanged during phone calls .this constraint forces us to adopt a hypothesis regarding how information flows on the network .it has been argued that depending on the nature of the information , its propagation dynamics is different .for example , a political opinion , a fad , a rumor , or a gossip , are supposed to involve , each of them , a different kind of human interaction dynamics which results in a different and particular propagation mechanism . here, we assume that the information that is exchanged is either a rumor or news .the spreading of rumors and news is believed to resemble the spreading of an infectious disease , .rumor spreading models assume that there are two categories of users , those who are informed and those who are not. among informed users , there are in turn two sub - categories : users that are actively broadcasting the rumor , and users that become inactive .several mechanisms for switching from active to inactive spreading behavior have been proposed .given the lack of empirical evidence to support a particular switching mechanism , here we adopt the simplest possible assumption already proposed for infectious disease : there is a characteristic time after which an active spreader turns into inactive .there is , however , a deeper reason to use such a switching mechanism . according to this description, we can represent the sequence of events that propagates the infection as a _ causality tree_. our goal is to study causality trees as a proxy to understand information spreading , and for this we need a characteristic time scale which can be easily controlled .the parameter , which we refer to as _ monitoring time _ , serves this purpose .the mobile phone data , particularly the existence of directed edges , poses the question whether during a phone call the information exchange exhibits a favored flow direction . 
clearly , the information can flow from the caller to the callee , from the callee to the caller , or in both direction .if we think of a rumor being spread on the mobile phone network , and a phone call that involves an informed and an ignorant user , we can imagine that after the communication , both users are informed .this picture implies that non intentional spreading of the rumor can occur : if the caller is the ignorant user and get the information from the callee , then the phone call was not intended to propagate the rumor .this would mean that rumor spreading does not involve causality and it occurs without `` intentionality '' .a different scenario is one in which the spreading is exclusively active and intentional , and phone calls made by informed individuals are intended to propagate the information . in this scenario ,information propagates through active broadcasting , and involves causality .a user can get informed exclusively if he / she is called by an informed individual .thus , in this framework , causality trees describe information flow . at this pointwe would like to mention two very recent works on information spreading on mobile phone networks that are particularly related to our study .karsai et al . studied flooding of information in a mobile phone network .they assumed that users retransmit constantly the information they receive , and estimated the time required to inform everybody in the system .they concluded that the presence of community structures , i.e. , topological correlations , and bursty phone call activity slow down the spreading of information . in , miritello et al . studied the propagation of information that obeys a susceptible - infected - removed ( sir ) epidemic dynamics .they assumed that during a phone call information flows in both direction and made use of newman s theory for disease spreading on undirected complex networks to interpret their results .they exclusively focused on the average size of the outbreaks , and confirmed the results obtained in . on the other hand, we observe that at short time - scales , i.e. , small values , the spreading dynamics , respectively the tree statistics , is not sensitive to topological node - node correlations and can be described simply in terms of the out degree distribution of the underlying social network . only at large timescales these node - node correlations become dominant , enhancing the spreading of information and allowing the circulation of information in ( closed ) loops .time - correlations , while they do not have a significant impact on information spreading , promote the existence of local information loops .it is only at this level that we observe genuine causality effects .for a given , we build the `` causality '' trees , which we also refer to as cascades , in the following way .we pick up at random a phone call from the database and monitor the activity of the receiver - e.g. 
user * a * - for a period .we register all phone calls user * a * makes during this period , and monitor the activity of all users who have been called by user * a * during a period .we repeat this process for every _ new _ user until the cascade gets extinguished .this occurs when all users in the tree have exceeded the monitoring time and there is no new user to monitor[ fig : sketch]a and [ fig : sketch]b illustrate this procedure , which has been also applied in .notice that the proposed method is equivalent to inoculate a susceptible - infected - recovered ( sir ) disease to user * a * and wait for the infection outbreak to get extinguished .this dynamics is similar to the one proposed for rumor spreading as defined in .susceptible ( i.e. , uninformed ) users only get infected by receiving a phone call from an already infected user , i.e. , phone calls imply directed links .finally , the transition from the infected to recovered state occurs after a time .we focus on two features of the trees : their size and their depth .the tree size is simply the number of users forming the tree .we use the term _ depth _ to refer to the distance ( in terms of nodes ) to the initial node , with defined as the maximum depth of the tree , see fig .[ fig : sketch]c .to gain insight into the statistics of the causality trees , we first propose a simple transmission theory relying on the following assumptions .a ) there exists an underlying social network which is static .though we know that at large time scales , e.g. , years , the underlying social network is necessarily dynamic , we assume that at short time scales of the order of hours to few days the static approximation provides a reasonable description .b ) given a couple of nodes and , there is a directed link from to if called at least once in the database , i.e , if the directed link is present in the underlying social network .this directed link is associated to a time series : the timestamps at which has contacted . for every linkwe can define a communication rate that indicates the rate at which the link is active to transmit information .this is simply the number of phone calls that occurred in the database , divided by the total time .c ) we define causality trees as sequences of consecutive phone calls where the time difference between consecutive phone calls is always less or equal to our observation time .the mean - field associated with this simplified process reads : where the dots indicate time derivatives , refers to the fraction of individuals that are uninformed at time , to those that are informed and retransmitting the information , and to those that got the information and stop retransmitting , is the average , and the average ( out-)degree .notice that for directed networks , the average in- and out - degree are the same .the average cascade size is the average number of individuals that got the information once the process is extinguished , i.e. , when there is no more users retransmitting the information and .it is easy to realize from eq .( [ eq : mfa ] ) that $ ] , and . using the fact that and , we obtain that the number of users that once got the information , i.e. , the average tree size , , reads : } \ , , \end{aligned}\ ] ] where is the number of users in the system .( [ eq : r ] ) defines a self - consistency equation for . 
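since the symbols of eqs . ( [ eq : mfa ] ) and ( [ eq : r ] ) are not reproduced here , the following sketch assumes the standard sir final - size form of the self - consistency relation , R = N [ 1 - exp( - <k> <lambda> dt R / N ) ] , which is consistent with the limiting behaviour discussed in this section ; the functional form and the parameter values are therefore assumptions .

```python
import numpy as np

def average_tree_size(N, k_mean, lam_mean, dt, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for a mean-field self-consistency relation of
    the SIR final-size type,
        R = N * (1 - exp(-<k> * <lambda> * dt * R / N)),
    an assumed form: it vanishes below dt_c = 1/(<k><lambda>) and approaches
    N far above it, in line with the limits discussed in the text."""
    R = 1.0                      # seed: a single initially informed user
    for _ in range(max_iter):
        R_new = N * (1.0 - np.exp(-k_mean * lam_mean * dt * R / N))
        if abs(R_new - R) < tol:
            break
        R = R_new
    return R

if __name__ == "__main__":
    N, k_mean, lam_mean = 1e6, 10.0, 0.01   # illustrative values (lambda per hour)
    for dt in (5.0, 10.0, 20.0, 40.0):      # monitoring time in hours
        print(dt, round(average_tree_size(N, k_mean, lam_mean, dt)))
```

below the threshold the iteration collapses to a vanishing fraction of the population , while above it a macroscopic fraction is reached ; this mean - field estimate ignores degree heterogeneity and in - out degree correlations , which the generating - function treatment later in this section addresses .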
from this expressionwe can derive the critical monitoring time required to observe infinite tree sizes : this means that if nodes retransmit the information that they get for a period the resulting trees can be arbitrarily large .it is important to make the distinction between the duration of cascade events , i.e. , the time elapsed between the first and last phone call of the tree , and the monitoring time .for instance ,when only a small fraction of the cascades percolates and consequently the average tree duration is shorter than .notice that the monitoring time refers ( and controls ) the individual behavior of users as callers .in fact , we are asking ourselves how the individual behavior of users should be in order to allow a rumor to take over a macroscopic fraction of the system .we will come back to this point later on and look for an interpretation of this relevant quantity .now let us consider the other hypothesis mentioned in the introduction , i.e. , let us imagine that information travels in both directions of the directed edges . in this case , we can still use eq .( [ eq : r ] ) to describe the spreading process , but parameters have a different meaning . the average rate activity now represents the activity of an undirected edge , i.e. , it is the average of . finally , the average out - degree has to be replaced by the average ( undirected ) degree , since now edges are undirected .thus eq .( [ eq : mfathreshold ] ) becomes : .our goal is to obtain an expression for the probability of finding a user with in - degree and out - degree for a given .the in - degree represents the number of different users that have called the user during .similarly , denotes the number of different users that the user has called during the monitoring time .this means that we are considering that there is an underlying static network whose directed links are switched on and off this dynamics defines a ( directed ) dynamical network . in order to compute , we need to know the probability that an edge is activated within a period .more precisely , we need to know for a node that has in - degree and out - degree , the probability per in - edge and per out - edge of being used within a period . then , if for instance , the probability that two of the three in - edges are used while one is not , is .we denote the probability of finding a node with in - degree and out - degree : .the probability refers to the static underlying network that contains all connections that have occurred in the whole database .thus , assuming that the in - degree of a node is uncorrelated with the out - degree of other nodes , we can express as : where we have also assumed that the activity of an edge is independent of the activity of the other edges , and used the binomial distribution approximation explained above . to simplify eq .( [ eq : pkiko ] ) , we make another assumption : the edge activity is independent of the in- or out - degree of the node , i.e. . edges exhibit a heterogeneous distribution of communication rates .let us recall that is the rate at which an edge is used within , i.e. , the number of phone calls through the edge divided the total time .we need an estimate for the probability that the edge is used during . knowing and assuming a poissonian process , the probability that the edge is used is . 
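the per-edge activation probability just introduced can be estimated directly from the call records; a minimal sketch (variable names are ours):

```python
import numpy as np

def edge_activation_probability(n_calls, total_time, T):
    """Probability that a directed edge is used at least once within a window of
    length T, assuming the calls on the edge form a Poisson process with rate
    lambda_ij = n_calls / total_time, as estimated from the database."""
    lam = np.asarray(n_calls, dtype=float) / float(total_time)
    return 1.0 - np.exp(-lam * T)

# example: an edge carrying 4 calls in a 30-day database, monitored for 24 hours
print(edge_activation_probability(4, 30 * 24.0, 24.0))
```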
under these assumptions can be estimated as : if now we consider that information travels in both direction of the edges , we need to take into account the undirected degree distribution exhibited by the nodes .along similar lines , we can express the probability of finding a user of undirected degree for a given as : where refers to the degree distribution of the undirected static social network , and is the probability that an edge connected to a node of degree is used during .we now look for an expression of the critical monitoring time , which is sensitive to the topological structure of the underlying static network . as before , we start by assuming that the underlying static network is directed below we address the alternative case where information travels in both direction on edges . to derive the percolation threshold from eq .( [ eq : pkiko ] ) , we look for the associated generating function . after exchanging the order of the sums in order to use the binomial expansion ,we obtain : the process described here corresponds to a situation where , for a given , some edges are activated while others remain silent .we want to know whether the ( directed ) network of activated edges contains giant trees .as explained in , the condition for having infinite clusters in ( static ) directed networks is .we can evaluate this condition for our dynamical network using eq.([eq : generating ] ) , and recalling that , and .the evaluation of the above mentioned condition leads to : where and .notice that if the underlying network does not exhibit a giant component for , condition ( [ eq : threshold_t ] ) can not be fulfilled for any . to gain some intuition ,let us assume that , so that from eq .( [ eq : t ] ) we can approximate as so that : let us now consider the following two extreme cases : a ) a fully in - out degree correlated underlying static network , where , for which we get and b ) a fully in - out degree uncorrelated underlying static network , i.e. , , where we find that which is exactly the mean - field prediction given by eq .( [ eq : mfathreshold ] ) . in fig .[ fig : firstmoments ] we compare the average tree size obtained from the data with the above discussed theoretical arguments .[ fig : firstmoments ] shows that there is large difference between the thresholds corresponding to the extreme cases given by eqs .( [ eq : thresholdcorrelation ] ) and ( [ eq : thresholdfullycorrelated ] ) , more than hours , which indicates that the spreading process strongly depends on the in - out degree correlations .how can we understand that correlations have such an important effect on the spreading process ? 
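before turning to that question, the two bracketing estimates derived above can be evaluated numerically. the sketch below uses the small-t approximation p ≈ lambda t, under which the directed-percolation criterion reads lambda t <k_in k_out> / <k> = 1, as our reading of the derivation suggests; the degree sequences and the rate are placeholders.

```python
import numpy as np

def critical_monitoring_times(k_in, k_out, lam_bar):
    """Bracketing estimates of the critical monitoring time T_c under the small-T
    approximation p ~ lam_bar * T, where the directed-percolation criterion becomes
    lam_bar * T * <k_in k_out> / <k> = 1 (a sketch; inputs are placeholder arrays).

    k_in, k_out : in- and out-degree sequences of the underlying static network,
    lam_bar     : average per-edge communication rate."""
    k_in = np.asarray(k_in, dtype=float)
    k_out = np.asarray(k_out, dtype=float)
    k_mean = k_out.mean()  # equals <k_in> for any directed network
    t_fully_correlated = k_mean / (lam_bar * np.mean(k_out ** 2))   # k_in = k_out node by node
    t_uncorrelated = 1.0 / (lam_bar * k_mean)                       # <k_in k_out> = <k>**2
    t_empirical = k_mean / (lam_bar * np.mean(k_in * k_out))        # measured joint moment
    return t_fully_correlated, t_uncorrelated, t_empirical
```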
if we look at either the in- or the out - degree distribution of the underlying static network , we observe that both of these distributions exhibit fat tails , see fig .[ fig : degree_correlations]a .the heterogeneity of user degree implies the presence of super - spreaders as well as super - receivers .though the relevance of super - spreaders have been well identified and understood since many years , the existence and role of super - receivers have remained relatively unexplored , except for a few noticeable works where the relevance of in - out degree correlations were acknowledged .if information travels in both direction on edges , we have to make use of eq .( [ eq : pkuncorrelated ] ) .its associated generating function reads : where as explained above , refers to the degree distribution of the undirected underlying network and to the probability that an undirected edge is used during . to obtain , we make use of the well - known percolation criterion for uncorrelated undirected networks , . as before , if we assume that , then : this result has been derived by newman in and recently used in the context of a mobile phone network by miritello et al . in . the estimated value for using eq .( [ eq : uncorrelatedthreshold ] ) is hours .we recall that if the information travels along the direction given by the directed edges , the critical corresponds to eq .( [ eq : thresholdcorrelation ] ) . the prediction given by eq .( [ eq : uncorrelatedthreshold ] ) is close to that obtained from eq .( [ eq : thresholdfullycorrelated ] ) , which corresponds the fully correlated scenario discussed above , see fig .[ fig : firstmoments ] .notice that eq .( [ eq : thresholdcorrelation ] ) never reduces to eq .( [ eq : uncorrelatedthreshold ] ) .this indicates that that directed , that is to say , intentional or active information propagation is qualitatively different from unintentional information spreading , i.e. , when information travels in both direction along edges .for instance , while for unintentional spreading depends always on the second moment of the degree distribution , for intentional spreading it may not depend on it , if the network is in - out degree uncorrelated . in the following we focus on the statistical features of the trees .we start out by estimating the size distribution under the assumption that information travels in the direction of the ( directed ) edges . in order to get an analytical estimate of , we neglect node - node correlations in the underlying static network as well as temporal correlations among nodes. it will be clear that node - node ( topological ) correlations can be ignored for , while temporal correlations are always too weak to impact the spreading dynamics ( see below ) .we further assume that trees are fully determined by the out - degree , but as we will see , the assumption breaks down as we approach .these simplifications allow us to estimate the probability of finding a tree of size one as the probability that the root node has out - degree , i.e. , .the probability of finding a tree of size two has to be equal to the probability that the root node has out - degree while simultaneously its unique branch has to lead to a sub - cascade of size , i.e. 
, .more generally , is related to with .this relation can be expressed in a compact and elegant way in term of the generating function which obeys the following self - consistency equation : where is the generating function of the out - degree distribution that is defined as : the cascade size distribution can be obtained from the derivatives of as in summary , eq.([eq : gz ] ) provides us with a method to derive under the assumption that the tree statistics is given by a galton - watson ( gw ) process that is fully determined by .notice that for a given , the out - degree distribution can be approximated using eq .( [ eq : generating ] ) as indicated by eq .( [ eq : outout ] ) . this approximation starts to fail for large values of due to the non - homogenous activity of node over time .alternatively , can be directly measured from the data for each .now we look for an estimate of the depth distribution under the same assumptions , i.e. , the tree statistics is given by a gw process fully determined by .we define , with .we look for the probability that a tree gets extinguished at depth less or equal than .the probability obeys .using a more compact notation , this relation reads : on the other hand , the probability that a tree gets extinguished at is directly the probability that a node does not make phone any phone call in a period , i.e. , .thus , using the above given definition of , we rephrase eq.([eq : cond_ed ] ) as : we can draw the probability from , as : if we assume that information can flow in both direction of the edges , the above depicted gw process for a directed underlying network can be easily adapted to an undirected network , the main difference being that the gw process is now fully determined by the ( undirected ) degree distribution see eq .( [ eq : pkuncorrelated ] ) . except for this , the computations of and follows similar lines , taking into account that a node of degree can contribute at most with new nodes to the growing tree .[ fig : comparison_dist ] shows a comparison between eq .( [ eq : gz ] ) ( analyt . ) , simulations of the proposed gw process ( gw synt . ) , and the tree statistics obtained from mobile phone data ( data ) .the figure indicates that as long as , the proposed theory provides a good estimate for the tree statistics .as ( and still for ) , the theory , that neglects ( topological ) node - node correlations as well as causality effects , systematically underestimates the probability of observing large trees . the origin of this discrepancy can be rooted either in the presence of node - node correlations in the underlying network , or in strong causality effects arising from temporal correlations . in the followingwe explore the possibility of local causality effects in the form of causality loops .closed causality patterns do not contribute to the spreading of the information and are not visible at the level of the tree statistics , since they do not involve the addition of new informed users to the set of informed ones .we consider two types of patterns : the first pattern involves a three - node chain , where user calls user at time and user calls user at some later time , with .we define the reciprocity coefficient as the fraction of three - node chains where , fig .[ fig : sketch_cycle]a .along similar lines , we define the dynamical clustering coefficient as the fraction of four - node chains where the first and last node are the same , see fig .[ fig : sketch_cycle]b . 
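returning to the galton-watson description of the trees, the "gw synt." statistics quoted above can be reproduced by direct simulation. a minimal sketch, assuming the branching process is fully determined by a given out-degree distribution (here a placeholder):

```python
import numpy as np

def simulate_gw_tree(p_k, rng, max_size=100_000):
    """Grow one Galton-Watson tree: every node spawns k children with probability
    p_k[k]. Returns (size, depth), where depth counts generations below the root;
    growth is capped at max_size to keep supercritical runs finite."""
    ks = np.arange(len(p_k))
    size, depth, frontier = 1, 0, 1
    while frontier > 0 and size < max_size:
        children = int(rng.choice(ks, size=frontier, p=p_k).sum())
        if children == 0:
            break
        size += children
        depth += 1
        frontier = children
    return size, depth

rng = np.random.default_rng(1)
p_k = np.array([0.55, 0.25, 0.15, 0.05])   # hypothetical out-degree distribution for a given T
trees = np.array([simulate_gw_tree(p_k, rng) for _ in range(50_000)])
sizes, depths = trees[:, 0], trees[:, 1]
# empirical P(S) and P(D) then follow from histograms of sizes and depths
```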
fig .[ fig : clustering ] shows that and converge for to an asymptotic value for both the original and rt data .though the number of three - node and four - node chains increase monotonically with , the fractions and , corresponding to closed loops , reach asymptotic values , which indicates that closed chains grow at the same rate with .the curves and for rt data corresponds to the fraction of cycles , involving two and three nodes , respectively , expected in absence of causality effects and induced by the topology of the underlying static network and edge activity rate .we observe that at short time scales the values of and obtained from the original data are well above those obtained from rt data .the abundance of causality loops in the original data with respect to rt data , reveals that at short time scales the original data exhibits strong causality effects .interestingly , the asymptotic value for the original and rt data do not coincide , being always larger for the original than for rt data .this indicates that for any value of the number of reciprocal phone calls in the original data is larger than what is expected in the absence of time correlations .this finding is likely to be related to the typical message - reply dynamics observed for instance in email data . on the contrary, the number of three - node loops in the original data seems to converge asymptotically with to the expected value in the absence of correlations .we have shown that the mobile phone data ( as many other communication data ) can be represented by a directed ( dynamical ) network , and argued that intentional information spreading requires information to flow in the direction given by the directed edges .we have explored this possibility and studied the topological properties of causality trees , such as size and depth , as a proxy to understand information propagation .we have introduced a time - scale in the system , the monitoring time , which provides a tolerance time that allows us to relate two phone calls as causally linked .the properties of the causality trees have been studied as function of this time - scale .our first observation is that the representation of the data in terms of directed edges reveals the existence of super - spreaders and super - receivers .we have shown that the tree statistics , respectively the information spreading process , are extremely sensitive to the in - out degree correlation of the users .moreover , we have clearly pointed out that the spreading dynamics under the assumption of intentional spreading is qualitatively different from that obtained under the assumption of unintentional spreading , i.e. , when information flows in both direction along edges ( see eqs .( [ eq : thresholdcorrelation ] ) , ( [ eq : thresholdfullycorrelated ] ) , ( [ eq : thresholdfullyuncorrelated ] ) , and ( [ eq : uncorrelatedthreshold ] ) , and discussion below these equations ) . 
the good agreement at short time scales between the tree statistics obtained from the original data and the theoretical predictions that neglect time correlations and topological node-node correlations has allowed us to conclude that neither type of correlation has a strong effect on the tree statistics there. this means that at short time scales the trees can be roughly described by a simple gw process. at larger time scales, however, the tree statistics can no longer be explained by this simple theory. the tree statistics obtained from the randomized time-stamp data indicate that topological node-node correlations, present in the original data but neglected in the theory, dominate the spreading dynamics at these time scales. moreover, we have learned that these topological correlations promote larger trees. these findings, together with the observation that a given piece of information, e.g. a rumor, would require users to retransmit it for more than 30 hours in order to cover a macroscopic fraction of the system, suggest that there is no intentional broadcasting of information. in fact, the very idea that information spreads beyond nearest and second-nearest neighbors, i.e. beyond a small vicinity, is called into question. at the local level, however, we have observed that time correlations enhance the number of dynamical closed patterns, an effect particularly evident at short time scales. it is only at this level that genuine causality effects, and consequently intentional information propagation, are detectable. nevertheless, we stress that these observations apply exclusively to local information circulation. the analysis performed here can be applied to other communication network data such as blog and email data, as well as to mobile phone data in the presence of exceptional events like natural disasters, where different tree structures, and consequently different tree statistics, are likely to emerge. finally, the cascade theory we have implemented here applies to directed networks in the absence of node-node (topological) correlations; generalizations to account for node-node correlations should be possible. the mobile phone data we have analyzed correspond to one month of phone calls from a european mobile phone provider. to guarantee confidentiality, phone numbers were anonymized. the data comprise 1,044,397 users, which form a connected component, and 13,983,433 phone calls among these users. since our goal has been to study information transmission, we have restricted ourselves to ``successful'' phone calls, i.e. those where the receiver answered the call.
there is an average activity of phone calls per second , which leads to an average of phone calls per second per directed edge .the underlying static network is characterized by , and .using undirected edges we obtain and , and average edge activity phone calls per second .some aspects of this dataset have been described in , and some features of the underlying static network has been analyzed in .we have performed the data analysis on three datasets : the original data set and two copies of it , one where we have reshuffled the time stamps of the phone calls , which we refer to as rt data , and another where we have randomized the order to the phone calls of every user , which we refer to as rc data .the rt dataset is an exact copy of the original data set , where source and destination of every phone call remains the same , but the time - stamp of phone calls are randomly exchanged .the new dataset is then ordered according to the new time - stamps . as result of this procedure, every node exhibits the same in- and out - degree as in the original data set .moreover , the activity rate per ( directed ) edge and user remain the same , as well as the global activity rate of the dataset ( day - night and weekly cycles , etc ) .we thank e. altmann , c.f .lee , and f. vazquez for valuable comments , and the max planck society for financial support .10 urlstyle [ 1]doi:#1 [ 1 ] [ 2 ] _ _ _ _ _ _ _ _ _ _ _ _ _ _ key : # 1 + annotation : # 2 _ _ _ _ _ _ _ _ _ _ _ _ _ _ onnela j , saramki j , hyvnen j , szab g , lazer d , et al .( 2007 ) structure and tie strengths in mobile communication networks .proceedings of the national academy of sciences 104 : 7332 .onnela j , saramki j , hyvnen j , szab g , menezes m , et al .( 2007 ) analysis of a large - scale weighted network of one - to - one human communication .new journal of physics 9 : 179 .gonzlez m , hidalgo c , barabsi a ( 2008 ) understanding individual human mobility patterns .nature 453 : 779782 .song c , qu z , blumm n , barabsi a ( 2010 ) limits of predictability in human mobility .science 327 : 1018 .karsai m , kivel m , pan rk , kaski k , kertsz j , et al .( 2011 ) small but slow world : how network topology and burstiness slow down spreading .physical review e 83 : 025102(r ) .lambiotte r , blondel v , de kerchove c , huens e , prieur c , et al . 
( 2008 ) geographical dispersal of mobile communication networks .physica a : statistical mechanics and its applications 387 : 53175325 .candia j , gonzlez m , wang p , schoenharl t , madey g , et al .( 2008 ) uncovering individual and collective human dynamics from mobile phone records .journal of physics a : mathematical and theoretical 41 : 224015 .miritello g , moro e , lara r ( 2011 ) dynamical strength of social ties in information spreading .physical review e 83 : 045102(r ) .bagrow j , wang d , barabsi a ( 2011 ) collective response of human populations to large - scale emergencies .plos one 6 .castellano c , loreto v ( 2009 ) statistical physics of social dynamics .reviews of modern physics 81 : 591646 .anderson r , may r ( 1991 ) infectious diseases of humans : dynamics and control .new york .daley d , kendall d ( 1964 ) epidemics and rumours .nature 204 : 1118 .liu z , lai y , ye n ( 2003 ) propagation and immunization of infection on general networks with both homogeneous and heterogeneous components .physical review e 67 : 031911 .moreno y , nekovee m , pacheco a ( 2004 ) dynamics of rumor spreading in complex networks .physical review e 69 : 066130 .moreno y , nekovee m , vespignani a ( 2004 ) efficiency and reliability of epidemic data dissemination in complex networks .physical review e 69 : 055101(r ) .newman m ( 2002 ) spread of epidemic disease on networks .physical review e 66 : 016128 .kovanen l , karsai m , kaski k , kertesz j , saramki j ( 2011 ) temporal motifs in time - dependent networks .arxiv preprint 11075646 .schwartz n , cohen r , barabsi a , havlin s ( 2002 ) percolation in directed scale - free networks . physical review e 66 : 015104 .kendall m , stuart a ( 1967 ) the advanced theory of statistics : inference and relationship , volume 2 .griffin .pastor - satorras r , vespignani a ( 2001 ) epidemic spreading in scale - free networks .physical review letters 86 : 32003203 .bogu m , serrano m ( 2005 ) generalized percolation in random directed networks .physical review e 72 : 016106 .zamora - lpez g , zlati v , zhou c , tefani h , kurths j ( 2008 ) reciprocity of networks with degree correlations and arbitrary degree sequences .physical review e 77 : 016106 .cohen r , erez k , ben - avraham d , havlin s ( 2000 ) resilience of the internet to random breakdowns .physical review letters 85 : 46264628 .harris t ( 2002 ) the theory of branching processes .dover publications .barabsi a ( 2005 ) the origin of bursts and heavy tails in human dynamics .nature 435 : 207211 .vzquez a , oliveira j , dezs z , goh k , kondor i , et al .( 2006 ) modeling bursts and heavy tails in human dynamics .physical review e 73 : 036127 .gruhl d , guha r , liben - nowell d , tomkins a ( 2004 ) information diffusion through blogspace . in : proceedings of the 13th international conference on world wide web .491501 .leskovec j , mcglohon m , faloutsos c , glance n , hurst m ( 2007 ) cascading behavior in large blog graphs .arxiv preprint 07042803 .cointet j , roth c ( 2009 ) socio - semantic dynamics in a blog network . in : proceedings of the 12th international conference on computational science and engineering .ieee , volume 4 , pp .114121 .malmgren r , stouffer d , motter a , amaral l ( 2008 ) a poissonian explanation for heavy tails in e - mail communication .proceedings of the national academy of sciences 105 : 1815318158 .stoica a , prieur c ( 2009 ) structure of neighborhoods in a large social network . 
in: international conference on computational science and engineering, 2009. ieee, volume 4, pp. 26-33.
without direct access to the information that is being exchanged, traces of information flow can still be obtained from the temporal sequences of user interactions. these sequences can be represented as causality trees whose statistics result from a complex interplay between the topology of the underlying (social) network and the time correlations among the communications. here we study causality trees in mobile-phone data, which can be represented as a dynamical directed network. this representation of the data reveals the existence of super-spreaders and super-receivers. we show that the tree statistics, and hence the information spreading process, are extremely sensitive to the in-out degree correlations exhibited by the users. we also find that a given piece of information, e.g. a rumor, would require users to retransmit it for more than 30 hours in order to cover a macroscopic fraction of the system. our analysis indicates that topological node-node correlations of the underlying social network not only allow information loops but also promote information spreading, whereas temporal correlations, and therefore causality effects, are only visible as local phenomena and on short time scales. these results are obtained through a combination of theory and data analysis techniques.
even after more than 60 years there remain many problems on the understanding of quantum mechanics . from the early days ,a main concern of the majority of physicists reflecting on the foundations of the theory has been the question of understanding the nature of the quantum probability . at the other hand, it was a problem to understand the appearance of probabilities in classical theories , since we all agree that it finds its origin in a lack of knowledge about a deeper deterministic reality .the archetypic example is found in thermodynamics , where the probabilities associated with macroscopic observables such as pressure , volume , temperature , energy and entropy are due to the fact that the real state of the entity is characterized deterministically by all the microscopic variables of positions and momenta of the constituting entities , the probabilities describing _ our _ lack of knowledge about the microscopic state of the entity .the variables of momenta and positions of the individual entities can be considered as hidden variables , present in the underlying reality .this example can stand for many of the attempts that have been undertaken to explain the notion of quantum probability , and the underlying theories are called hidden variable theories . in general , for a hidden variable theory , one aims at constructing a theory of an underlying deterministic reality , in such a way that the quantum observables appear as observables that do not reach this underlying hidden reality and the quantum probabilities finding their origin in a lack of knowledge about this underlying reality .von neumann gave a first impossibility proof for hidden variable theories for quantum mechanics .it was remarked by bell that in the proof of his no - go theorem , von neumann had made an assumption that was not necessarily justified , and bell explicitly constructs a hidden variable model for the spin of a spin- quantum particle .bell also criticizes the impossibility proof of gleason , and he correctly points out the danger of demanding extra mathematical assumptions without an exact knowledge on their physical meaning .very specific attention was paid to this danger in the study of kochen and specker , and their impossibility proof is often considered as closing the debate .we can state that each of these impossibility proofs consists in showing that a hidden variable theory ( under certain assumptions ) gives rise to a certain mathematical structure for the set of observables of the physical system under consideration , while the set of observables of a quantum system does not have this mathematical structure .therefore it is impossible to replace quantum mechanics by a hidden variable theory ( satisfying the assumptions ) . 
to be more specific , if one works in the category of observables , then a hidden variable theory ( under the given assumptions ) gives rise to a commutative algebraic structure for the set of observables , while the set of observables of a quantum system is non - commutative .if one works in the category of properties ( yes - no observables ) then a hidden variable theory ( satisfying the assumptions ) has always a boolean lattice structure for the set of properties while the lattice of properties of a quantum system is not boolean .if one works in the category of probability models , then a hidden variable theory ( satisfying the assumptions ) has always a kolmogorovian probability model for the set of properties while the quantum probability model is not kolmogorovian .most of the mathematically oriented physicists , once aware of these fundamental structural differences , gave up the hope that it would ever be possible to replace quantum mechanics by a hidden variable theory .however , it turned out that the state of affairs was even more complicated than the structural differences in the different mathematical categories would make us believe .we have already mentioned that the no - go theorems for hidden variables , from von neumann to kochen and specker , depended on some assumptions about the nature of these hidden variable theories .we shall not go into details about the specific assumptions related to each specific no - go theorem , because in the mean time it became clear that there is one central assumption that is at the core of each of these theorems : the hidden variables have to be hidden variables of the _ state _ of the physical entity under consideration and specify a deeper underlying reality of the physical entity itself , independent of the specific measurement that is performed .therefore we shall call them state hidden variables .this assumption is of course inspired by the situation in thermodynamics , where statistical mechanics is the hidden variable theory , and indeed , the momenta and positions of the molecules of the thermodynamical entity specify a deeper underlying reality of this thermodynamical entity , independent of the macroscopic observable that is measured .it was already remarked that there exists always the mathematical possibility to construct so - called contextual hidden variable models for quantum particles , where one allows the hidden variables to depend on the measurement under consideration ( e.g. the spin model proposed by bell ) . 
for the general casewe refer to a theorem proved by gudder .however , generally this kind of theories are only considered as a mathematical curiosum , but physically rather irrelevant .indeed , it seems difficult to conceive from a physical point of view that the nature of the deeper underlying reality of the quantum entity would depend on the measurement to be performed .to conclude we can state that : ( 1 ) only state hidden variable theories were considered to be physically relevant for the solution of the hidden variable problem , ( 2 ) for non - contextual state hidden variable theories the no - go theorem of kochen and specker concludes the situation ; it is not possible to construct a hidden variable theory of the non - contextual state type that substitutes quantum mechanics .what we want to point out is that , from a physical point of view , it is possible to imagine that not only the quantum system can have a deeper underlying reality , but also the physical measurement process for each particular measurement .if this is true , then the physical origin of the quantum probabilities could be connected with a lack of knowledge about a deeper underlying reality of the measurement process . in idea was explored and it has been shown that such a lack of knowledge gives indeed rise to a quantum structure ( quantum probability model , non - commutative set of observables , non - boolean lattice of properties ) .this uncertainty about the interaction between the measurement device and the physical entity can be eliminated by introducing hidden variables that describe the fluctuations in the measurement context .however , they are not state hidden variables , they rather describe an underlying reality for each measurement process , and therefore they have been called hidden measurements , and the corresponding theories hidden measurement theories. suppose that weperform a measurement on a physical system and that there is a lack of knowledge on the measuring process connected with , in such a way that there exist hidden measurements , where each has the same outcome set as , and each is deterministic , which means that for a given state of the system , for each the hidden measurement has a determined outcome .now the fundamental idea is that each time when the measurement is performed , it is actually one of the , each with a certain probability , that takes place in the underlying hidden reality . 
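operationally, the hidden variable is drawn anew for every run of the measurement, and each value selects a deterministic outcome. a schematic monte carlo sketch of this averaging (all callables are hypothetical placeholders):

```python
import numpy as np

def hidden_measurement_statistics(state, outcome_of, draw_lambda, n_runs=100_000, seed=0):
    """Monte Carlo over the hidden measurements: draw_lambda(rng) samples the hidden
    variable of the measurement process, outcome_of(state, lam) is the deterministic
    outcome of the corresponding hidden measurement. The observed probabilities are
    the averages over the lack of knowledge about lam."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_runs):
        o = outcome_of(state, draw_lambda(rng))
        counts[o] = counts.get(o, 0) + 1
    return {o: c / n_runs for o, c in counts.items()}
```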
in is shown that a hidden measurement model can be constructed for any arbitrary quantum mechanical system of finite dimension , and the possibility of constructing a hidden measurement model for an infinite dimensional quantum system can be found in .although the models presented in these papers illustrate our point about the possibility of explaining the quantum probabilities in this way , there is always the possibility to construct more concrete macroscopic models , only dealing with real macroscopic entities and real interactions between the measurement device and the entities , that give rise to quantum mechanical structures .it is our point of view that these realistic macroscopic models are important from a physical and philosophical point of view , because one can visually perceive how the quantum - like probability arises .one of the authors introduced such a real macroscopic model for the spin of a spin- quantum entity .when he presents this spin model for an audience , it was often raised that this kind of realistic macroscopic model can only be built for the case of a two - dimensional hilbert space quantum entity , because of the theorem of gleason and the paper of kochen and specker .gleason s theorem is only valid for a hilbert space with more than two dimensions and hence not for the two - dimensional complex hilbert space that is used in quantum mechanics to describe the spin of a spin- quantum entity . in the paper of kochen and specker also a spin model for the spin of a spin- quantum entityis constructed , and a real macroscopic realization of this spin model is proposed .they point out on different occasions that such a real model can only be constructed for a quantum entity with a hilbert space of dimension not larger than two .the aim of this paper is to clarify this dimensional problem .therefore we shall construct a real macroscopic physical entity and measurements on this entity that give rise to a quantum mechanical model for the case of a three - dimensional real hilbert space , a situation where gleason s theorem is already fully applicable .we remark that one of the authors presented a model for a spin- quantum entity that allows in a rather straightforward way a hidden measurement representation .nevertheless , since he only considered a set of coherent spin- states ( i.e. , a set of states that spans a three - dimensional hilbert space , but that does not fill it ) his model can not be considered as a satisfactory counter argument against the no - go theorems . in the first two sections ,we briefly give the two - dimensional examples of aerts and kochen - specker and analyze their differences . in section 4we investigate the dimensional problem related to the possible hidden variable models .afterwards , we construct a hidden measurement model with a mathematical structure for its set of states and observables that can be represented in a three - dimensional real hilbert space .the physical entity that we consider is a point particle that can move on the surface of the unit sphere .every unit vector represents a state of the entity . 
for every point of define a measurement as follows : a rubber string between and its antipodal point catches the particle that falls orthogonally and sticks to it .next , the string breaks somewhere with a uniform probability density and the particle moves to one of the points or , depending on the piece of elastic it was attached to .if it arrives in we will give the outcome to the experiment , in the other case we will say that the outcome has occurred .after the measurement the entity will be in a new state : in the case of outcome and in the other case .taking into account that the elastic breaks uniformly , it is easy to calculate the probabilities for the two results : with .we have the same results for the probabilities associated with the spin measurement of a quantum entity of spin- ( see ) , so we can describe our macroscopic example by the ordinary quantum formalism where the set of states is given by the points of a two - dimensional complex hilbert space .clearly , we can also interpret this macroscopic example as a hidden variable model of the spin measurement of a quantum entity of spin- . indeed, if the point where the string disintegrates is known , the measurement outcome is certain .the probabilities in this model appear because of our lack of knowledge of the precise interaction between the entity and the measurement device .every spin measurement can be considered as a class of classical spin measurements with determined outcomes , and the probabilities are the result of an averaging process . in this exampleit is clear that the hidden variable is neither a variable of the entity under study nor a variable pertaining to the measurement apparatus .rather , it is a variable belonging to the measurement process as a whole .in kochen and specker s model , again a point on a sphere represents the quantum state of the spin- entity .however , at the same time the entity is in a hidden state which is represented by another point of , the upper half sphere with as its north pole , determined in the following way .a disk of the same radius as the sphere is placed perpendicular to the line which connects with the center of the sphere and centred directly above .a particle is placed on the disk that is now shaken `` randomly '' , i.e. , in such a way that the probability that the particle will end up in a region of the disk is proportional to the area of .the point is then the orthogonal projection of the particle .the probability density function is where is the angle between and .if a measurement is made in the direction the outcome `` spin up '' will be found in the case that and `` spin down '' otherwise . as a result of the measurement the new state of the entity will be in case of spin up and otherwise . the new hidden state is now determined as before , the disk being placed now at if the new state is and at if otherwise .it can be shown that the same probabilities as for the quantum spin- entity occur .it is important to remark that the hidden variable here pertains to the entity under study , as was made clear by using the expression `` hidden state '' . but is this really the case ?as we look closer we see that for every consecutive spin measurement to reveal the correct probabilities , we need each time a randomisation of the hidden state . thus every time a measurement occurs the hidden variable has to be reset again . 
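a small monte carlo run confirms that the elastic measurement of section 3 reproduces the spin-1/2 probabilities; the parametrisation of the string as the interval [-1, 1] is ours:

```python
import numpy as np

def elastic_model_probability(theta, n_runs=200_000, seed=0):
    """Monte Carlo for the elastic (string) measurement of section 3.

    The state at angle theta from the measurement direction u projects onto the
    string at height cos(theta), with the string parametrised from -1 (at -u) to
    +1 (at u). The string breaks at a uniformly distributed point b; the particle
    is pulled to u whenever b lies below its projection point. The estimate should
    approach (1 + cos(theta)) / 2 = cos(theta / 2) ** 2, the quantum prediction
    for a spin-1/2 entity."""
    rng = np.random.default_rng(seed)
    b = rng.uniform(-1.0, 1.0, size=n_runs)
    return float(np.mean(b < np.cos(theta)))

theta = np.pi / 3
print(elastic_model_probability(theta), 0.5 * (1.0 + np.cos(theta)))
```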
in practice this means that for every measurement a new value of the variable will be needed .thus we can make the philosophical important step to remove this `` hidden state '' from the entity and absorb it within the context of the measurement itself , indeed a reasonable thing to do .once this is done , the analogy with the model of section 3 is obvious .but it is also clear that a new idea has been introduced , namely the shift of the hidden variable from the entity towards the measurement process .this is not only a new feature for a hidden variable theory , but also a natural way out of the traps of the no - go theorems .as was pointed out by several authors ( see ) , it is possible to prove that `` reasonable '' hidden variable theories do nt exist for hilbert spaces with a dimension greater than two .moreover , other arguments show the necessity for a proof of existence of a hidden variable model with a more than two - dimensional state space .there is for instance the theorem of gleason which states that for a propositional system corresponding to a three - dimensional real hilbert space there exists a unique probability function over the set of propositions which satisfies some very plausible properties .this means that every hidden variable theory ( satisfying these assumptions ) can only reveal the same probabilities as the quantum probability function and this would render the hidden variable theory redundant , because no extra information can be gained . to prove that the no - go theorems are too restrictive it is thus necessary ( but also sufficient ! )to give one `` reasonable '' example with a three - dimensional hilbert state space and this is exactly what we will do now .in this section we introduce a mechanistic macroscopic physical entity with a three - dimensional hilbert space quantum description . probably there exist models that are much more elegant than the one we propose , because the explicit realization would be rather non - trivial , but for our purpose it is sufficient to prove that there exists at least one. once again we remark that the system that we present is not a representation of a quantum mechanical entity , but a macroscopic physical entity that gives rise to the same probability structure as one encounters in quantum mechanics .first we propose the model and , for reasons of readability , we present a geometrical equivalent in . in this way we can easily prove the equivalence between the model and the quantum mechanical case . in section 5.3we shall study the probability structure of the model .the entity that we consider is a rod of length 2 which is fixed in its center point , both sides of which have to be identified .the set of states of the entity , i.e. the set of rays in euclidean -space , possibly characterized by one of the two end points of the rod ( denoted by ) , will be denoted by .the measurement apparatus consists of three mutual orthogonal rods , parallel with rays , fixed in 3-space .the entity and the measurement device are coupled for a measurement in the following way : ( see fig .1 ) : connection in : the rod floats in a slider which is fixed orthogonal to the rod of the measurement apparatus . connection in : the three interaction - rods are fixed to one slider , which floats on the `` entity - rod '' . we also fix three rubber strings between the entity - rod and the three rods of the measurement apparatus . 
the last ingredient that takes part in the interaction is something we call a `` random gun '' .this is a gun , fixed on a slider that floats on and turns around the entity - rod in such a way that : the gun is shooting in a direction orthogonal to the entity - rod . the movement and the frequency of shooting are at random but such that the probability of shooting a bullet in a certain direction , and from a certain point of the entity - rod is uniformly distributed , i.e. , the gun distributes the bullets uniformly in all directions and from all the points of the rod .if a bullet hits one of the connections , both the rod and string break , such that the entity can start moving ( there is one new degree of freedom ) , and it is clear that the two non broken strings will tear the entity into the plane of the measurement - rods to which it is still connected .0.3 cm _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ fig . 1 : practical realization of the model . with rods , sliders , strings and a random gun " we construct a device with a mathematical structure equivalent to the one for a quantum entity with a three - dimensional real hilbert state space . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to facilitate the calculation of the probabilities we will describe what happens during the measurement from a geometrical point of view .we know that a state of the entity is characterized by the angles between the rod and an arbitrary selected set of three orthonormal axis in euclidean -space .it is clear that this set of states corresponds in a one - to - one way with the states of an entity described in a three - dimensional real hilbert space .the set of measurements to be performed on this entity is characterized as follows .let be the three mutual orthogonal rays coinciding with the rods of the measurement apparatus . as a consequence , for a given state , and a given experiment , we have the three angles as representative parameters to characterize the state , relative to the measurement apparatus .we denote by , the orthogonal projections of on the three rays , forming a set of points representative for the couple .the geometrical description of the measurement process goes as follows : \i ) every point is connected with by a segment denoted by ] on the rod is .\ii ) next , one of the connections ] on the rod ( in fig . 
4 and fig .5 we suppose that ] and ] .the length of the projection of ] or ] , breaks with probability proportional to the length of the projection of ] and ] breaks and then ] .since we have .thus is the conditional probability for the breaking of ] and then ] is a ( generalised ) probability measure , there exists a unit vector such that , with the lattice of closed subspaces of the hilbert space . in our caseit asserts that the probability to obtain say necessarily takes the form that was given above in this paper .therefore it is implicit in the assumptions of the theorem that the probabilities only depend on the initial and final state of the entity .however , referring to our model we see that it is easy to invent other probability measures that actually do depend on the intermediate states of the entity and therefore do not satisfy the assumptions of gleason s theorem . for instance, one can imagine that the random gun is absent and the interaction rods break with a uniform probability density , resulting in the first probability being proportional to in stead of . since the hidden measurement approach is obviously a contextual theory that keeps the hilbert space framework for its state space , but situates the origin of the quantum probability in the measurement environment, there is no need for the existence of dispersion - free probability measures on as in the conventional non - contextual state hidden variable theories . j. von neumann , _ grudlehren _ , math . wiss .xxxviii , 1932 . j.s .bell , rev .mod . phys . * 38 * , 447 , 1966 . a.m. gleason , j. math .* 6 * , 885 , 1957 . s. kochen and e.p .specker , j. math . mech . * 17 * , 59 , 1967 . s.p .gudder , j. math .phys * 11 * , 431 ( 1970 ) . d. aerts , _ a possible explanation for the probabilities of quantum mechanics and a macroscopic situation that violates bell inequalities _ , in _ recent developments in quantum logic _ , eds .p. mittelstaedt et al . , in grundlagen der exacten naturwissenschaften , vol . *6 * , wissenschaftverlag , bibliographischen institut , mannheim , 235 , 1984 . d. aerts , _ a possible explanation for the probabilities of quantum mechanics _ , j. math .phys . * 27 * , 202 , 1986 . d. aerts , _ the origin of the non - classical character of the quantum probability model _ , in information , complexity and control in quantum physics , a. blanquiere , et al . ,eds . , springer - verlag , 1987 . b. coecke , found .* 8 * , 437 ( 1995 ) . b. coecke , helv .* 68 * , 396 ( 1995 ) .aerts , found .* 24 * , 1227 ( 1994 ) .aerts , int . j. theor. phys . * 34 * , 1165 ( 1995 ) .aerts , _ the entity and modern physics _ in _ identity and individuality of physical objects _ , ed .t. peruzzi , princeton university press , princeton , ( 1995 ) ..m . jauch and c. piron , helv .. acta . * 36 * , 827 ( 1963 ) .bell , rev .phys . * 38 * , 447 ( 1966 ) .
it is sometimes stated that gleason's theorem prevents the construction of hidden-variable models for quantum entities described in a more than two-dimensional hilbert space. in this paper, however, we explicitly construct a classical (macroscopic) system that can be represented in a three-dimensional real hilbert space, with the probability structure appearing as the result of a lack of knowledge about the measurement context. we briefly discuss gleason's theorem from this point of view. center leo apostel (clea) and foundations of the exact sciences (fund), brussels free university, pleinlaan 2, b-1050 brussels. diraerts.ac.be, bocoecke.ac.be, bdhooghe.be, fvalcken.ac.be
large - scale evolutionary equations for many - body systems arise ubiquitously in numerical modeling .the cases of particular interest and difficulty involve many configuration coordinates .for instance , the time - dependent _ schroedinger _equation describes the wavefunction , depending on all positions of all quantum particles or states of spins .another important example is the simulation of the joint probability density function either in continuous ( _ fokker - planck _ equation ) or discrete ( _ master _ equation ) variables . in case of configuration variables , solutions of these problemsare -variate functions . on the discrete level, one may typically assume that finite sets of admissible values are introduced for each coordinate independently ( e.g. a standard tensor product discretization grid ) .thereby , we do not discriminate the variables from the very beginning .however , the total amount of entries , defining the multivariate function , scales as . even if the _ dimension _ is of the order of hundreds and ( a modest size for spin dynamics problems ) , this becomes an enormously large number , and straightforward computations are unthinkable . to cope with such _ high - dimensional _ problems , one has to employ _( data-)sparse _ techniques , i.e. describe the solution by much less unknowns than .different state of the art approaches were developed for this task . among the most successful ones we may identify monte carlo methods , sparse grids , and tensor product representations . in this paper ,we follow the latter framework . _ tensor product methods_ rely on the idea of separation of variables : a -variate array ( or _ tensor _ ) may be defined or approximated by sums and products of univariate vectors .extensive information can be found in recent reviews and books , e.g. .a promising potential of tensor product methods stems from the fact that each univariate _ factor _ requires only elements to store instead of .if a tensor can be approximated up to the required accuracy with a moderate amount of such terms , the memory and complexity savings may be outstanding .there exist different tensor product _formats _ , i.e. rules how to map univariate factors to the initial array . in case of two dimensions ,one ends up with the well - known low - rank dyadic factorization of a matrix .this straightforward sum of direct products of vectors in higher dimensions is called cp format , and traces back to .however , the error function recast to the entries of the cp factors may not have a minimizer .therefore , even if all elements of a tensor are given , it is difficult to detect its cp rank .certain heuristics are available , for example , one may increase the rank one by one in a try - and - dispose als procedure or greedy algorithms .nevertheless , such methods typically exhibit a fast saturation of the convergence for rather modest ranks , and more accurate calculations become struggling .a family of reliable tools exploits recurrent two - dimensional factorizations to make the computations stable . in this work ,we focus on the simplest member of this family , rediscovered several times under different names : _ valence bond states _ , _ matrix product states _ ( mps ) and _ density matrix renormalization group _ ( dmrg ) in condensed matter quantum physics , and _ tensor train _ ( tt ) in numerical linear algebra .this format possesses all power of the recurrent model reduction concept , but the description of algorithms may benefit from some transparency and elegance . 
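as a concrete handle on the tt/mps format, a full array can be compressed into it by a sweep of reshapes and truncated svds (the standard tt-svd construction, which the text only alludes to). a minimal, non-optimised sketch:

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Compress a d-dimensional array a into TT cores of shape (r[k-1], n[k], r[k])
    by sweeping truncated SVDs over the sequential unfoldings; the per-step
    threshold is chosen so the overall relative error stays below eps."""
    dims = a.shape
    d = len(dims)
    delta = eps * np.linalg.norm(a) / max(np.sqrt(d - 1), 1.0)
    cores, r = [], 1
    rest = a.reshape(r * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # error of dropping ranks >= index
        rk = max(1, int(np.sum(tail > delta)))
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        rest = (s[:rk, None] * vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(rest.reshape(r, dims[-1], 1))
    return cores

# a separable function sampled on a tensor grid collapses to TT ranks equal to one
x = np.linspace(0.0, 1.0, 32)
a = np.exp(-(x[:, None, None] + x[None, :, None] + x[None, None, :]))
print([c.shape for c in tt_svd(a)])   # expected: [(1, 32, 1), (1, 32, 1), (1, 32, 1)]
```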
for higher flexibility in particular problems ,one may use more general tree - based constructions , such as the _ ht _ or _ extended tt / qtt - tucker _ formats .the dmrg is not only the name of the representation , but also a variety of computational tools .it was originally developed to find ground states ( lowest eigenpairs ) of high - dimensional hamiltonians of spin chains .the main idea behind the dmrg is the alternating optimization of a function ( e.g. rayleigh quotient ) on tensor format blocks in a sequence .it was noticed that this method may manifest a remarkably fast convergence , and later extensions to the energy function followed . besides the stationary problems ,the same framework was applied to the dynamical spin schroedinger equation .two conceptually similar techniques , the _ time - evolving block decimation _ ( tebd ) and the _ time - dependent dmrg _ ( tdmrg ) take into account the nearest - neighbor form of the hamiltonian to split the operator exponent into two parts using the trotter decompositions. for each part , the exact exponentiation may be performed , but at the cost of increased sizes of tensor format factors . to reduce the storage ,the truncated singular value decomposition is employed .thus , the method introduces two types of error : the truncated part of the trotter series , and the truncated part of the tensor format .if many time steps are required , the error may accumulate in a very unwanted manner : it lacks a reasonable separation of variables , and hence inflates the tensor format storage of the solution ( see e.g. ) . to stick the evolution to the manifold , generated by the tensor format , the so - called _ dirac - frenkel _ principlemay be exploited .this scheme projects the time derivative onto the tangent space of the tensor product manifold , and formulates the dynamical equations for the factor elements directly .the storage of the format is now fixed , but approximation errors become generally uncontrollable . in addition , the projected dynamical equations may be ill - conditioned . as an alternative approach, one may consider time just as another variable , since the dimension contributes linearly to the complexity of tensor product methods , and solve the global system for many time layers simultaneously . 
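the concrete spectral-in-time discretization is introduced in the next section; as a plain illustration of this space-time viewpoint, the sketch below assembles and solves a global system for a small linear ode x' = a x with a standard chebyshev differentiation matrix (the paper's scheme differs in details such as the exclusion of the initial node from the collocation grid and the "reversed" kronecker ordering):

```python
import numpy as np

def cheb(nt):
    """Chebyshev points t_j = cos(pi*j/nt), j = 0..nt, on [-1, 1] and the standard
    spectral differentiation matrix (cf. Trefethen, Spectral Methods in MATLAB)."""
    j = np.arange(nt + 1)
    t = np.cos(np.pi * j / nt)
    c = np.where((j == 0) | (j == nt), 2.0, 1.0) * (-1.0) ** j
    dt = t[:, None] - t[None, :]
    D = np.outer(c, 1.0 / c) / (dt + np.eye(nt + 1))
    D -= np.diag(D.sum(axis=1))
    return D, t

def chebyshev_ode_solve(A, x0, tau, nt=16):
    """Solve x'(s) = A x(s), x(0) = x0, on s in [0, tau] by collocation at Chebyshev
    nodes: the ODE is imposed at all nodes except the one carrying the initial value,
    and the resulting global space-time linear system is solved at once."""
    n = A.shape[0]
    D, t = cheb(nt)
    D = D * (2.0 / tau)                       # rescale the derivative from [-1, 1] to [0, tau]
    s = 0.5 * (t + 1.0) * tau                 # physical times; s[-1] = 0 carries x0
    M = np.kron(D[:-1, :-1], np.eye(n)) - np.kron(np.eye(nt), A)
    rhs = -np.kron(D[:-1, -1:], np.eye(n)) @ x0
    X = np.linalg.solve(M, rhs).reshape(nt, n)
    return s[:-1], X

# quick check against the exact solution of a 2x2 rotation system
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
s, X = chebyshev_ode_solve(A, np.array([1.0, 0.0]), tau=2.0)
print(np.max(np.abs(X[0] - np.array([np.cos(2.0), -np.sin(2.0)]))))  # tiny error (spectral accuracy)
```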
in this workwe follow this way .contrarily to , we use the spectral differentiation in time on the chebyshev grid , see .this makes the time discretization error negligible , and we show that a long - time dynamics is possible without explosion of the tensor format storage .the linear system arising from this scheme is always non - symmetric and requires a reliable solution algorithm in a tensor format .the traditional dmrg may suffer from a stagnation at a local minimum , far from the requested error level .recently , the _ alternating minimal energy _ ( amen ) method was proposed , which augments the tensor format of the solution in the dmrg technique by the tensor format of the global residual , mirroring the classical steepest descent iteration .this endows the method with the rank adaptivity and a guaranteed global convergence rate .importantly , the practically manifesting convergence appears to be much faster than the theoretical predictions , which yields a solution with a nearly - optimal tensor product representation for a given accuracy .another problem reported for tdmrg ( it takes place for the techniques in as well ) is the corruption of system invariants .even if the storage remains bounded during the dynamics , the magnitude of the error may rise . though we may be satisfied with the resulting approximation of the whole solution , it is worth sometimes to preserve a linear or quadratic function of the solution exactly ( see e.g. a remark in ) . in this paperwe address this issue for linear functions and the second norm of the solution by including the vectors , defining the invariants , into the amen enrichment scheme . in the next sectionwe formulate the ode problem , investigate its properties related to the first- and the second - order invariants , show the galerkin model reduction concept and how the invariants may be preserved in the reduced system , and suggest the spectral discretization in time .section [ sec : tensor ] gives a brief introduction to tensor product formats and methods , and finally , the new tamen algorithm ( the name is motivated by tdmrg ) is proposed and discussed .section 4 demonstrates supporting numerical examples , and section 5 contains concluding remarks .our central problem , considered in particular in the numerical examples , is the homogeneous linear system of odes , in section [ sec : spectral ] and in the final version of the algorithm , we will extend to the general quasi - linear form .analogously , the inhomogeneous case may be taken into account with a few technical changes .nevertheless , basic features may be illustrated already on the simple linear system , and we will keep it in focus in the first part of the paper . throughout the paper , and other quantities denoted by small letterswill be considered as vectors , such that the _ dot _ ( inner , scalar ) product may be written as .the time discretization relies on both the finite approximation of the time derivative and boundary conditions for the cauchy problem . a simple way to derive themis presented below .given ( not necessarily linear ) together with , we introduce a new variable , and obtain to discretize this equation , we use the chebyshev spectral differentiation scheme . the base _ chebyshev _ nodes on the interval ] we obtain , .since starts from , the point is excluded , in accordance with the zero dirichlet boundary condition .now , we represent any function in the form , where be the lagrange interpolation polynomial built on , i.e. 
.therefore , the time derivative can be approximated by the matrix - by - vector product , where ] , is analytically extensible to the complex ellipse with .then the error of the chebyshev derivative converges exponentially , if the ode solution is not smooth in time , more sophisticated hp - techniques may be required . in many cases , however , the chebyshev interpolation is preferable , since it allows to work with pointwise samples of functions instead of galerkin coefficients , and increases the sparsity of involved matrices .the chebyshev differentiation matrices can be also used for spatial variables in e.g. the fokker - planck equation , see . in many practical models ,the right - hand side of the ode system is _ quasi - linear _ , i.e. . in case of a mild non - linearity , the straightforward _iteration may exhibit a satisfactory convergence .given the initial vector , composed from the stacked samples at the chebyshev nodes in time , we write a counterpart of as the following linear system , where denotes the `` reversed '' kronecker product , ] . a tensor may come from a discretized multidimensional pde , for example : suppose a function is discretized by sampling at grid nodes , then the sampled values may be collected into a tensor .however , when we pose a linear system problem , or an ode , should be considered as a vector , cf . .we will denote the same data , re - arranged as a vector , by the _ multi - index _ operation stands for renumeration of the elements of .we use the rule consistent with the reversed kronecker product from the previous section : suppose {i_k=1}^{n_k}, ] , right - hand side , initial guesses for the solution and the residual in -dimensional tt formats ; initial state and detection vectors in -index tt formats ; truncation threshold and local accuracy gap .updated solution , residual in the tt formats .prepare ] with the central difference discretization scheme , where is the mesh step of the uniform grid , , .the pure convection is a notoriously fragile problem , since inaccurate discretizations may cause large spurious oscillations . in this test, we select a smooth initial state , and consider large grids , , such that the spatial part is properly resolved , and we may focus on the time integration scheme .we choose this model example for demonstration purposes _ deliberately_. it allows a comparison with a known analytical solution , and contains both types of invariants considered in the paper .moreover , fine grids still make the problem challenging , cf .large cpu times of the straightforward matrix exponentiation in table [ tab : conv_timeerror ] .the initial state is a rank-1 -dimensional tt tensor if we separate and .however , to achieve higher cost reduction , we employ the so - called qtt format : we choose , and decompose each index to the binary digits , after that , all tensors are reshaped to the new indexing and compressed into the -dimensional tt format , for example , the matrix in is exactly ( and constructively , see ) representable in the qtt format with the maximal tt rank , but does not possess an exact decomposition anymore , and the accuracy threshold plays a nontrivial role . and vs. time and accuracy.,title="fig : " ] and vs. time and accuracy.,title="fig : " ] .convection example .cpu times ( seconds ) and errors in different methods and parameters . 
[ cols="^,^,^,^,^,^,^ " , ] with and without additional enrichments .degeneracy of the normalization ( right ) is shown only for the test with enrichments.,title="fig : " ] with and without additional enrichments .degeneracy of the normalization ( right ) is shown only for the test with enrichments.,title="fig : " ] ( left ) and maximal errors in ( right),title="fig : " ] ( left ) and maximal errors in ( right),title="fig : " ] as the initial state , we choose the multinomial function according to , where , and is the heaviside function .though even infinite copy numbers are potentially allowed , the probability function vanishes in the limit . in practice, we have to deal with a finite problem , so we restrict the copy numbers to finite values . to ensure that the truncated part outside is negligible , we take .moreover , we adjust the propensities of generation reactions as follows : together with the natural condition for , we obtain the _ normalization conservation _ property , , where is a vector of all ones . therefore , our first constraint vector .besides , as one of statistical outputs , we may be interested in the _ mean copy numbers _ , computed as where are the all - ones vectors of size . to make the computations of more accurate, we also include in the enrichment set , which reads therefore . in fig .[ fig : cme - time - rank ] , we investigate the tt ranks of the solution and the cpu times of the calculations with the following parameters : the tensor truncation threshold , local residual gap , number of chebyshev points in time , the residual tt rank in tamen , and the time grid is exp - uniform in accordance with , , , such that for the step . to cope with large grid sizes ( ) ,we employ the qtt format , as in the first example .we remind that the crank - nicolson calculations in required about one hour on the same computer . from fig .[ fig : cme - time - rank ] we may observe that the straightforward tamen algorithm requires less time , but the enrichments make it larger . in fig .[ fig : cme - meanconc ] , we show the evolution of the mean copy numbers in time , and compare them with the reference values , computed with smaller tolerance .we may notice that the enrichments improve the accuracy significantly .we would like to emphasize that the artifacts in the left plane of fig .[ fig : cme - meanconc ] do not reflect explicitly the error in the solution , rather than in the means . recall that the maximal value of is .the exact solution would have a fast decay of the elements , which compensates large values of the index in .however , the approximate solution may conceal this decay by oscillations at the magnitude . taking into account , we may conclude that may be of the order of , as appears in fig .[ fig : cme - meanconc ] .the same consideration holds for in the end of the dynamics . nevertheless , if we keep in the tt format for exactly , the inner products in recover satisfactory accuracy . as in the previous example, the degeneracy of the normalization stays below in the enriched version of the algorithm ( see fig . [ fig : cme - time - rank ] ) . for the sake of clarity , we do not plot this quantity for the algorithm without enrichments , since it grows up to .we have proposed and studied the alternating iterative algorithm for approximate solution of ordinary differential equations in the mps / tt format .the method combines advances of dmrg techniques and classical iterative methods of linear algebra. 
started from the solution at the previous time interval as the initial guess , it often converges in 24 iterations , and delivers accurate solution even for strongly non - symmetric matrices in the right - hand side of an ode . another important ingredient is the spectral discretization scheme in time .the high - order approximation allows to simulate systems with purely imaginary spectrum without blowing the solution storage up , due to the absence of a poorly - separable noise , an unfortunate phenomenon in low - order schemes .the method possesses a simple mechanism how to bring linear conservation laws into the reduced tensor product model exactly , provided the generating vectors admit low - rank representations .the second norm of the solution can be also preserved easily .the numerical experiments reveal a promising potential of this method in long time simulations with the chemical master and similar equations .nevertheless , several further research directions open . the second norm conservation benefits from the orthogonality properties of the tensor format .is it possible to maintain general quadratic and high - order invariants ?we saw that accurate solution of the reduced systems in the tensor product scheme may be crucial for the robustness of the whole process . to what extent can we relax this demand ?are there reliable ways to precondition the local problems ?stiff problems may require either small time steps or large numbers of chebyshev points in time .are there ways to refine temporal grids adaptively inside the tensor format ?we are planning to address some of these questions in future work .another part of research will involve verification of the technique in a broad range of applications .recently , the amen algorithm for linear systems was employed in the simulation of a nuclear magnetic resonance experiment for large proteins .the tt formalism allows to consider the whole quantum hilbert space with a controllable accuracy an unprecedented flexibility in nmr calculations . in future , we plan to extend the proposed approach to more complicated time - dependent nmr problems . concerning the non - linear modeling , it is intriguing to revisit the simulations of plasma .
we propose an algorithm for solution of high - dimensional evolutionary equations ( odes and discretized time - dependent pdes ) in tensor product formats . the solution must admit an approximation in a low - rank separation of variables framework , and the right - hand side of the ode ( for example , a matrix ) must be computable in the same low - rank format at a given time point . the time derivative is discretized via the chebyshev spectral scheme , and the solution is sought simultaneously for all time points from the global space - time linear system . to compute the solution adaptively in the tensor format , we employ the alternating minimal energy algorithm , the dmrg - flavored alternating iterative technique . besides , we address the problem of maintaining system invariants inside the approximate tensor product scheme . we show how the conservation of a linear function , defined by a vector given in the low - rank format , or the second norm of the solution may be accurately and elegantly incorporated into the tensor product method . we present a couple of numerical experiments with the transport problem and the chemical master equation , and confirm the main beneficial properties of the new approach : conservation of invariants up to the machine precision , and robustness in long evolution against the spurious inflation of the tensor format storage . _ keywords : _ high dimensional problems , tensor train format , mps , als , dmrg , ode , conservation laws , dynamical systems . _ msc2010 : _ 15a69 , 33f05 , 65f10 , 65l05 , 65m70 , 34c14 .
arsenic is recognized as a dangerous pollutant of the environment .arsenic present in groundwaters may be trapped in the solid phase of minerals like calcite or gypsum , either by adsorption or by co - precipitation . when a contaminant is incorporated in the bulk rather than simply adsorbed at the surface , it is less available and it can be considered `` immobilized '' in the environment at least until the host phase dissolution .the aim of this study is to elucidate whether the incorporation of as(iii ) and as(v ) into the bulk of calcite and gypsum , respectively , occurs or not , and to what extent .surface - sensitive x - ray standing wave ( xsw ) studies by cheng _et al_. show that the as atom replaces the c atom in the carbonate molecules of calcite .the geometry of the carbonate group is not preserved , showing a displacement of the as atom of 0.76 in the $ ] direction .density functional theory ( dft ) based simulations have proved that this replacement drives to a similar displacement of the as atom when c atoms are replaced by as atoms in the bulk of calcite ( see below ) .this fact leads us to keep the same hypothesis of arsenite / carbonate replacement in our study of as incorporation into the bulk .the stability diagram of aqueous solutions of h shows that the arsenate ( aso ) is the most stable specie under oxidizing conditions .this anion is a tetrahedron with the as atom at the centre , surrounded by four o atoms .the fact that both , arsenate and sulphate groups have the same geometry supports the hypothesis of a possible replacement of sulphate by arsenate groups when gypsum is precipitated in presence of as(v ) .the charge is compensated by bonding to an extra proton .in order to test the possible mechanisms for as immobilization by calcite and gypsum , samples of both minerals were synthesized in the presence of as(iii ) and as(v ) , respectively .calcite precipitation was conducted at ph = 7.5 by addition of cacl and na solutions .gypsum was precipitated from na and cacl solutions at three different ph values : 4 , 7.5 and 9 .arsenic concentrations incorporated in the solids range between 30 mm / kg and 1200 mm / kg for calcite and 100 mm / kg to 1000 mm / kg for gypsum .powder samples were analysed by neutron diffraction at the high flux powder diffractometer d20 ( ill ) .experiments were carried out using a cu(200 ) monochromator which gives a wavelength of and at ambient conditions of pressure and temperature .diffraction patterns were taken for the samples in their container and for the empty cell , in the range of 10 to 130 . 
also powder diffraction experiments were performed with x - ray at id11 ( esrf ) , reproducing the same experimental conditions but using a wavelength of .both diffraction data sets were analysed using fullprof .geometrical optimisations of the unit cell and the supercells of pure and as - doped calcite and gypsum were done with the vienna ab - initio simulation package ( vasp ) .the pbe functional and paw pseudopotentials were used .the goal was to reproduce the expansion of the unit cell produced by the incorporation of as atoms into the structure of both minerals .unit cells of pure calcite and gypsum obtained from rietveld refinements were used as starting point for all the models .geometrical optimizations of single unit cells and of supercells of gypsum were done replacing the sulphate molecules so by arsenates aso .similar simulations were performed with supercells of calcite replacing c by as atoms .exafs data were collected on a diffraction and absorption beamline ( gilda - bm8 ) at the esrf of grenoble and extracted using standard procedures .the theoretical photoelectron paths were generated using the feff8 code and the fit performed using the minuit library from cern .a monochromator of si(311 ) was used to set the incident energy at the k - edge of as ( 11867 ev ) .our diffraction data show an expansion of the unit cell due to as incorporation into calcite crystallites ( fig .1 ) . by modelling , as concentration in the bulk of the samples can be extrapolated by comparison of the relative volume changes between the experimental and simulated data .calcite unit cell and two supercells ( 2x2x1 and 3x2x1 ) were geometrically optimised replacing one and two units of aso ( 150 mm / kg and 290 mm / kg ) by co units .the simulations showed a volume expansion linearly dependent on the replacement of as , as expected by vegard s law ( fig .this augmentation is due to the lattice expansion along the axis as as concentration in solids increases .the observed as ion displacement of 0.57 over the o base along the is compatible with cheng s results . the lower value of this displacement ( 0.57 _vs_. 
0.76 ) can be due to the fact that atoms near the surface are less attached to the solid and can move more freely .the experimental value of the unit cell volume was interpolated using the linear fit of the simulated volume expansion , giving values of 9 , 10 and 16 mm / kg of as in calcite for the three synthesised samples .simulations of 3x2x1 supercells with one co unit replaced by one aso unit were done to check whether the replacement is more likely in sites at the same or at different crystallographic planes .the higher enthalpy of formation ( mev ) for the calcite structure with two as atoms lying on the same plane shows that replacement is more likely to happen in different planes , leading to a more stable structure .we found from exafs data analysis a nearest neighbour distance of corresponding to the as - o bond distance .this value lies in between the reported one for the arsenite molecule ( ) and that obtained from the simulations ( ) .the coordination number was kept fixed to its theoretical value ( ) in order to reduce the correlation between free parameters in the fitting procedure .this result supports the hypothesis of the incorporation of the as atoms into the c crystallographic sites .figure 1 shows the gypsum unit cell volume obtained from combined refinements of neutron and x - ray data .the expansion of the unit cell is proportional to the as concentration in solids and strongly dependent on the ph value : the biggest expansion is found in samples synthesized at ph 9 .this result is in good agreement with the hypothesis of replacement of sulphate ( so ) by arsenate groups ( aso ) .this replacement is more likely at higher ph values , according to available thermodynamical data regarding speciation of as .the expansion of the unit cell parameters is due to the different lengths for as - o ( ) and s - o ( ) bonds .simulations show an increasing of the unit cell volume proportional to the number of atoms of s replaced by as ( fig .four single cells ( with 940 , 1809 , 3357 and 4696 mm / kg of as ) and four supercells were simulated : two 2x1x2 supercells with one and two as atoms , a 2x1x3 and a 3x1x3 with one as atom each , giving as concentrations of 358 , 705 , 240 and 160 mm / kg , respectively . the simulations allow us to extrapolate the as concentration in the bulk of the samples by comparing the relative volume variations between the experimental and simulated data ( table 1 ) .our results support the hypothesis of as immobilisation by incorporation into the bulk of these minerals .this improves the knowledge on the long term stability of contaminated sludges and it has important consequences for site remediation actions .this work is an example of a direct link between fundamental research and environmental issues .the understanding of the as compounds behaviour in sedimentary environments is essential to estimate and predict possible consequences of forecast or accidental events .j. nriagu ( 1984 ) . _ arsenic in the environment_. wiley interscience . l. cheng _ et al_. , geochim . cosmochim .acta * 63 * ( 1999 ) 3153 - 3157 . m. pourbaix ( 1974 ) ._ atlas of electrochemical equilibria in aqueous solutions_. pergamon press . g. romn - ross _ et al_. , geochim . cosmochim . acta ( 2005 )( _ submitted _ ) .j. rodrguez - carvajal ( 1990 ) ._ collected abstracts of powder diffraction meeting_. ed . by j. galy .toulouse , france .g. kresse , j. furthmller , software vasp , vienna ( 1999 ) ; g. kresse , phys .b * 54 * 11 ( 1996 ) 169 .p. a. lee _et al_. 
, Rev. Mod. Phys. 53 (1981) 769. A. I. Ankudinov et al., Phys. Rev. B 58 (1998) 7565. F. James, CERN Program Library 506 (1994). A. Loewenschuss, Y. Marcus, J. Phys. Chem. Ref. Data 25 (1996) 1495-1507. S. M. Loureiro et al., J. Sol. Stat. Chem. 121 (1996) 66-73. U. Kolitsch, P. Bartu, Acta Cryst. C 60 (2004) i94-i96.
Uptake of contaminants by solid phases is relevant to many issues in environmental science, as this process can remove them from solution and retard their transport into the hydrosphere. Here we report on two structural studies performed on As-doped gypsum (CaSO4·2H2O) and calcite (CaCO3), using neutron (D20-ILL) and x-ray (ID11-ESRF) diffraction data and EXAFS (BM8-ESRF). The aim of this study is to determine whether As enters the bulk of the gypsum and calcite structures or is simply adsorbed on the surface. Different substitution mechanisms are used as hypotheses. The combined Rietveld analysis of neutron and x-ray diffraction data shows an expansion of the unit cell volume proportional to the As concentration within the samples. DFT-based simulations confirm that the unit cell volume increases in proportion to the amount of carbonate or sulphate groups substituted. Interpolation of the experimental Rietveld data allows us to distinguish As substituted within the structure from As adsorbed on the surface of both minerals. Results obtained from EXAFS analysis of calcite samples agree well with the hypothesis that As replaces C at its crystallographic site. _Keywords:_ arsenic, minerals, simulation, diffraction, EXAFS.
solar cycle 24 was predicted to begin in 2008 march ( 6 months ) , and peak in late 2011 or mid-2012 , with a cycle length of 11.75 years .so , the recent paucity of sunspots and the delay in the expected start of solar cycle 24 were unexpected , even though it is well known that solar cycles are challenging to forecast . since traditional models based on sunspot data require information about the starting and rise times , andalso the shape and amplitude of the cycle , the fine details of a given solar cycle can be predicted accurately only after a cycle has begun ( _ e.g. _ , ) .many of these models analyze a large number of previous cycles in order to predict the pattern for the new cycle .in contrast , the technique of helioseismology does not depend on sunspot data and has been used to predict activity two cycles into the future ; this method was used by to predict that sunspots will cover a larger area of the sun during cycle 24 than in previous cycles , and that the cycle will reach its peak about 2012 , one year later than forecast by alternative methods based on sunspot data ( _ e.g. _ , ) .the measurements of the length of the sunspot cycle show that the cycle varies typically between 10 and 12 years .moreover , these variations in the cycle length have been associated with changes in the global climate .in addition , the maunder minimum illustrates a connection between a paucity of sunspots and cooler than average temperatures on earth .the length of the sunspot cycle was first measured by heinrich schwabe in 1843 when he identified a 10-year periodicity in the pattern of sunspots from a 17-year study conducted between 1826 and 1843 . in 1848 ,rudolph wolf introduced the relative sunspot number , r , organized a program of daily observations of sunspots , and reanalyzed all earlier data to find that the average length of a solar cycle was about 11 yrs . 
for more than two centuries , solar physicists applied a variety of techniques to determine the nature of the solar cycle .the earliest methods involved counting sunspot numbers and determining durations of cyclic activity from sunspot minimum to minimum using the `` smoothed monthly mean sunspot number '' .the `` group sunspot number '' introduced by is another well - documented data set and provides comparable results to those derived from relative sunspot numbers .in addition , sunspot area measurements since 1874 describe the total surface area of the solar disk covered by sunspots at a given time .the analysis of sunspot numbers or sunspot areas is often referred to as a one - dimensional approach because there is only one independent variable , namely sunspot numbers or areas .recently , introduced a new parameter called the `` sunspot unit area '' in an effort to combine the information about the sunspot numbers and sunspot areas to derive the length of the cycle .there is also a two - dimensional approach in which the latitude of an observed sunspot is introduced as a second independent variable .when sunspots first appear on the solar surface they tend to originate at latitudes around 40 degrees and migrate toward the solar equator .when such migrant activity is taken into account it can be shown that there is an overlap between successive cycles , since a new cycle begins while its predecessor is still decaying .this overlap became obvious when published his butterfly diagram and demonstrated the latitude drift of sunspots throughout the cycles .maunder s butterfly diagram showed that although the length of time between sunspot minima is on average 11 years , successive cycles actually overlap by 1 to 2 years .in addition , found that there were distinct solar cycles lasting 10 years as well as cycles lasting 12 years .this type of behavior suggests that there could be a periodic pattern in the length of the sunspot cycle .a summary of analyses of the sunspot cycle is found in and a more recent review of the long - term variability is given by .sunspot number data collected prior to the 1700 s show epochs in which almost no sunspots were visible on the solar surface .one such epoch , known as the maunder minimum , occurred between the years 1642 and 1705 , during which the number of sunspots recorded was very low in comparison to later epochs .geophysical data and tree - ring radiocarbon data , which contain residual traces of solar activity , were used to examine whether the maunder period truly had a lower number of sunspots or whether it was simply a period in which little data had been collected or large degrees of errors existed .these studies showed that the timing of the maunder minimum was fairly accurate because of the high quality of sunspot data during that period , including sunspot drawings , and the dates are strongly correlated with geophysical data .other epochs of significantly reduced solar activity include the oort minimum from 1010 - 1050 , the wolf minimum from 1280 - 1340 , the sprer minimum from 1420 - 1530 , and the dalton minimum from 1790 - 1820 .these minima have been derived from historical sunspot records , auroral histories , and physical models which link the solar cycle to dendrochronologically - dated radiocarbon concentrations .our interest in predicting flaring activity cycles on cool stars ( _ e.g. _ , ) led us to investigate the long - term behavior of the solar cycle since solar flares display a typical average 11-year cycle like sunspots . 
in this paper ,the preliminary results of which were published in and , we investigate the variations in the length of the sunspot number cycle and examine whether the variability can be explained in terms of a secular pattern .our analysis can serve as a tutorial .we apply classical one - dimensional techniques to recalculate the periodicities of solar activity using the sunspot number and area data to provide internal consistency in our analysis of the long - term behavior .these results are then used as a basis in the subsequent study of the sun s long - term behavior . in 2 we discuss the source of the data ; in 3 we describe the derivation of the cycle from sunspot numbers and sunspot areas using two independent techniques ; in 4 we examine the variability in the cycle length based on the times of cycle minima and maxima using two independent techniques ; and in 5 we discuss the results .lll spot number & daily & 1818 jan 8 - 2005 jan 31 + & monthly & 1749 jan - 2005 jan + & yearly & 1700 - 2004 + spot area & daily & 1874 may 9 - 2005 feb 28 + & monthly & 1874 may - 2005 feb + & yearly & 1874 - 2004 +the sunspot data used in this work were collected from archival sources that catalog sunspot numbers and sunspot areas , as well as the measured length of the sunspot cycle .the sunspot number data , covering the years from 1700 - 2005 , were archived by the national geophysical data center ( ngdc ) .these data are listed in individual sets of daily , monthly , and yearly numbers .the relative sunspot number , r , is defined as r = k ( 10 g + s ) , where g is the number of sunspot groups , s is the total number of distinct spots , and the scale factor k ( usually less than unity ) depends on the observer and is `` intended to effect the conversion to the scale originated by wolf '' .the scale factor was 1 for the original wolf sunspot number calculation .the spot number data sets are tabulated in table 1 and plotted in figure [ f1 ] .cccc 1610.8 & 1615.5 & 8.2 & 10.5 + 1619.0 & 1626.0 & 15.0 & 13.5 + 1634.0 & 1639.5 & 11.0 & 9.5 + 1645.0 & 1649.0 & 10.0 & 11.0 + 1655.0 & 1660.0 & 11.0 & 15.0 + 1666.0 & 1675.0 & 13.5 & 10.0 + 1679.5 & 1685.0 & 10.0 & 8.0 + 1689.0 & 1693.0 & 8.5 & 12.5 + 1698.0 & 1705.5 & 14.0 & 12.7 + 1712.0 & 1718.2 & 11.5 & 9.3 + 1723.5 & 1727.5 & 10.5 & 11.2 + 1734.0 & 1738.7 & 11.0 & 11.6 + 1745.0 & 1750.3 & 10.2 & 11.2 + 1755.2 & 1761.5 & 11.3 & 8.2 + 1766.5 & 1769.7 & 9.0 & 8.7 + 1775.5 & 1778.4 & 9.2 & 9.7 + 1784.7 & 1788.1 & 13.6 & 17.1 + 1798.3 & 1805.2 & 12.3 & 11.2 + 1810.6 & 1816.4 & 12.7 & 13.5 + 1823.3 & 1829.9 & 10.6 & 7.3 + 1833.9 & 1837.2 & 9.6 & 10.9 + 1843.5 & 1848.1 & 12.5 & 12.0 + 1856.0 & 1860.1 & 11.2 & 10.5 + 1867.2 & 1870.6 & 11.7 & 13.3 + 1878.9 & 1883.9 & 10.7 & 10.2 + 1889.6 & 1894.1 & 12.1 & 12.9 + 1901.7 & 1907.0 & 11.9 & 10.6 + 1913.6 & 1917.6 & 10.0 & 10.8 + 1923.6 & 1928.4 & 10.2 & 9.0 + 1933.8 & 1937.4 & 10.4 & 10.1 + 1944.2 & 1947.5 & 10.1 & 10.4 + 1954.3 & 1957.9 & 10.6 & 11.0 + 1964.9 & 1968.9 & 11.6 & 11.0 + 1976.5 & 1979.9 & 10.3 & 9.7 + 1986.8 & 1989.6 & 9.7 & 10.7 + 1996.5 & 2000.3 & & + average & & 11.0.5 & 11.0.0 the sunspot area data , beginning on 1874 may 9 , were compiled by the royal greenwich observatory from a small network of observatories . 
in 1976 ,the united states air force began compiling its own database from its solar optical observing network ( soon ) and the work continued with the help of the national oceanic and atmospheric administration ( noaa ) .the nasa compilation of these separate data sets lists sunspot area as the total whole spot area in millionths of solar hemispheres .we have analyzed the compiled daily sunspot areas as well as their monthly and yearly sums .the sunspot area data sets were tabulated in table 1 and plotted in figure [ f2 ]. there may be subtle differences between the two data sets since the sunspot number and area data were collected in different ways and by different groups , but these differences should reveal themselves when the data are analyzed .the sunspot number cycle data from years 1610 to 2000 are shown in table 2 .this table displays the dates of cycle minima and maxima as well as the cycle lengths calculated from those minima and maxima .the first three columns of this table were taken from the ngdc , and we calculated the fourth column from the dates of cycle maxima .these sunspot cycle data are discussed further in 4 .the sunspot number and sunspot area data were analyzed to provide a basis for the analysis of the long - term behavior of the sun .we used the same techniques that were used by in their study of radio flaring cycles of magnetically active close binary star systems .two independent methods were used to determine the solar activity cycles . in the first method, we analyzed the power spectrum obtained by calculating the fast fourier transform ( fft ) of the data .the fourier transform of a function is described by for frequency , , and time , .this transform becomes a function at frequencies that correspond to true periodicities in the data , and subsequently the power spectrum will have a sharp peak at those frequencies . the lomb - scargle periodogram analysis for unevenly spaced datawas used . in the second method , called the phase dispersion minimization ( pdm ) technique , a test period was chosen and checked to determine if it corresponded to a true periodicity in the data .the goodness of fit parameter , , approaches zero when the test period is close to a true periodicity .pdm produces better results than the fft in the case of non - sinusoidal data .the goodness of fit between a test period and a true period , is given by the statistic , where , the data are divided into groups or samples , is the variance of m samples within the data set , is a data element ( ) , is the mean of the data , is the number of total data points , is number of data points contained in the sample , and is the variance of the sample . if , then and . 
however , if , then 0 ( or a local minimum ) .all solutions from the two techniques were checked for numerical relationships with ( i ) the highest frequency of the data ( corresponding to the data sampling interval ) ; ( ii ) the lowest frequency of the data , ( corresponding to the duration or time interval spanned by the data ) ; ( iii ) the nyquist frequency , ; and in the case of pdm solutions ( iv ) the maximum test period assumed .a maximum test period of 260 years was chosen for all data sets , except in the case of the more extensive yearly sunspot number data when a maximum of 350 years was assumed .we chose the same maximum test period for the sunspot area analysis for consistency with the sunspot number analysis , even though these test periods are longer than the duration of the area data .the results from the fft and pdm analyses of sunspot number and sunspot area data are illustrated in figures [ f3 ] and [ f4 ] , corresponding to the daily , monthly , and yearly sunspot numbers and the daily , monthly , and yearly sunspot areas , respectively . in these figures ,the top frame shows the power spectrum derived from the fft analysis , while the bottom frame shows the -statistic obtained from the pdm analysis .we specifically used two independent techniques so that we could test for consistency and determine the common patterns evident in the data .the fact that the two techniques produced similar results shows that the assumptions made in these techniques have minimal influence on the results .as expected , our results confirmed the work done by earlier studies .the sunspot cycles derived from these results are summarized in table 3 .the most significant periodicities corresponding to the 50 highest powers and the 50 lowest values suggest that the solar cycle derived from sunspot numbers is 10.95 0.60 years , while the value derived from sunspot area is 10.65 0.40 years . the average sunspot cycle from both the number and area data is 10.80 0.50 years . the strongest peaks in figures [ f3 ] and [ f4 ] correspond to this dominant average periodicity over a range from years up to years .a weaker periodicity was also identified from the pdm analysis with an average period of 21.90 0.66 years over a range from 24 years .llcc sunspot number & daily & 10.85 .60 & 10.86 .27 + & monthly & 11.01 .68 & 11.02 .68 + & yearly & 10.95 .72 & 11.01 .64 + average ( number ) & & & 10.95 .60 + sunspot area & daily & 10.67 .44 & 10.67 .42 + & monthly & 10.67 .39 & 10.66 .39 + & yearly & 10.62 .39 & 10.62 .36 + average ( area ) & & & 10.65 .40 + average ( all data ) & & & 10.80 .50 the errors for the fft and pdm analyses were derived by measuring the full width at half maximum ( fwhm ) of each dominant peak for each data set .the error is then defined by .the three averages given in table 3 were determined by averaging the dominant solutions from the fft and pdm analyses for each data set .the errors in the averages were determined using standard techniques . 
while the errors for the sunspot area results are smaller than those for the spot numbers , the area data are actually less accurate than the sunspot number data because the measurement error in the areas may be as high as 30% .the higher errors for the area data are related to the difficulty in determining a precise spot boundary .longer periodicities that could not be eliminated because of relationships with the duration of the data set or other frequencies related to the data ( as described in 3.1 ) were also identified with durations ranging from 260 years ( figures [ f3 ] and [ f4 ] ) .these long - term periodicities are discussed further in the following section .the previous analysis of sunspot data provided some evidence of long term cycles in the data .this secular behavior was studied in greater detail through an analysis of the dates of sunspot minima and maxima from 1610 to 2000 , as shown in table 2 .since there have been concerns about the difficulty in deriving the exact times of sunspot minima , and the even greater complexity in the determination of the maxima , we derived our results using the cycle minima and maxima separately .the sunspot cycle lengths were calculated in two ways : ( i ) from the dates of successive cycle minima provided by the ngdc , and ( ii ) from our calculations of cycle lengths derived from the maxima .these cycle lengths are tabulated in table 2 and plotted in figure [ f5 ] .the data in figure 5 show substantial variability over time .the cycle lengths derived from the dates of sunspot minima and maxima were analyzed to search for periodicities in the cycle length using two techniques : ( i ) a median trace analysis and ( ii ) a power spectrum analysis of the ` observed minus calculated ' or ( o - c ) residuals .median trace analyses have been used to identify hidden trends in scatter plots which , at first glance , display no obvious pattern ( e.g. , ) .these analyses have also been applied to astronomical data ( e.g. 
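As a concrete illustration of the period-search methods used in this section, a minimal Python sketch of the phase dispersion minimization statistic is given below (the periodogram side can be computed with, e.g., scipy.signal.lombscargle). This is the usual phase-folding variant with equal phase bins; the bin count and the data arrays are placeholders, not the actual analysis settings.

```python
import numpy as np

def pdm_theta(t, x, trial_period, n_bins=10):
    """Theta statistic: pooled within-bin variance over total variance.

    The series is folded on the trial period and binned in phase; Theta stays
    near 1 for a spurious period and drops toward a local minimum near a true
    periodicity, as described in the text.
    """
    phase = (t / trial_period) % 1.0
    sigma2 = np.var(x, ddof=1)                 # overall sample variance
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    num = den = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        xb = x[(phase >= lo) & (phase < hi)]
        if xb.size > 1:
            num += (xb.size - 1) * np.var(xb, ddof=1)   # (n_j - 1) * s_j^2
            den += xb.size - 1
    return (num / den) / sigma2 if den > 0 else np.nan

# example scan over trial periods (years) for a yearly series t, x:
# periods = np.arange(8.0, 15.0, 0.01)
# theta = np.array([pdm_theta(t, x, p) for p in periods])
# best = periods[np.nanargmin(theta)]
```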
, ) .the method of median trace analysis is applicable to any scatter plot , irrespective of how measurements were obtained , and is one of a general class of smoothing methods designed to identify trends in scatter plots .a median trace is a plot of the median value of the data contained within a bin of a chosen width , for all bins in the data set .a median trace analysis depends on the choice of an optimal interval width ( oiw ) .these oiws , , were calculated using three statistical methods applied routinely to estimate the statistical density function of the data .the first method defines the oiw as where is the number of data points and , a statistically robust measure of the standard deviation of the data called the mean absolute deviation from the sample median , is defined as where is the sample median .the second method defines the oiw as a third definition of the oiw is given by where is the interquartile range of the data set .optimal bin widths were determined for three data sets corresponding to the cycle lengths derived from the ( i ) cycle minima , ( ii ) cycle maxima , and ( iii ) the combined minima and maxima data .table 4 lists the solutions for the optimal interval widths ( ) for each data set .since the values of the optimal bin widths ranged from 120 years , we tested the impact of different bin widths on our results .this procedure was limited by the fact that only 35 sunspot number cycles have elapsed since 1610 ( see table 2 ) .the data set can be increased to 70 points if we analyze the combined values of the length of the solar cycle derived from both the sunspot minima and the sunspot maxima . using our derived oiws as a basis for our analysis, we calculated median traces for bin widths of 40 , 50 , 60 , 70 , 80 , and 90 years .these are illustrated in figure [ f6 ] .the lower bin widths were included to make maximum use of the limited number of data points , and the higher bin widths were excluded because , once binned , there would be too few data points to make those analyses meaningful .lccccc cycle minima & 35 & 97.4 & 103.9 & 75.4 & 116.6 + cycle maxima & 35 & 97.0 & 103.4 & 75.1 & 115.4 + combined & 70 & 97.3 & 82.4 & 63.5 & 91.8 figure [ f6 ] shows the binned data ( median values ) and the sinusoidal fits to the binned data .the least absolute error method was used to produce the sinusoidal fits to the median trace in each frame of the figure .these sinusoidal fits illustrate the long - term cyclic behavior in the length of the sunspot number cycle .the optimal solution was determined by identifying the fits that satisfied two criteria : ( 1 ) the cycle periods deduced from the three data sets should be nearly the same , and ( 2 ) the cyclic patterns should be in phase for the three data sets .table 5 lists the derived cycle periods for all three data sets : the ( a ) cycle minima , ( b ) cycle maxima , and ( c ) combined minima and maxima data .the lengths of the sunspot number cycles tabulated by the national geophysical data center ( table 2 & figure [ f5 ] ) show that the basic sunspot number cycle is an average of ( 11.0 1.5 ) years based on the cycle minima and ( 11.0 2.0 ) years based on the cycle maxima .this schwabe cycle varies over a range from 8 to 15 years if the cycle lengths are derived from the time between successive minima , while the range increases to 7 to 17 years if the cycle lengths are derived from successive maxima .these variations may be significant even though the data in figure [ f5 ] show _ heteroskedasticity _ , i.e. 
, variability in the standard deviation of the data over time .although the range in sunspot cycle durations is large , the cycle length converged to a mean of 11 years , especially after 1818 as the accuracy of the data became more reliable .in particular , the sunspot number cycle lengths from 1610 - 1750 had a high variance while the cycle durations since 1818 show a much smaller variance ( figure [ f5 ] ) because the data quality was poor in the 18th and early 19th century .this variance may be influenced by the difficulty in identifying the dates of cycle minima and maxima whenever the sunspot activity is relatively low .even after the data became more accurate there was still a significant 1.5-year range about the 11-year mean .the range in the length of this cycle suggests that there may be a hidden longer - term variability in the schwabe cycle .our median trace analysis of the lengths of the sunspot number cycle uncovered a long - term cycle with a duration between 146 and 419 years ( table 5 ) , if the data are binned in groups of 40 to 90 years ( see 4.1 ) .since the median trace analysis is influenced by the bin size of the data , we determined the optimal bin width based on the goodness of fit between the median trace and the corresponding sinusoidal fit ( see figure [ f6 ] ) .based on the sunspot minima ( the best data set ) , the cycle length was 185 years for the 50-yr bin width , 243 years for the 60-yr bin , 222 years for the 70-yr bin , 393 years for the 80-year bin , and 299 years for the 90-yr bin ; so we found no direct relationship between the bin size and the resulting periodicity .figure [ f6 ] also shows the median traces for the data and illustrates that the optimal bin width is in the range of 50 - 60 years because it is only in these two cases that the sinusoidal fits are in phase and the derived periods are approximately equal for all three data sets .the 50-year median trace predicts a 183-year sunspot number cycle , while the 60-year trace predicts a 243-year cycle .since the observations span years , there is greater confidence in the 183-year cycle than in the longer one because at least two cycles have elapsed since 1610 .similar long - term cycles ranging from 169 to 189 years have been proposed for several decades .ccccc 40 & 157 & 165 & 146 & 156 10 + 50 & 185 & 182 & 182 & 183 2 + 60 & 243 & 243 & 243 & 243 + 70 & 222 & 273 & 304 & 266 41 + 80 & 393 & 349 & 419 & 387 35 + 90 & 299 & 299 & 209 & 269 52 the median trace analysis gives us a rough estimate of the long - term sunspot cycle .however , an alternative method to derive this secular period is to calculate the power spectrum of the ( o - c ) variation of the dates corresponding to the ( i ) cycle minima , ( ii ) cycle maxima , and ( iii ) the combined minima and maxima .the following procedure was used to calculate the ( o - c ) residuals for each of the data sets given above , based only on the dates of minima and maxima listed in table 2 .first , we defined the cycle number , , to be , where are the individual dates of the extrema , and is the start date for each data set . 
here , l is the average cycle length ( 10.95 years ) derived independently by the fft and pdm analyses from the sunspot number data ( [ fftpdmresults ] ) .the ( o - c ) residuals were defined to be where , is the integer part of and represents the whole number of cycles that have elapsed since the start date .the resulting ( o - c ) pattern was normalized by subtracting the linear trend in the data .this trend was found by fitting a least squares line to the ( o - c ) data .the normalized ( o - c ) data are shown in figure [ f7 ] along with the corresponding power spectra .the power spectra of the ( o - c ) data in figure [ f7 ] show that the long term variation in the sunspot number cycle has a dominant period of years .the gleissberg cycle was also identified in this analysis , with a period of years .the solutions for these analyses are illustrated in figure [ f7 ] and tabulated in table 6 .the errors were calculated from the fwhm of the power spectrum peaks , as described in 3.2 .the sinusoidal fit to the ( o - c ) data in figure [ f7 ] corresponds to the dominant periodicity of 188 years identified in the power spectra .another cycle with a period of years was also found .lcc cycle minima & 86.8 8.8 & 188 40 + cycle maxima & 86.3 18.1 & 187 37 + combined & 86.8 10.7 & 188 38 + average & 86.6 12.5 & 188 38our study of the length of the sunspot cycle suggests that the cycle length should be taken into consideration when predicting the start of a new solar cycle .the variability in the length of the sunspot cycle was examined through a study of archival sunspot data from 1610 2005 . in the preliminary stage of our study , we analyzed archival data of sunspot numbers from 1700 - 2005 and sunspot areas from 1874 - 2005 using power spectrum analysis and phase dispersion minimization .this analysis showed that the schwabe cycle has a duration of ( 10.80 0.50 ) years ( table 3 ) and that this cycle typically ranges from 12 years even though the entire range is from 17 years . 
based on our results ,we have found evidence to show that ( 1 ) the variability in the length of the solar cycle is statistically significant .in addition , we predict that ( 2 ) the length of successive solar cycles will increase , on average , over the next 75 years ; and ( 3 ) the strength of the sunspot cycle should eventually reach a minimum somewhere between cycle 24 and cycle 31 , and we make no claims about any specific cycle .the focus of our study was to investigate whether there is a secular pattern in the range of values for the schwabe cycle length .we used our derived value for the schwabe cycle from table 3 to examine the long - term behavior of the cycle .this analysis was based on ngdc data from 16102000 , a period of 386 years ( using sunspot minima ) or 385 years ( using sunspot maxima ) .the long - term cycles were identified using median trace analyses of the length of the cycle and also from power spectrum analyses of the ( o - c ) residuals of the dates of sunspot minima and maxima .we used independent approaches because of the inherent uncertainties in deriving the exact times of minima and the even greater complexity in the determination of sunspot maxima .moreover , we derived our results from both the cycle minima and the cycle maxima .the fact that we found similar results from the two data sets suggests that the methods used to determine these cycles ( ngdc data ) did not have any significant impact on our results .the median trace analysis of the length of the sunspot number cycle provided secular periodicities of 183 243 years .this range overlaps with the long - term cycles of 260 years which were identified directly from the fft and pdm analyses of the sunspot number and area data ( figures [ f3 ] and [ f4 ] ) .the power spectrum analysis of the ( o - c ) residuals of the dates of minima and maxima provided much clearer evidence of dominant cycles with periods of years , years , and years .these results are significant because at least two long - term cycles have transpired over the -year duration of the data set .the derived long - term cycles were compared in figure [ f8 ] with documented epochs of significant declines in sunspot activity , like the oort , wolf , sprer , maunder , and dalton minima . in this figure ,the modern sunspot number data were combined with earlier data from 1610 - 1715 and with reconstructed ( ancient ) data spanning the past 11,000 years .these reconstructed sunspot numbers were based on dendrochronologically - dated radiocarbon concentrations which were derived from models connecting the radiocarbon concentration with sunspot number .the reconstructed sunspot numbers are consistent with the occurrences of the historical minima ( e.g. , maunder minimum ) . found that over the past 70 years , the level of solar activity has been exceptionally strong .our 188-year periodicity is similar to the 205-year de vries - seuss cycle which has been identified from studies of the carbon-14 record derived from tree rings ( _ e.g. 
_ , ) .figure [ f8 ] compares the historical and modern sunspot numbers with the derived secular cycles of length ( a ) 183 years ( 4.2 ) , ( b ) 243-years ( 4.2 ) , and ( c ) 188 years ( 4.4 ) .the first two periodicities were derived from the median trace analysis , while the third one was derived from the power spectrum analysis of the sunspot number cycle ( o - c ) residuals .the fits for the 183-year periodicity all had the same amplitude , but were moderately out of phase with each other , while the fits for the 243-year periodicity were in phase for all data sets , albeit with different amplitudes .an examination of frames ( a ) and ( c ) of figure [ f8 ] reveals that the cycle lengths increased during each of the wolf , sprer , maunder , and dalton minima for the 183-year and 188-year cycles . on the other hand , frame ( b )shows no similar correspondence between the cycle length and the times of historic minima for the 243-year cycle .therefore , the 183- and 188-year cycles appear to be more consistent with the sunspot number data than the 243-year cycle .all four historic minima since 1200 occurred during the rising portion of the 183- and 188-year cycles when the length of the sunspot cycle was increasing . according to our analysis , the length of the sunspot cycle was growing during the maunder minimum when almost no sunspots were visible .given this pattern of behavior , the next historic minimum should occur during the time when the length of the sunspot cycle is increasing ( see fig .[ f8 ] ) .the existence of long - term solar cycles with periods between 90 and 200 years is not new to the literature but the nature of these cycles is still not fully understood .our study of the length of the sunspot cycle shows that there is a dominant periodicity of 188 years related to the basic schwabe cycle and weaker periodicities of and 87 years .this 188-year period , determined over a baseline of 385 years that spans more than two cycles of the long - term periodicity , should be compared with schwabe s 10-year period that was derived from 17 years ( i.e. , less than two cycles ) of observations .our study also suggests that the length of the sunspot number cycle should increase gradually , on average , over the next years , accompanied by a gradual decrease in the number of sunspots .this information should be considered in cycle prediction models ( _ e.g. _ , ) to provide better estimates of the starting time of a given cycle .we thank k. s. balasubramaniam for his comments on the manuscript , a. retter for his comments on the research and for his advice on the ( o - c ) analysis , and d. heckman for advice on the data analysis .the supermongo plotting program was used in this research .this work was partially supported by national science foundation grants ast-0074586 and dms-0705210 .coffey , h. e. , and erwin , e. 2004 , national geophysical data center , noaa ( ftp://ftp.ngdc.noaa.gov/stp/ solardata / sunspotnumbers ; www.ngdc.noaa .gov/ stp / solar / ftpsunspotnumber.html#international ; www.ngdc.noaa.gov/stp/solar/ftpsunspotregions.html ) joselyn , j. a. , anderson , j. , coffey , h. , harvey , k. , hathaway , d. , heckman , g. , hildner , e. , mende , w. , schatten , k. , thompson , r. , thomson , a. w. p. , & white , o. r. 1996 , solar cycle 23 project : summary of panel findings ( http://www.sec.noaa.gov/info/cycle23.html )
the recent paucity of sunspots and the delay in the expected start of solar cycle 24 have drawn attention to the challenges involved in predicting solar activity . traditional models of the solar cycle usually require information about the starting time and rise time as well as the shape and amplitude of the cycle . with this tutorial , we investigate the variations in the length of the sunspot number cycle and examine whether the variability can be explained in terms of a secular pattern . we identified long - term cycles in archival data from 1610 2000 using median trace analyses of the cycle length and power spectrum analyses of the ( o - c ) residuals of the dates of sunspot minima and maxima . median trace analyses of data spanning 385 years indicate a cycle length with a period of 183 - 243 years , and a power spectrum analysis identifies a period of 188 38 years . we also find a correspondence between the times of historic minima and the length of the sunspot cycle , such that the cycle length increases during the time when the number of spots is at a minimum . in particular , the cycle length was growing during the maunder minimum when almost no sunspots were visible on the sun . our study suggests that the length of the sunspot number cycle should increase gradually , on average , over the next years , accompanied by a gradual decrease in the number of sunspots . this information should be considered in cycle prediction models to provide better estimates of the starting time of each cycle .
flow of granular media can be difficult to predict and challenging to model because of the inherent complexity of the collective motion of large numbers of particles . yet, granular flows can sometimes be modeled using a continuum approximation , for example , as in the case of flow in a rotating tumbler and flow down an inclined plane or heap .the usual assumption in these geometries is that spanwise particle motion is primarily diffusive and averages to zero .however , there are situations where spanwise flow occurs . for example , in a partially - filled , long cylindrical tumbler rotating about its axis , endwall friction causes pathlines to curve near the endwalls . in the case of bidisperse particles ,this out - of - plane flow can drive axial segregation and initiate axial band formation .and denote the rotation speed and flowing layer velocity , respectively , and is the dynamic angle of repose of the free surface with respect to horizontal . ] here we consider flow in partially - filled three - dimensional ( 3d ) tumblers , such as spherical tumblers [ fig .[ figmodel](a ) ] , rotating with angular velocity about a horizontal axis that intersects the tumbler at its `` poles . ''our interest derives from the desire to extend the understanding of granular flow in quasi - two - dimensional ( 2d ) cases to fully 3d flows .we consider the situation where the free surface is essentially flat and continuously flowing . in this regime ,the surface of the flowing layer maintains a dynamic angle of repose with respect to horizontal which depends on the frictional properties and diameter of the particles , and the rotational speed of the tumbler .a simple model of 3d flow in a spherical tumbler assumes that flow in each plane perpendicular to the axis of rotation is essentially that of a two - dimensional circular slice of the appropriate diameter , as shown in fig .[ figmodel](a ) . in this reduced quasi-2d geometry , particles enter the upstream end of the thin , rapidly flowing layer from the fixed bed ( the region of particles in solid body rotation ) , flow downslope , and return to the fixed bed following the idealized streamlines shown in fig . [ figmodel](b ) .the streamwise velocity profile in the flowing layer decreases approximately linearly with depth , while the portion of the ` solid body ' region nearest the flowing layer exhibits a much slower creeping motion . despite the attractiveness of its simplicity , there are indications that the 2d flow assumption is imperfect. simulations of monodisperse flow in a partially - filled spherical tumbler indicate a slight out - of--plane curvature in the trajectories of surface particles .asymmetries between the upstream and downstream portions of the curved trajectories manifest as axial drift . unlike a partially - filled cylindrical tumbler , the trajectory curvature can not be directly attributed to frictional endwall effects .rather , the curved particle trajectories appear to be related to the relative curvature of the tumbler walls to the surface of spherical particles as the paths of large particles curve more than the paths of small particles for the same tumbler diameter . 
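The quasi-2D picture sketched above can be made concrete with a short illustrative script (Python; the numbers are placeholders rather than the experimental values): each axial slice of the sphere is treated as an independent circular tumbler whose radius shrinks toward the poles, and the streamwise velocity in the flowing layer is taken to fall off linearly with depth.

```python
import numpy as np

R = 0.07                 # tumbler radius in m (illustrative value only)
u_s, delta = 0.2, 0.005  # assumed surface speed (m/s) and flowing-layer depth (m)

def slice_radius(z):
    """Radius of the quasi-2D circular slice at axial distance z from the equator."""
    return np.sqrt(np.clip(R**2 - z**2, 0.0, None))

def streamwise_velocity(y):
    """Idealized linear profile: u = u_s at the free surface (y = 0) and u = 0 at
    the bottom of the flowing layer (y = -delta); the slowly creeping bed below
    is simply set to zero here."""
    return np.where(y >= -delta, u_s * (1.0 + y / delta), 0.0)

print(slice_radius(np.linspace(0.0, R, 5)))          # slices shrink toward the pole
print(streamwise_velocity(np.array([0.0, -0.5 * delta, -2.0 * delta])))
```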
in this paper , axial drift is examined in experiment and simulation . experimentally , particles are tracked on the surface and along the tumbler wall to measure the axial drift . cross sections of the tumbler are also imaged to visualize the axial motion of colored particles within the bed . discrete element method ( dem ) simulations are performed to obtain particle trajectories and velocities to understand the axial drift in more detail . similar axial drift occurs in both experiment and simulation over a range of tumbler and particle parameters , confirming the robust nature of the phenomenon . [ figure [ figtime ] : predicted pass time vs. normalized axial position in a spherical tumbler . curves extend from the equator to the maximum axial position of the free surface . the region between vertical dashed lines represents the initial position of the tracer band in experiments and simulations . ] the tumblers in these experiments were clear acrylic spheres rotated at constant angular velocity about a horizontal axis by a motor . in the band drift experiments , a -cm diameter tumbler was filled to a fill fraction ( by volume ) varying between and with mm diameter soda - lime glass beads ( siliglit deco beads , sigmund lindner gmbh , germany ) and rotated at rpm . a tracking band [ fig . [ figcrosssection](a ) ] was formed by filling the space between two thin partitions ( at cm and cm ) with light colored beads and filling the exterior with dark colored beads . to reduce electro - static interactions between particles and the wall , the inside of the tumbler was either wiped with an antistatic wipe ( staticide , acl inc . , chicago , il ) or treated with an antistatic spray ( sp 610 , sprayon inc . , cleveland , oh ) prior to filling . to reduce inter - particle static charging when the relative humidity was below 50% , a small amount of deionized water ( typically 2 - 5 ) was allowed to evaporate in the sealed tumbler to increase the relative humidity above 50% . to view cross sections on planes perpendicular to the flow direction in experiment after tumbling , the grain bed was immobilized by pouring hot gelatin into the tumbler . once the gelatin cooled to room temperature , the tumbler was placed in a freezer to set the gelatin . the tumbler was then cut in half along the meridional plane and photographed . additional parallel planes behind the initial cut were exposed by carefully removing particles with a scraper . two sets of images were obtained for each experiment from the halves created by the initial cut . before presenting the experimental results , we note that the time between surface particle passes through the flowing layer is always less than the tumbler rotation period for the fill fractions considered here and , as a consequence , the number of particle passes ( cycles ) is always greater than the number of tumbler rotations . this occurs because the bed surface in any quasi-2d slice perpendicular to the rotation axis subtends an angle , measured with respect to the rotation axis ( fig . [ figtime ] inset ) . ignoring the relatively much shorter time spent in the flowing layer ( see appendix [ restime ] ) , the ratio of the pass time to the rotation period is set by this subtended angle ; the angle , and hence the ratio , varies with axial position since the surface of the particle bed does not extend to the poles . figure [ figtime ] shows the predicted value of this ratio as a function of axial position normalized by the tumbler radius . for fixed axial position the pass time increases with fill fraction , while for fixed fill fraction it decreases ( increases ) with distance from the equator for fill fractions below ( above ) 50% .
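the pass - counting geometry described above can be made concrete with a short python sketch . this is an illustrative toy calculation , not the authors' analysis : it assumes the free surface in each axial slice is a chord at a fixed signed distance h from the slice centre ( h = 0 for a half - full tumbler , h < 0 for less than half full ) , and that the time spent in the flowing layer is negligible .

```python
import numpy as np

def passes_per_rotation(z, R=0.07, h=0.0):
    """Estimated number of flowing-layer passes per tumbler rotation for a
    quasi-2D slice at axial position z (metres) in a spherical tumbler of
    radius R, with the free surface a chord at signed height h above the
    slice centre (h = 0 corresponds to a 50% fill).  Flowing-layer transit
    time is neglected, so one pass takes the solid-body arc time only."""
    z = np.asarray(z, dtype=float)
    r = np.sqrt(R**2 - z**2)              # radius of the circular slice at z
    if np.any(np.abs(h) >= r):
        raise ValueError("free surface does not intersect this slice")
    beta = np.arccos(h / r)               # half-angle subtended by the surface chord
    solid_fraction = 1.0 - beta / np.pi   # fraction of one rotation spent in solid body
    return 1.0 / solid_fraction           # passes (cycles) per tumbler rotation

z = np.linspace(0.0, 0.06, 4)
print(passes_per_rotation(z, R=0.07, h=0.0))    # 50% full: exactly 2 everywhere
print(passes_per_rotation(z, R=0.07, h=-0.02))  # <50% full: grows toward the pole
```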
at a 50% fill fraction the pass time is independent of axial position . [ table [ simtable ] : dem simulation parameters . ] the negligible flowing layer passage time assumption made in sec . ii was verified by experiments and simulations with a mm colored tracer bead in a bed of mm beads . a larger tracer was used because it remained at the top of the flowing layer and at the tumbler wall when in solid body rotation , which allowed the time spent in solid body rotation to be measured . a 50% fill fraction was used because the pass time should be independent of axial position for this fill fraction . the relative flowing layer time and the solid body rotation residence time were measured for 20 tumbler rotations . results in table [ timetable ] indicate that the solid body residence time , as a fraction of the rotation period , is slightly longer than the predicted value of 0.5 for this fill fraction , probably due to a small degree of internal slip and slight rearrangement of particles in the fixed bed as the tumbler rotates . the flowing layer time is about an order of magnitude less than the solid body residence time . it increases slightly with rotation rate , which is probably a result of the thicker flowing layer at these high flow rates . from simulations , the flowing layer at the equator was 1.3 times thicker at 30 rpm than at 3 rpm , resulting in a twofold increase in flowing layer passage time . flowing layer passage times can also be estimated from experiments in a quasi-2d circular tumbler by jain et al . and from the model of christov et al . ; both methods give values similar to those obtained in tracer experiments ( table [ timetable ] ) . for the dem simulations , a standard linear - spring and viscous damper force model was used to calculate the normal force between two contacting particles , $ \mathbf{f}^{n}_{ij } = \left [ k_n \delta_{ij } - \gamma_n ( \mathbf{v}_{ij } \cdot \hat{\mathbf{r}}_{ij } ) \right ] \hat{\mathbf{r}}_{ij } $ , where $ \delta_{ij } $ is the particle overlap , $ k_n $ is the normal stiffness and $ \gamma_n $ is the normal damping , both set by the specified collision time and restitution coefficient . a standard tangential force model with elasticity was implemented , in which the tangential force is proportional to the tangential stiffness and to the net tangential displacement accumulated from the relative tangential velocity of the two particles after contact is first established . the velocity - verlet algorithm was used to update the position , orientation , and linear and angular velocity of each particle . tumbler walls were modeled as both smooth surfaces ( smooth walls ) and as compositions of bonded particles ( rough walls ) . both wall conditions have infinite mass for calculation of the collision force between the tumbling particles and the wall . rough walls were used exclusively for double cone tumbler simulations . most spherical tumbler simulations used smooth walls , though a few used rough walls for comparison ; both wall types produced similar results . to characterize the mean velocity field , the spherical computational domain was divided into cubical bins of width .
in most cases , this bin width adequately resolves the flowing layer , as the thickness of the flowing layer typically ranges from . local flow properties were obtained by averaging values for all particles in each bin every 100 time steps for a total of 15s of physical time .
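for readers unfamiliar with the contact model used in the simulations , the following python sketch shows a generic linear - spring / viscous - damper normal force and a velocity - verlet update of the kind described above . it is a schematic stand - in rather than the simulation code : the stiffness , damping and particle parameters are placeholder values , and tangential forces , rotation and wall contacts are omitted .

```python
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, d, k_n, gamma_n):
    """Linear-spring / viscous-damper normal force on particle i from particle j.
    x_*, v_* : position and velocity vectors (consistent units)
    d        : particle diameter (monodisperse contact)
    k_n      : normal stiffness, gamma_n : normal damping coefficient
    Returns the zero vector when the particles do not overlap."""
    r_ij = x_i - x_j
    dist = np.linalg.norm(r_ij)
    overlap = d - dist
    if overlap <= 0.0:
        return np.zeros_like(x_i)
    r_hat = r_ij / dist                     # unit vector from j to i
    v_n = np.dot(v_i - v_j, r_hat)          # normal component of the relative velocity
    f_n = k_n * overlap - gamma_n * v_n     # spring repulsion plus viscous damping
    return f_n * r_hat

def velocity_verlet_step(x, v, a, dt, accel_fn):
    """One velocity-Verlet step: update positions, recompute forces, update velocities."""
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = accel_fn(x_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new, a_new

# two slightly overlapping 2 mm particles approaching head-on (placeholder values)
x1, x2 = np.array([0.0, 0.0]), np.array([1.9e-3, 0.0])
v1, v2 = np.array([0.01, 0.0]), np.array([-0.01, 0.0])
print(normal_contact_force(x1, x2, v1, v2, d=2e-3, k_n=1e4, gamma_n=5e-4))
```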
models of monodisperse particle flow in partially filled three - dimensional tumblers often assume that flow along the axis of rotation is negligible . we test this assumption , for spherical and double cone tumblers , using experiments and discrete element method simulations . cross sections through the particle bed of a spherical tumbler show that , after a few rotations , a colored band of particles initially perpendicular to the axis of rotation deforms : particles near the surface drift toward the pole , while particles deeper in the flowing layer drift toward the equator . tracking of mm - sized surface particles in tumblers with diameters of 8 - 14 cm shows particle axial displacements of one to two particle diameters , corresponding to axial drift that is 1 - 3% of the tumbler diameter , per pass through the flowing layer . the surface axial drift in both double cone and spherical tumblers is zero at the equator , increases moving away from the equator , and then decreases near the poles . comparing results for the two tumbler geometries shows that wall slope causes axial drift , while drift speed increases with equatorial diameter . the dependence of axial drift on axial position for each tumbler geometry is similar when both are normalized by their respective maximum values .
advances in technology underpinning multiple domains have increased the capacity to generate and store data and metadata relating to domain processes . the field of data science is continuously evolving to meet the challenge of gleaning insights from these large data sets , with extensive research in exact algorithms , heuristics and meta - heuristics for solving combinatorial optimisation problems . the primary advantage of using exact methods is the guarantee of finding the global optimum for the problem . however , a disadvantage when solving complex ( np - hard ) problems is the exponential growth of the execution time with the problem instance size . heuristics tend to be efficient , but solution quality can not be guaranteed and techniques are often not versatile . meta - heuristics attempt to consolidate these two approaches and deliver an acceptable solution in a reasonable time frame . a large number of meta - heuristics designed for solving complex problems exist in the literature and the genetic algorithm ( ga ) has emerged as a prominent technique , using intensive global search heuristics that explore a search space intelligently to solve optimisation problems . although the algorithms must traverse large spaces , the computationally intensive calculations can be performed independently . compute unified device architecture ( cuda ) is nvidia s parallel computing platform which is well suited to many computational tasks , particularly where data parallelism is possible . implementing a ga to perform cluster analysis on vast data sets using this platform allows one to mine through the data relatively quickly and at a fraction of the cost of large data centres or computational grids . a number of authors have considered parallel architectures to accelerate gas ( see as examples ) . while the work of is conceptually similar to the implementation proposed in this paper , a key difference is our choice of fitness function for the clustering scheme . giada and marsili propose an unsupervised , parameter - free approach to finding data clusters , based on the maximum likelihood principle . they derive a log - likelihood function , where a given cluster configuration can be assessed to determine whether it represents the inherent structure for the dataset : cluster configurations which approach the maximum log - likelihood are better representatives of the data structure . this log - likelihood function is thus a natural candidate for the fitness function in a ga implementation , where the population continually evolves to produce a cluster configuration which maximises the log - likelihood . the optimal number of clusters is a free parameter , unlike in traditional techniques where the number of clusters needs to be specified a priori . while unsupervised approaches have been considered ( see and references therein ) , the advantage of the giada and marsili approach is that it has a natural interpretation for clustering in the application domain explored here . monitoring intraday clustering of financial instruments allows one to better understand market characteristics and systemic risks . while genetic algorithms provide a versatile methodology for identifying such clusters , serial implementations are computationally intensive and can take a long time to converge to a good approximation .
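to make the fitness function concrete before the formal treatment in section 2 , here is a minimal python sketch of the likelihood per feature in the form usually quoted for the giada and marsili model ( singleton clusters and internally uncorrelated clusters contribute zero ) . it is an illustrative reference implementation rather than the gpu kernel developed in this paper , and the exact expression should be checked against the original reference .

```python
import numpy as np

def giada_marsili_likelihood(labels, corr):
    """Log-likelihood per feature, L_c, of a cluster configuration.
    labels : (N,) integer cluster label for each object
    corr   : (N, N) Pearson correlation matrix of the objects
    Clusters of size one, or whose internal correlation c_s does not exceed
    their size n_s, add nothing (the 'no structure' baseline is L_c = 0)."""
    labels = np.asarray(labels)
    total = 0.0
    for s in np.unique(labels):
        members = np.flatnonzero(labels == s)
        n_s = members.size
        if n_s < 2:
            continue
        # internal correlation: sum of all pairwise correlations, diagonal included
        c_s = corr[np.ix_(members, members)].sum()
        if c_s <= n_s or c_s >= n_s**2:      # uncorrelated or degenerate cluster
            continue
        total += 0.5 * (np.log(n_s / c_s)
                        + (n_s - 1) * np.log((n_s**2 - n_s) / (n_s**2 - c_s)))
    return total

# a GA chromosome is simply a label vector; its fitness is L_c, for example:
# fitness = giada_marsili_likelihood(chromosome, corr)
```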
in this paper, we introduce a maintainable and scalable master - slave parallel genetic algorithm ( pga ) framework for unsupervised cluster analysis on the cuda platform , which is able to detect clusters using the giada and marsili likelihood function . by applying the proposed cluster analysis approach and examining the clustering behaviour of financial instruments ,this offers a unique perspective to monitoring the intraday characteristics of the stock market and the detection of structural changes in near - real - time .the novel implementation presented in this paper builds on the contribution of cieslakiewicz . while this paper provides an overview and specific use - case for the algorithm ,the authors are investigating aspects of adjoint parameter tuning , performance scalability and the impact on solution quality for varying stock universe sizes and cluster types .this paper proceeds as follows : section 2 introduces cluster analysis , focusing on the maximum likelihood approach proposed by giada and marsili .section 3 discusses the master - slave pga .section 4 discusses the cuda computational platform and our specific implementation .section 5 discusses data and results from this analysis , before concluding in section 6 .cluster analysis groups objects according to metadata describing the objects or their associations .the goal is to ensure that objects within a group exhibit similar characteristics and are unrelated to objects in other groups .the greater the homogeneity within a group , and the greater the heterogeneity between groups , the more pronounced the clustering . in order to isolate clusters of similar objects, one needs to utilise a data clustering approach that will recover inherent structures efficiently .the correlation measure is an approach to standardise the data by using the statistical interdependence between data points .the correlation indicates the direction ( positive or negative ) and the degree or strength of the relationship between two data points . 
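as a concrete illustration of the correlation measure discussed above ( formalised as the pearson coefficient in the next paragraph ) , a minimal numpy sketch for turning a window of observations into a correlation matrix is given below ; the variable names and the toy data are illustrative only .

```python
import numpy as np

def pearson_correlation_matrix(returns):
    """Pearson correlation matrix for a (T, N) window of observations:
    T samples (e.g. intraday bars) for each of N instruments."""
    x = np.asarray(returns, dtype=float)
    x = (x - x.mean(axis=0)) / x.std(axis=0)   # standardise each column
    return (x.T @ x) / x.shape[0]              # entries lie in [-1, 1]

# toy example: three instruments driven partly by one common factor
rng = np.random.default_rng(0)
common = rng.standard_normal((500, 1))
returns = 0.7 * common + 0.7 * rng.standard_normal((500, 3))
print(np.round(pearson_correlation_matrix(returns), 2))
```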
the most common correlation coefficient which measures the relationship between data points is the _ pearson correlation coefficient _ , which is sensitive only to a linear relationship between them . the pearson correlation is + 1 in the case of a perfect positive linear relationship and -1 in the case of a perfect negative linear relationship and some value between -1 and + 1 in all other cases , with values close to 0 signalling negligible interdependence . any specific clustering procedure entails optimising some kind of criterion , such as minimising the within - cluster variance or maximising the distance between the objects or clusters . maximum likelihood estimation is a method of estimating the parameters of a statistical model . data clustering on the other hand deals with the problem of classifying or categorising a set of objects or clusters , so that the objects within a group or cluster are more similar than objects belonging to different groups . if each object is identified by measurements , then an object can be represented as a tuple , , in a -dimensional space . data clustering will try to identify clusters as more densely populated regions in this vector space . thus , a configuration of clusters is represented by a set of integer labels , where denotes the cluster that object belongs to and is the number of objects ( if , then object and object reside in the same cluster ) , and if takes on values from to and , then each cluster is a _ singleton _ cluster constituting one object only . one can apply super - paramagnetic ordering of a -state potts model directly for cluster identification . in a market potts model , each stock can take on -states and each state can be represented by a cluster of similar stocks . cluster membership is indicative of some commonality among the cluster members . each stock has a component of its dynamics as a function of the state it is in and a component of its dynamics influenced by stock specific noise . in addition , there may be global couplings that influence all the stocks , i.e. the external field that represents a market mode . in the super - paramagnetic clustering approach , the cost function can be considered as a hamiltonian whose low energy states correspond to cluster configurations that are most compatible with the data sample . structures are then identified with configurations for the cluster indices , which represents the cluster to which the -th object belongs . this allows one to interpret each label as a potts spin in the potts model hamiltonian , with couplings decreasing with the distance between objects . the hamiltonian takes the form shown below , where the spins can take on -states and the external magnetic fields appear in the second term . the first term represents common internal influences and the second term represents external influences . we ignore the second term when fitting data , as we include shared factors directly in later sections when we discuss information and risk and the influence of these on price changes . in the potts model approach one can think of the coupling parameters as being a function of the correlation coefficient . this is used to specify a distance function that is decreasing with the distance between objects . if all the spins are related in this way then each pair of spins is connected by some non - vanishing coupling .
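for reference , a standard form of the potts hamiltonian with pair couplings and an external field ( the form the discussion above appeals to ) is reproduced here ; the normalisation and sign conventions used by the authors may differ :

```latex
H(\{s\}) \;=\; -\sum_{i<j} J_{ij}\,\delta_{s_i,\,s_j}
           \;-\; \sum_{i=1}^{N} h_i\,\delta_{s_i,\,s_0},
\qquad s_i \in \{1,\dots,q\},
```

where $\delta$ is the kronecker delta , the couplings $J_{ij}$ grow with the correlation ( i.e. decrease with the distance ) between objects $i$ and $j$ so that similar objects favour the same spin state , and the external fields $h_i$ couple each spin to a reference `` market '' state $s_0$ ; the first ( coupling ) term is the one retained when fitting the data .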
in this model ,the case where there is only one cluster can be thought of as a ground state .as the system becomes more excited , it could break up into additional clusters and each cluster would have specific potts magnetisations , even though nett magnetisation may remain zero for the complete system .generically , the correlation would then be both a function of time and temperature in order to encode both the evolution of clusters , as well as the hierarchy of clusters as a function of temperature . in the basic approach ,one is looking for the lowest energy state that fits the data . in order to parameterise the model efficientlyone can choose to make the noh ansatz and use this to develop a maximum - likelihood approach rather than explicitly solving the potts hamiltonian numerically .following giada and marsili , we assume that price increments evolve under noh model dynamics , whereby objects belonging to the same cluster should share a common component : here , represents the features of object and is the label of the cluster that the object belongs to .the data has been normalised to have zero mean and unit variance . is a vector describing the deviation of object from the features of cluster and includes measurement errors , while describes cluster - specific features . is a loading factor that emphasises the similarity or difference between objects in cluster . in this researchthe data set refers to a set of the objects , denoting assets or stocks , and their features are prices across days in the data set .the variable is indexing stocks or assets , whilst is indexing days . if , all objects with are identical , whilst if , all objects are different .the range of the cluster index is from 1 to in order to allow for singleton clusters of one object or asset each . if one takes equation 2 as a statistical hypothesis and assumes that both and are gaussian vectors with zero mean and unit variance , for values of , it is possible to compute the probability density for any given set of parameters by observing the data set as a realisation of the common component of equation 2 as follows : the variable is the dirac delta function and denotes the mathematical expectation . for a given cluster structure ,the likelihood is maximal when the parameter takes the values the quantity in equation 4 denotes the number of objects in cluster , i.e. the variable is the internal correlation of the cluster , denoted by the following equation : the variable is the _ pearson correlation coefficient _ of the data , denoted by the following equation : the maximum likelihood of structure can be written as ( see ) , where the resulting likelihood function per feature is denoted by from equation 8 , it follows that for clusters of objects that are uncorrelated , i.e. 
where the internal correlation of every cluster reduces to its size , or when the objects are grouped in singleton clusters for all the cluster indexes , the likelihood vanishes . equation 8 illustrates that the resulting maximum likelihood function depends on the _ pearson correlation coefficient _ and hence exhibits the following advantages in comparison to conventional clustering methods :
* it is * unsupervised * : the optimal number of clusters is unknown _ a priori _ and not fixed at the beginning
* the interpretation of results is * transparent * in terms of the model , namely equation 2 .
giada and marsili state that the log - likelihood provides a measure of the structure inherent in the cluster configuration represented by the set . the higher the value , the more pronounced the structure . in order to localise clusters of normalised stock returns in financial data , giada and marsili made use of a _ simulated annealing _ algorithm , with the log - likelihood as the cost function , and applied it to real - world data sets to substantiate their approach . this was then compared to other clustering algorithms , such as _ k - means _ , _ single linkage _ , _ centroid linkage _ , _ average linkage _ , _ merging _ and _ deterministic maximisation _ . the technique was successfully applied to south african financial data by mbambiso et al . , using a serial implementation of a _ simulated annealing _ algorithm ( see and ) . _ simulated annealing _ and _ deterministic maximisation _ provided acceptable approximations to the maximum likelihood structure , but were inherently computationally expensive . we promote the use of pgas as a viable approach to approximate the maximum likelihood structure . the log - likelihood will be used as the fitness function and a pga will be used to find its maximum , in order to efficiently isolate clusters in correlated financial data . one of the key advantages of gas is that they are conceptually simple . the core algorithm can be summarised into the following steps : _ initialise population _ , _ evolve individuals _ , _ evaluate fitness _ , _ select individuals to survive to the next generation _ . gas exhibit the trait of broad applicability , as they can be applied to any problem whose solution domain can be quantified by a function which needs to be optimised . specific genetic operators are applied to the parents , in the process of reproduction , which then give rise to offspring . the genetic operators can be classified as follows : the purpose of selection is to isolate fitter individuals in the population and allow them to propagate in order to give rise to new offspring with higher fitness values . we implemented the _ stochastic universal sampling selection operator _ , where individuals are mapped to contiguous segments on a line in proportion to their fitness values . individuals are then selected by sampling the line at uniformly spaced intervals . while fitter individuals have a higher probability of being selected , this technique improves the chances that weaker individuals will be selected , allowing diversity to enter the population and reducing the probability of convergence to a local optimum . crossover is the process of mating two individuals , with the expectation that they can produce a fitter offspring . the crossover genetic operation involves the selection of random loci to mark a cross site within the two parent chromosomes , copying the genes to the offspring .
a bespoke_ knowledge - based crossover _ operator was developed for our implementation , in order to incorporate domain knowledge and improve the rate of convergence .mutation is the key driver of diversity in the candidate solution set or search space .it is usually applied after crossover and aims to ensure that genetic information is randomly distributed , preventing the algorithm from being trapped in local minima .it introduces new genetic structures in the population by randomly modifying some of its building blocks and enables the algorithm to traverse the search space globally .coley states that fitness - proportional selection does not necessarily favour the selection of any particular individual , even if it is the fittest . thus the fittest individuals may not survive an evolutionary cycle .elitism is the process of preserving the fittest individuals by inherent promotion to the next generation , without undergoing any of the genetic transformations of crossover or mutation .replacement is the last stage of any evolution cycle , where the algorithm needs to replace old members of the current population with new members .this mechanism ensures that the population size remains constant , while the weakest individuals in each generation are dropped .+ although gas are very effective for solving complex problems , this positive trait can unfortunately be offset by long execution times , due to the traversal of the search space .gas lend themselves to parallelisation , provided the fitness values can be determined independently for each of the candidate solutions . while a number of schemes have been proposed in the literature to achieve this parallelisation ( see , and ) , we have chosen to implement the _ master - slave _ model .master - slave gas , also denoted as global pgas , involve a single population , but distributed amongst multiple processing units for determination of fitness values and the consequent application of genetic operators .they allow for computation on shared - memory processing entities or any type of distributed system topology , for example grid computing .ismail provides a summary of the key features of the master - slave pga : the algorithm uses a single population ( stored by the master ) and the fitness evaluation of all of the individuals is performed in parallel ( by the slaves ) .communication occurs only as each slave receives the individual ( or subset of individuals ) to evaluate and when the slaves return the fitness values , sometimes after mutation has been applied with the given probability .the particular algorithm we implemented is _ synchronous _ , i.e. 
the master waits until it has received the fitness values for all individuals in the population before proceeding with selection and mutation . the _ synchronous _ master - slave pga thus has the same properties as a conventional ga , except evaluation of the fitness of the population is achieved at a faster rate . the algorithm is relatively easy to implement and a significant speedup can be expected if the communications cost does not dominate the computation cost . the whole process has to wait for the slowest processor to finish its fitness evaluations until the selection operator can be applied . a number of authors have used the message passing interface ( mpi ) paradigm to implement a master - slave pga . digalakis and margaritis implement a synchronous mpi pga and shared - memory pga , whereby fitness computations are parallelised and other genetic operators are applied by the master node only . they demonstrate a computation speed - up which scales linearly with the number of processors for large population sizes . zhang et al . use a centralised control island model to concurrently apply genetic operators to sub - groups , with a bespoke migration strategy using elite individuals from sub - groups . nan et al . used the matlab parallel computing and distributed computing toolboxes to develop a master - slave pga , demonstrating its efficacy on the image registration problem when using a cluster computing configuration . for our implementation , we made use of the nvidia cuda platform to achieve massive parallelism by utilising the graphical processing unit ( gpu ) streaming multiprocessors ( sm ) as slaves , and the cpu as master . compute unified device architecture ( cuda ) is nvidia s platform for massively parallel high - performance computing on the nvidia gpus . at its core are three key abstractions : a hierarchy of thread groups , shared memories , and barrier synchronisation . full details on the execution environment , thread hierarchy , memory hierarchy and thread synchronisation schemes have been omitted here , but we refer the reader to nvidia technical documentation for a comprehensive discussion . the cuda algorithm and the respective testing tools were developed using microsoft visual studio 2012 professional , with the nvidia nsight extension for cuda - c projects . the cuda clustering algorithms were tested on the following architectures to determine their versatility : [ table : development , testing and benchmarking environments . ] in this section , we illustrate a sample of the resultant cluster configurations which were generated from our model , represented graphically as msts . this serves as a particular domain application which provides an example of resulting cluster configurations which have meaningful interpretations . the thickness of the edges connecting nodes gives an indication of the strength of the correlation between stocks . the south african equity market is often characterised by diverging behaviour between financial / industrial stocks and resource stocks and strong coupling with global market trends . in figure 2 , we see 4 distinct clusters emerge as a result of the early morning trading patterns , just after market open . most notably , a 6-node financial / industrial cluster ( slm , sbk , asa , shf , gfi , oml ) and a 3-node resource cluster ( bil , sol , agl ) .
at face value , these configurations would be expected ; however , we notice that gfi , a gold mining company , appears in the financial cluster and fsr , a banking company , does not appear in the financial cluster . these are examples of short - term decoupling behaviour of individual stocks due to idiosyncratic factors . figure 3 illustrates the effect of the uk market open on local trading patterns . we see a clear emergence of a single large cluster , indicating that trading activity by uk investors has a significant impact on the local market . when examining the large single cluster , all of the stocks have either primary or secondary listings in the us and uk . in particular , sab and ang have secondary listings on the london stock exchange ( lse ) , whereas bil and agl have primary listings on the lse . it is also unusual to see such a strong link ( correlation ) between agl , a mining company , and cfr , a luxury goods company . this may be evidence that significant uk trading in these 2 stocks can cause a short - term elevated correlation , which may not be meaningful or sustainable . figure 4 considers midday trading patterns . we see that the clustering effect from uk trading has dissipated and multiple disjoint clusters have emerged . cfr has decoupled from agl in the 2 hours after the uk market open , as we might expect . we see a 4-node financial / industrial cluster ( npn , mtn , asa , imp ) and a 4-node resource cluster ( agl , sab , sol , bil ) ; imp , a mining company , appears in the financial / industrial cluster . figure 5 illustrates the effect of the us market open on local trading patterns . similar to what we observed in figure 3 , we see the emergence of a large single cluster , driven by elevated short - term correlations amongst constituent stocks . this provides further evidence that significant trading by foreign investors in local stocks can cause a material impact on stock market dynamics . this paper verifies that the giada and marsili likelihood function is a viable , parallelisable approach for isolating residual clusters in datasets on a gpu platform . key advantages compared to conventional clustering methods are : 1 ) the method is unsupervised and 2 ) the interpretation of results is transparent in terms of the model . the implementation of the master - slave pga showed that efficiency depends on various algorithm settings . the type of mutation operator utilised has a significant effect on the algorithm s efficiency in isolating the optimal solution in the search space , whilst the other adjoint parameter settings primarily impact the convergence rate . according to the benchmark test results , the cuda pga implementation runs 10 - 15 times faster than the serial ga implementation in matlab for detecting clusters in 18-stock real world correlation matrices . specifically , when using the nvidia gtx titan black card , clusters are recovered in sub - second speed , demonstrating the efficiency of the algorithm . provided intraday correlation matrices can be estimated from high frequency data , this significantly reduced computation time suggests intraday cluster identification can be practical for near - real - time risk assessment by financial practitioners . detecting cluster anomalies and measuring persistence of effects may provide financial practitioners with useful information to support local trading strategies .
from the sample results shown ,it is clear that intraday financial market evolution is dynamic , reflecting effects which are both exogenous and endogenous .the ability of the clustering algorithm to capture interpretable and meaningful characteristics of the system dynamics , and the generality of its construction , suggests the method can be successful in other domains .further investigations include adjoint parameter tuning and performance scalability for varying stock universe sizes and cluster types , quantifying the variability of solution quality on the gtx architecture as a result of non - ecc memory usage and the investigation of alternative cost - effective parallelisation schemes . given the spmd architecture used by cuda , the required data dependence across thread blocks restricts the assignment of population genes to threads and results in a large number of synchronisation calls to ensure consistency of each generation .an mpi island model with distributed fitness computation and controlled migration is perhaps a more well - posed solution to explore , however the cost of the setup required to achieve the equivalent speed - up provided by cuda should be justified .this work is based on the research supported in part by the national research foundation of south africa ( grant numbers 87830 , 74223 and 70643 ) .the conclusions herein are due to the authors and the nrf accepts no liability in this regard .99 advanced clustering technologies ._ hpc cluster blog - gtx vs tesla_. see http://www.headachefreehpc.com/company-blog/hpc-cluster-blog-gtx-vs-tesla.html for further details . accessed 2014 - 09 - 25 . j. baker . _ reducing bias and inefficiency in the selection algorithm_. proceedings of the second international conference on genetic algorithms and their application , hillsdale , new jersey , pp .14 - 21 , 1987 . c. bohm , r. noll , c. plant , b. wackersreuther . _ density - based clustering using graphics processors_. proceedings of the 18th acm conference on information and knowledge management , cikm 09 , acm : new york , pp .661 - 670 , 2009 .a. colorni , m. dorigo , f. maffioli , v. maniezzo , g. righini , m. trubian ._ heuristics from nature for hard combinatorial optimization problems_. international transactions in operational research vol .3 , pp . 1 - 21 , 1996 .t. dessel , d.p .anderson , m. magdon - ismail , h. newberg , b.k .szymanski , c.a .varela . _ an analysis of massively distributed evolutionary algorithms_. proceedings from 2010 ieee congress on evolutionary computation ( cec ) , barcelona , pp . 1 - 8 , 2010 .a. jaimes , c. coello coello ._ mrmoga : a new parallel multi - objective evolutionary algorithm based on the use of multiple resolutions_. concurrency and computation : practice and experience , vol .397 - 441 , 2007 .l. nan , g. pengdong , l. yongquan , y. wenhua . _ the implementation and comparison of two kinds of parallel genetic algorithm using matlab_. ninth international symposium on distributed computing and applications to business , engineering and science , 2010 .p. pospichal , j. jaros , j. schwarz ._ parallel genetic algorithm on the cuda architecture_. proceedings of the 2010 international conference on applications of evolutionary computation ( evoapplicatons10 ) , springer - verlag , vol .1,pp . 442 - 451 , 2010 .d. robilliard , v. marion , c. fonlupt ._ high performance genetic programming on gpu_. 
proceedings of the 2009 workshop on bio - inspired algorithms for distributed systems , bads 09 , acm : new york , ny , usa , pp . 85 - 94 , 2009 . v. tirumalai , k. ricks , k. woodbury . _ using parallelization and hardware concurrency to improve the performance of a genetic algorithm_. concurrency and computation : practice and experience , vol .443 - 462 , 2007 . s. zhang , z. he . _ implementation of parallel genetic algorithm based on cuda_. proceedings of the 4th international symposium on advances in computation and intelligence , isica 09 , berlin , heidelberg , springer - verlag , pp . 24 - 30 , 2009 .
we implement a master - slave parallel genetic algorithm ( pga ) with a bespoke log - likelihood fitness function to identify emergent clusters within price evolutions . we use graphics processing units ( gpus ) to implement a pga and visualise the results using disjoint minimal spanning trees ( msts ) . we demonstrate that our gpu pga , implemented on a commercially available general purpose gpu , is able to recover stock clusters in sub - second speed , based on a subset of stocks in the south african market . this represents a pragmatic choice for low - cost , scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised c - based fourth - generation programming language , although the results are not directly comparable due to compiler differences . combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification , the proposed implementation offers cost - effective , near - real - time risk assessment for financial practitioners . unsupervised clustering , genetic algorithms , parallel algorithms , financial data processing , maximum likelihood clustering