bristle bots are characterised by small size , robust and cheap design , and high speed of locomotion .applications of bristle bots can be found in inspection technology , search and rescue systems , and swarm robotic research .the mechanism underlying their locomotion capabilities has been studied in . to change motion direction of bristle - based mobile robots the following methodshave been reported in the literature : changing the rotation direction of an unbalanced motor , using the phase shift between two unbalanced rotors or changing the inclination of the bristle system using additional actuators .recent theoretical studies have suggested that , for systems excited by vertical oscillations and moving along a straight line , direction of motion can be controlled by tuning the frequency of actuation .we provide in this paper an experimental validation of this prediction .our results may be of interest in the field of inspection systems optimized for limited manoeuvring space , e.g. pipe inspection robots .the paper is organized as follows . in section 2we accommodate the analysis presented in for internally actuated robots in the context of an equivalent system , which consists of an inactive robot placed on a vibrating substrate .this setting provides cleaner and more efficient experimental study . in section 3we summarize the results of the experiments , and in section 4 we outline possible directions for future work .bristle bots are actuated by an internal vibrating engine . in order to better study their behaviour experimentally , however , we can avoid the encumbrance of an on - board motor by considering the setting depicted in fig .[ fig : ly - modell ] .the setting consists of a ( inactive ) robot lying on a vertically vibrating substrate ( shaker ) . as we show below , the resulting physical system , when considered in the shaker attached frame ,is identical to that of a bristle bot moving on a still substrate and driven by an internal oscillating force .the robot is modelled as a two - dimensional rigid object , consisting on a row of weightless support elements ( bristles ) of length attached to a main body of mass .the -th bristle is connected to the main body by a rotatory spring of stiffness .the inclination of the bristles with respect to the vertical is given by , where is the inclination angle in the unloaded configuration .the ( horizontal ) friction force acting at the contact point of the -th bristle with the shaker is modelled as where is the normal reaction force acting on the tip of the bristle , is a phenomenological friction coefficient , and the velocity of the contact point in the horizontal direction .we denote with a dot the derivative with respect to time .we introduce two cartesian coordinate systems in the vertical plane : the fixed reference frame ( ) and the shaker - attached frame ( ) .the vertical displacement of the shaker at time with respect to the axis is given by .we suppose that each bristle is always in contact with the shaker , and that the robot does not rotate with respect to the -plane .we have then while all contact points have the same horizontal velocity for every . applying the principle of linear momentumwe obtain where and are the coordinate of the centre of mass in the fixed frame , and is the total normal force . with and the balance of linear momentum reads notice that and are the coordinates of in the shaker - attached frame . 
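To make the stated equivalence explicit, here is a short derivation sketch using notation introduced for illustration (the original symbols did not survive extraction): s(t) is the prescribed vertical shaker displacement, y the vertical coordinate of the centre of mass in the fixed frame, Y = y - s(t) its coordinate in the shaker-attached frame, and N the total normal force.

```latex
% Vertical momentum balance in the fixed frame:
m\,\ddot{y} = N - m g .
% Substituting y = Y + s(t) gives the balance in the shaker-attached frame:
m\,\ddot{Y} = N - m g - m\,\ddot{s}(t) ,
% so for s(t) = a \sin(\omega t) the vibrating substrate acts, in its own frame,
% exactly like a still substrate plus an internal vertical oscillating force:
m\,\ddot{Y} = N - m g + m a \omega^{2} \sin(\omega t) .
```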
finally , the principle of angular momentum gives where .observe that equations ( [ onetwo ] ) and ( [ three ] ) are formally identical to the equations describing the same bristle bot model lying on a still substrate and actuated by an internal vertical force . to normalize the dynamical variables we define the following parameters define then the _ normalized _ normal force , angle difference , and horizontal velocity of the robot as applying all the definitions above , equations ( [ onetwo ] ) and ( [ three ] ) can be rewritten as the equivalent system in the dimensionless time where in the following we suppose that is a small parameter , and .we derive in this section an estimate of the average horizontal velocity of the robot , and we show how it can change sign for different values of the frequency of actuation . to obtain this estimate , we solve ( [ norm ] ) by expanding the solution in power series in the ( small ) parameter , that is expanding ( [ norm ] ) in powers of , and matching coefficients of equal power , leads to a sequence of equations to be solved successively for the unknowns ( ) , with .it can be proved rigorously , see , that converge uniformly for every small enough , and that only one periodic solution exist for each coefficient , and at each order . the resulting sum for , , and is the only periodic solution of , and any other solution of the system converges asymptotically in time to it .the zero - order system is given by and its only periodic solution is now , imposing , the first order system reads we look here for solutions of the type replacing in and matching coefficients of sines and cosines respectively , we end up with six equations which allow us to determine , , , , , and .we obtain we then recover , in particular , , where is a periodic function with zero average .indeed , the average velocity of the robot is of the order , however , we do not need to solve the second order system to recover a formula for it .we observe that , imposing , the second order expansion of the second equation in gives we know from the previously stated results in that admits one periodic solution for and .therefore , in particular , and have zero average .from then follows that the average of can be written in terms of the solution of the first order system this last equation provides an explicit formula for the approximate ( normalized ) average horizontal velocity of the robot since moreover , shows how the sign of the average velocity depends on that of the difference between the two parameters and , and ultimately on the frequency , see .the formula predicts an average motion in the negative direction for large values of , and in the positive direction for small values of .2 shows the frequency dependence of when we fit with the parameters of the prototype described below .the frequency such that is given by which gives an approximation of the frequency at which the inversion of motion of the robot occurs . in the experiments below . against excitation frequency , with .5 g , 8 mm , , nm , 9.81 , experimental setup is shown in fig . [ fig : exp ] .it consists of a passive robot prototype lying on a platform attached to an electromagnetic shaker , which provides vertical excitation .the main body of the robot is made of polymer material with length width height = 55 mm 35 mm 17 mm . , and mass 10.5 g. the bristle functionality is realised by two 30 mm wide paper strips with a free length 8 mm . 
with a mass of 55 mg ,the paper strips meet sufficiently well the model assumption of massless bristles .the centre of mass of the robot is located in the middle between the ground - bristle contact points in order to avoid rotation on the main body , see model condition ( [ winkel ] ) .in contrast with the model , the elasticity of the real bristles is equally distributed along their length . their equivalent rotational stiffness and inclination angle are calculated to be nm and 35.2 .robot and shaker are equipped with markers for motion tracking .the shaker is switched on producing vertical sinusoidal vibrations with controllable frequency and amplitude , leading to directed locomotion of the robot . at different frequencies we tune the amplitude of the shaker in order to match our analytic assumption and , in turn , to avoid the robot from losing contact with the ground .we recover a clear motion in the positive horizontal direction for frequencies below hz , and motion in the negative direction for frequencies above hz , in agreement with the theoretical predictions ( between 10 and 18 hz results are inconclusive ) . for two oscillation frequencies we filmed the experiments with an high - speed camera ( fig 3 shows a frame of the videos ) .locomotion is analysed by tracking the markers on the robot and the shaker .fig 4 presents the tracking results for an excitation below the calculated inversion frequency , while fig .5 shows the tracking results for excitation above the inversion frequency .we showed analytically and experimentally that the inversion of motion of bristle bots is possible by tuning the frequency of pure vertical excitation .future work should focus on models accounting on more quantitatively accurate description of frictional interactions .further experimental analysis is needed to find precisely the relation between robot parameters and locomotion characteristics .
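As a numerical companion to the model and the frequency sweep described above, the following is a minimal sketch of a single-effective-bristle reduction of the system, written in the shaker-attached frame. Everything in it is an assumption for illustration: the parameter values are placeholders (the fitted torsional stiffness and the drive amplitudes are not reproduced in this extraction), a small hinge damping is added purely for numerical robustness, and the sign conventions are one consistent choice rather than the paper's. The friction law is the viscous-type law F = -mu*N*v stated in the model section; the sketch only shows how a frequency-dependent average velocity can be extracted, not the paper's quantitative predictions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (not the paper's fitted values)
m    = 10.5e-3            # body mass [kg]
L    = 8.0e-3             # bristle free length [m]
k    = 4.0e-3             # equivalent torsional stiffness [N m/rad] (placeholder)
c    = 1.0e-5             # small hinge damping [N m s/rad], added for robustness only
phi0 = np.deg2rad(35.2)   # unloaded bristle inclination from the vertical [rad]
mu   = 5.0                # phenomenological friction coefficient [s/m], F = -mu*N*v
g    = 9.81               # gravity [m/s^2]

def rhs(t, state, omega, a):
    """Single-bristle reduction in the shaker-attached frame.

    state = [x, xdot, phi, phidot]. Permanent contact is assumed, so the body
    height above the platform is Y = L*cos(phi); the contact condition N >= 0
    is not checked here, so drive amplitudes must be kept small.
    """
    x, xd, phi, phid = state
    g_eff = g - a * omega**2 * np.sin(omega * t)   # effective gravity in the moving frame
    vc = xd + L * np.cos(phi) * phid               # horizontal velocity of the bristle tip
    # Torque balance on the massless bristle,
    #   k*(phi - phi0) + c*phid = L*(N*sin(phi) + F*cos(phi)),
    # combined with the friction law F = -mu*N*vc, fixes N algebraically:
    N = (k * (phi - phi0) + c * phid) / (L * (np.sin(phi) - mu * np.cos(phi) * vc))
    F = -mu * N * vc
    # Vertical balance with the contact constraint Y = L*cos(phi):
    #   -m*L*(phidd*sin(phi) + phid**2*cos(phi)) = N - m*g_eff
    phidd = (m * g_eff - N - m * L * phid**2 * np.cos(phi)) / (m * L * np.sin(phi))
    return [xd, F / m, phid, phidd]

def mean_velocity(f_hz, n_periods=120, accel_ratio=0.2):
    """Average horizontal drift at drive frequency f_hz.

    The drive amplitude is set so that the peak platform acceleration is a fixed
    fraction of g, mimicking the amplitude tuning used to keep the robot in contact.
    """
    omega = 2.0 * np.pi * f_hz
    a = accel_ratio * g / omega**2
    T = n_periods / f_hz
    y0 = [0.0, 0.0, phi0 + 0.1, 0.0]               # start slightly compressed
    sol = solve_ivp(rhs, (0.0, T), y0, args=(omega, a),
                    max_step=1.0 / (200.0 * f_hz), rtol=1e-8, atol=1e-10)
    half = sol.t > T / 2.0                          # discard the transient
    return (sol.y[0][half][-1] - sol.y[0][half][0]) / (sol.t[half][-1] - sol.t[half][0])

if __name__ == "__main__":
    for f in (5.0, 8.0, 12.0, 16.0, 20.0, 25.0, 30.0):
        print(f"{f:5.1f} Hz  ->  mean horizontal velocity {mean_velocity(f):+.3e} m/s")
```

Whether and where the computed drift changes sign depends entirely on the placeholder parameters; the point of the sketch is only the structure of the calculation (drive frequency in, time-averaged horizontal velocity out).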
Bristle bots are vibration-driven robots actuated by the motion of an internal oscillating mass. Vibrations are translated into directed locomotion due to the alternating friction resistance between the robot's bristles and the substrate during oscillations. Bristle bots are, in general, unidirectional locomotion systems. In this paper we demonstrate that the motion direction of vertically vibrated bristle systems can be controlled by tuning the frequency of their oscillatory actuation. We report theoretical and experimental results obtained by studying an equivalent system, consisting of an inactive robot placed on a vertically vibrating substrate.

why certain shapes are observed more often than others depends on the developmental history of the tissue , which is determined by e.g. the sequence of cell differentiation , and cell divisions and deaths .since a lot is still unknown about the developmental history , we do not include it in the modeling .however , since cells seem in mechanical equilibrium at any moment in development ( c.f . ) , future insights in developmental gene regulation could be translated in parameter changes that permit the modeling of the dynamics of development .simulations thus start from unstable initial conditions ( ) designed to favor the random search of final , stable topologies .we do not expect to find a quantitative correspondence between the frequency of topologies in simulations and experiments .we regard only the final result of the model simulations : we have found a local equilibrium , when the simulated shape does not change anymore .we compare this shape with the experimental results ( topology , geometry ) .distinguishing between topologies is trivial .but , due to the variability of membrane fluctuations , we find that it is difficult to describe the geometrical characteristics ( e.g. contact angles for the mutant ommatidia , interface lengths , elongation of cells ) by quantitative measurements : one obtains more information by looking at the image ( ` eyeballing ' ) .quantitative measurements serve as a complement to the eyeballing when enough data are available ( c.f .[ fig : wildtype ] ) , not as replacement .we determine for each model which parameters do influence the shape of the cone cells ; for the other parameters , we choose reasonable values ( e.g. , a compromise between simulation speed and precision , c.f . ) .we assume that ( i ) the adhesion strength is determined by the presence of these cadherins : when the two of them are present ( i.e. at interfaces between cone cells ) , adhesion is thus stronger .mutants should be modeled by only changing existing parameters .we thus require that ( ii ) to model the _roi_-mutants , we only need to change the number of cone cells ; ( iii ) to model the cadherin mutant ommatidia , only the adhesion for the mutant cells should be changed ( i.e. diminished for deletion , increased for overexpression ) ; and ( iv ) all cells of a cell type that share the same mutation should be modeled using the same parameter values .a stronger adhesion between cells and is represented by a lower interfacial tension , , which is a constant depending only on the cell types of and .we minimise the energy: is the length of the interface between cells and , is the cell s area ( the 2d equivalent of volume ) , is the cell s preferred area ( target area ) , and is the area modulus ( a lower value allows more deviations from ) .the values of are inferred from the experimental pictures , with cone ( ) cells being smaller than primary pigment ( ) cells .we assume - adhesion , mediated by both e- and n - cadherin , to be stronger than - and - adhesion , and , which are mediated by e - cadherin alone .we assume the latter two to be equal : .only three parameters need to be explored extensively : , ( ) , and . the tensions the cell shapes directly , whereas determines a cell s deviations from the target area .starting the simulations with a four - cell vertex ( a ) , we systematically find an incorrect topology ( fig .[ fig : tension]a ) : the anterior and posterior cell touch . 
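For concreteness, here is a minimal sketch of the constant-tension energy described above: interface tensions that depend only on the pair of cell types, plus a quadratic area constraint. The data layout and names are illustrative, not taken from the authors' code, and the tension values shown are arbitrary apart from the ordering assumed in the text (cone-cone contacts, carrying both cadherins, have lower tension than cone-primary or primary-primary contacts).

```python
def constant_tension_energy(cells, interfaces, gamma, area_modulus):
    """Constant-tension model: E = sum_ij gamma_ij*l_ij + sum_i lam_A*(A_i - A0_i)^2.

    cells      : dict  cell_id -> {"type": str, "area": float, "target_area": float}
    interfaces : dict  frozenset({i, j}) -> length of the interface between cells i and j
    gamma      : dict  frozenset of the two cell types -> tension (lower = stronger adhesion)
    """
    e = 0.0
    for pair, length in interfaces.items():
        i, j = tuple(pair)
        e += gamma[frozenset({cells[i]["type"], cells[j]["type"]})] * length
    for cell in cells.values():
        e += area_modulus * (cell["area"] - cell["target_area"]) ** 2
    return e

# Illustrative tensions: cone-cone contacts carry both E- and N-cadherin and thus
# adhere more strongly (lower tension) than cone-primary or primary-primary contacts.
gamma_example = {
    frozenset({"c"}): 0.5,        # cone-cone
    frozenset({"c", "p"}): 1.0,   # cone-primary pigment
    frozenset({"p"}): 1.0,        # primary-primary pigment
}
```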
even if we force the correct one , where the polar and equatorial cell touch , it is unstable and decays into the incorrect one : the interfaces between the cells are under tension , and pull the polar and equatorial cells apart .to obtain the correct topology , we need another assumption : either that the adhesion between polar and equatorial cells is stronger ( fig .[ fig : tension]b ) ; or that the cells pull less on them ( by having a stronger adhesion , fig .[ fig : tension]c ) .still , the geometry is quite different from the experiments : notably the interface between the polar and equatorial cell is too short in simulations .besides , there is no experimental evidence to support these assumptions .another optimization strategy is to determine ( up to a prefactor ) the tensions of three interfaces a , b , d that meet in a vertex from the experimentally observed contact angles , , ( ) by using .we inject those tensions in the model . by constructionwe obtain the correct contact angles , and thus topology ; but the overall geometry ( especially the interface lengths ) differs considerably from observations ( results not shown ) . for the mutant ommatidia , the requirements ( ii ) to ( iv ) mentioned above could not be satisfied with this model : there are too many cases where other parameters need to be changed as well .we conclude that this model is insufficient to coherently describe the experiments . to obtain the observed shapes, it would certainly be possible to choose a tension for each individual interface .but if the tension was just an input parameter without biological basis , then the model would not be predictive , nor help to understand the differences between the cells .adhesion between two cells tends to extend their contact length ; it thus contributes negatively to the energy , , where : in agreement with intuition , a higher describes a stronger adhesion , while in absence of adhesion .this extension is compensated by an elastic cell cortex term , , where is the perimeter modulus , and is the target perimeter of cell .the cell perimeter is the sum of its interfaces , .we thus minimise the energy:\,.\label{eq : adhesion hamiltonian}\ ] ] the interfacial tension between cells and is the energy change associated with a change in membrane length ( c.f . ) ; eq .yields : as in the previous model , is positive , else the cell would be unstable .however , it is no longer an input parameter . a stronger adhesion ( high )decreases the tension : this will usually cause an extension of the perimeter , which increases this tension even more .we represent all adhesion terms as combinations of e- and n - cadherin mediated adhesion ( and , respectively ) . in the wildtype , the adhesion between cells is mediated by both cadherins , so ; whereas all other interfaces only have e - cadherin , so .values of are estimated from pictures .the target perimeter ( expressed in units of ) should be larger for cells that deviate more from a circular shape , i.e. for the cells .we thus adjust 6 main parameters : , , , , , , which is too much to explore systematically .we adjust the parameters by hand , for wildtype and mutant configurations simultaneously , since the wildtype alone does not sufficiently constrain the number of optimal parameter combinations . unless indicated , throughout this paper , and for all figures except fig .[ fig : tension ] , we use eq . with the same set of parameters ( ) for wildtype and mutants . 
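A matching sketch of the variable-tension (adhesion plus cortex elasticity) energy, together with the interfacial tension it implies. The factor of 2 in the derived tension comes from differentiating the quadratic perimeter term; prefactor conventions may differ from the paper's, and the names are again illustrative.

```python
def adhesion_cortex_energy(cells, interfaces, J, perimeter_modulus, area_modulus):
    """Variable-tension model:
       E = -sum_ij J_ij*l_ij + sum_i [lam_P*(P_i - P0_i)^2 + lam_A*(A_i - A0_i)^2]."""
    perimeter = {cid: 0.0 for cid in cells}
    e = 0.0
    for pair, length in interfaces.items():
        i, j = tuple(pair)
        e -= J[frozenset({cells[i]["type"], cells[j]["type"]})] * length  # adhesion favours contact
        perimeter[i] += length
        perimeter[j] += length
    for cid, cell in cells.items():
        e += perimeter_modulus * (perimeter[cid] - cell["target_perimeter"]) ** 2
        e += area_modulus * (cell["area"] - cell["target_area"]) ** 2
    return e

def interfacial_tension(i, j, cells, perimeter, J, perimeter_modulus):
    """gamma_ij = dE/dl_ij: adhesion lowers the tension, cortex stretch of either cell raises it."""
    return (-J[frozenset({cells[i]["type"], cells[j]["type"]})]
            + 2.0 * perimeter_modulus * (perimeter[i] - cells[i]["target_perimeter"])
            + 2.0 * perimeter_modulus * (perimeter[j] - cells[j]["target_perimeter"]))
```

In this form the feedback discussed later is visible directly: stretching a cell's perimeter beyond its target raises the tension of all of its interfaces.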
starting the simulations with a four - cell vertex ( a ), the cells relax either into the correct topology where the polar and equatorial cells touch ( fig .[ fig : wildtype ] ) ; or into the incorrect one where anterior and posterior cells touch ( analogous to fig .[ fig : tension]a ) .both topologies are stable , i.e. they are local energy minima . in the correct topology ,the geometry of the simulated ommatidium resembles well the experimental pictures .more quantitatively , the contact angles measured in simulations and in experiments agree as well ( fig . [ fig : wildtype ] ) .in contrast to the constant tension model , we do not need additional assumptions .we found that the adhesion of secondary and tertiary pigment cells should be much stronger than can be expected from e - cadherin alone ( , ) , otherwise they loose contact .experimentally , deleting the e - cadherin of these cells does not induce any geometrical or topological change . both experiments and simulationsthus suggest that secondary and tertiary pigment cells might have other adhesion molecules than e- and n - cadherin . without any additional parameter, we can simulate different numbers of cells ( _ roi_-mutants ) ; the total size of the simulation lattice is adjusted accordingly .for one , two , three and five cells , only one topology is observed in experiments , and the same one in simulations ( ) . for six cells , three topologiesare observed experimentally ( fig .[ fig : roi]a - c ) . theoretically , there are two more possible equilibrium topologies for 6-cell aggregates , which are never observed although one of them has a smaller total interface length ( simulations using the surface evolver , s. cox , unpublished results 2004 ) .we here performed a total of potts model simulations with different random seeds ( see methods ) , and found only three topologies ( fig .[ fig : roi]d - f ) : they correspond to the observed ones .we observe in fig .[ fig : roi]a and c that the entire ommatidium is elongated .besides , ommatidia of _ roi_-mutants do not all have six sides and are assembled into a disordered pattern ( see ) . thus , in _roi_-mutants , ommatidia have variable shapes , which origin is not easily understood ( especially for mutants with more pigment cells ) . since in turn the shape of the ommatidium influences the geometry of its cells ( results not shown ) , studying the geometry of the cells in more details would only by possible by adding more free parameters . again without any additional parameter , simply by suppressing , we could predict the pattern of ommatidia with n - cadherin deficient cells . sincen - cadherin is only present on interfaces between cells , deletion means we set the adhesion between mutant and wildtype cells as ( mutant cells are denoted by lower case letters ) .we predict the correct topologies ( fig . [ fig : ncad]a - f and i - n ) , most of which are the same as in the wildtype .we predict qualitatively the main geometrical differences between mutants and wildtype : ( i ) the length of the interfaces between mutant cells and wildtype cells decreases ; ( ii ) the contact angles change ; ( iii ) the interface length between the remaining wildtype cells increases ( fig .[ fig : ncad]a - b and i - j ) ; and ( iv ) the length of the central interface increases ( fig .[ fig : ncad]d and l ) . 
when the polar or equatorial cell is the only cell without n - cadherin , we simulate ( fig .[ fig : ncad]m - n ) both topologies that coexist in experiments ( fig .[ fig : ncad]e - f ) .to simulate one mutant cell that mis - expresses n - cadherin , we optimize . while for the wildtype , we find an increase for the mutant , .the high adhesion of this cell with the cells severely disrupts the normal configuration .many topologies that differ considerably form the wildtype are observed in experiments and simulations ( e.g. fig .[ fig : ncad]h , p ) . when both cells mis - express n - cadherin , they balance each other and the topology is back to normal ( fig .[ fig : ncad]g ) .optimization yields ( fig . [ fig : ncad]o and ) . both and higher than the wildtype value of - adhesion ( ) . the mutant cell in fig . [ fig : nocad]a does not express e - cadherin , and it lacks adherens junctions at the interfaces with the cells .to simulate it , it would seem natural to suppress at all interfaces , that is , and . with this assumption, we obtain the correct topology , which is the same as in the wildtype ; however , the simulated geometry ( not shown ) is also the same as the wildtype , while the experiment is significantly different ( fig .[ fig : nocad]a ) .if we rather assume that - adhesion is unchanged by this mutation ( ) , we obtain a good agreement ( fig .[ fig : nocad]d ) .e - cadherin overexpression in cells ( but not in cells ) significantly affects the pattern , yielding a coexistence of different topologies : in fig .[ fig : ecad]a and b , the same cells are mutants , but the topologies differ ; the same holds for fig .[ fig : ecad ] d and e. we predict the observed topologies ( all stable ) and , qualitatively , the geometries ( fig .[ fig : ecad]f - j ) when we increase the - cell adhesion from to ; while we find that the adhesion between a wildtype and mutant cells should not change , , we should change it if both are mutants , .since e - cadherin overexpression in cells rarely induces geometrical or topological changes , we do not change their adhesion values .we predict the effect of both e - cadherin and n - cadherin missing in cells by setting .mutant cells do not adhere to any of their neighbors , fig .[ fig : nocad]e - f : intercellular space becomes visible between the cells , and the cells have shrunken .this agrees well with experiments , where mutant cells lose the apical contacts with their neighbors ( fig .[ fig : nocad]b - c ) .when surface tension is a constant model parameter , only modified by adhesion , the surface mechanics are soap - bubble - like : minimization of the interfaces with cell type dependent weights .this model proves to be insufficient here .however , in studies focussing on larger aggregates ( to cells ) , constant surface tension was sufficient to explain tissue rounding and cell sorting , and even _ dictyostelium _ morphogenesis .this constant tension model catches two important features of tissues of adherent cells : first , cells tile the space without gaps or overlap ; second , the interface between cells is under ( positive ) tension , which implies for instance that three - cell vertices are stable , unlike four - cell ones , and thus severely constrains the possible topologies . in the present example of retina development ,we show that interfacial tension should be variable , as described in a second model .tension results from a adhesion - driven extension of cell - cell interfaces , balanced by an even larger cortical tension ( eq . 
) .it explains correctly the topologies of many observations , and correctly simulates the geometries .it requires more free parameters ; but they are tested against many more experimental data ; and their origins , signs and variations are biologically relevant . adding more refinements ( and thus more free parameters )would be easy , but does not seem necessary to describe the equilibrium shape of ommatidial cells .the parameters should not be taken as quantitative predictions , since _ in vivo _ biophysical measurements to compare them to are lacking . by adjusting a set of independent free parameters in this variable tension model, we obtain topological and geometrical agreement between the simulations and the pictures of different situations : the wildtype ( fig .[ fig : wildtype ] ) , the six topologies observed in the _ roi_-mutants ( fig .[ fig : roi ] and ) ; as well as the nine cadherin deletion mutants ( figs . [ fig : ncad]a - f , [ fig : nocad ] ) by setting the corresponding parameter to zero .we also simulate cadherin overexpression mutants , by re - adjusting the corresponding parameter ( figs .[ fig : ncad]g , h , [ fig : ecad ] ) : adhesion is increased .the strongest increases are found when two overexpressing cells touch : this corresponds to the idea that the adhesion strength depends on the availability of cadherin molecules in both adhering cells .we found two cases where a mutation does not seem to change the adhesion strength : first , when deleting e - cadherin from one cell , its adhesion with a normal cell is unchanged ( fig . [fig : nocad]d ) ; second , we rarely observed shape changes in e - cadherin overexpressing cells in experiments ( c.f . ) . indeed , while a linear relation between cadherin expression and adhesion strength has been found _ in vitro _ , this need not be true _ in vivo _ , since cells have many more ways to regulate protein levels .these exceptions , thus , do not contradict the conclusion that the shapes observed in mutants are the effect of altered adhesion : an increase in the case of overexpression , a decrease in the case of deletion . in the variable tension model , the perimeter modulus and the target perimeter reflect the role of the cortical cytoskeleton .the target perimeter is always smaller than the perimeter , therefore the interfacial tension ( eq . ) is always positive , else the cell would be unstable and fall apart or disappear .the cortex of the simulated cells is contractile , and generates tension .this tension depends on the perimeter of the cell , which length depends on the cell s shape , which in turn depends on the tension : there is a feedback between tension and shape , and thus between each cell and its neighbours . to understand the effect of this feedback , let us consider the wildtype ommatidium .we assume that the four cells have equal adhesion properties .the tension at the interfaces between the two cells pulls at the polar and equatorial cell .when the tension is constant , these cells will therefore be pulled apart ( fig . [fig : tension]a ) : the cells do not react on their deformation . when the tension , however , depends on the cell s perimeter , pulling at those cells deforms them , and increases their tension : energy minimization thus requires that they stay in contact .the prediction that cytoskeletal contractility is essential for the establishment of cell shape should be tested , e.g. by treating the cells with cytoskeletal inhibitors , or genetically modifying the cytoskeleton . 
since the cytoskeleton has multiple functions that could interfere with adhesion ( c.f . ) , the results will be difficult to interpret .preliminary experimental results ( not shown ) do indicate that genetically disturbing rho - family gtpases influences the cell shape .the role of the cytoskeleton has been confirmed in various tissues and organisms ( see for reviews ) .we here present a computational framework able to test this hypothesis , which can be extended to other tissues , ranging from patterns of few cells to large - scale aggregates .retinas were stained and analyzed as described in and . in short, cell contours were visualized by staining either with cobalt sulfide ( fig .[ fig : roi ] and ) , or with the antibodies against de - cadherin , dn - cadherin ( referred to as e- resp .n - cadherin in the rest of the text ) , -catenin or -spectrin . rough eye ( _ roi _ ) flies were used to examine the topology and geometry of variable number of cone cells .the effect of eliminating or overexpressing cadherin molecules was studied in mosaic retinas composed of wild - type and mutant cells ( see ) .we examined more than five retinas in each experiment .thus at least several hundreds of ommatidia ( ) were examined for the wildtype and each mutation , except the e- and n - cadherin overexpression , in which case approximately ommatidia were examined each .some pictures used for the analysis were published previously .the cellular potts model is a standard algorithm to simulate variable cell shape , size and packing .its use in biology is motivated by the capability to handle irregular , fluctuating interfaces ( c.f . ) ; the pixelisation induced by the calculation lattice can be chosen to correspond to the pixelisation in the experimental images .each cell is defined as a certain set of pixels , here on a 2d square lattice ; their number defines the cell area .the cell shapes change when one pixel is attributed to one cell instead of another .our field of simulation for one ommatidium is a hexagon with sides of approximately pixels ( its surface is pixels , about the same as in experimental pictures ) .we use periodic boundary conditions , as if we were simulating an infinite retina with identical ommatidia .initially , the whole hexagon is filled with cells , approximately at the right positions ( ) .we treat bristle cells as tertiary pigment cells : both are situated at the edge of three ommatidia .these initial conditions , with an unstable -cell vertex in the middle , do not fix the final configuration in advance .simulations can be started with different seeds of the random number generator , to explore whether multiple solutions are possible .shape is relaxed in order to decrease the energy , eq . or eq . . the algorithm to minimize uses monte carlo sampling and the metropolis algorithm , as follows . we randomly draw ( without replacement ) a lattice pixel , and one of its eight neighboring pixels .if both pixels belong different cells , we try to copy the state of the neighboring pixel to the first one . if the copying diminishes , we accept it ; and if it increases , we accept it with probability . 
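A minimal sketch of the copy-attempt rule just described, with Metropolis acceptance governed by the fluctuation allowance T. The energy change is left as an abstract callback (it would evaluate the adhesion-and-cortex energy on the lattice); pixels are drawn with replacement here for brevity, whereas the authors draw without replacement, and other details (extended neighbour ranges for the perimeter, intercellular-space pixels) are omitted. This illustrates the update rule, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-connected neighbourhood used for choosing copy candidates
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def monte_carlo_step(lattice, delta_energy, temperature):
    """One Monte Carlo step = as many copy attempts as there are lattice pixels.

    lattice      : 2D integer array, lattice[y, x] = id of the cell owning that pixel
    delta_energy : callable (lattice, target_site, new_id) -> change in E if the pixel
                   at target_site were re-assigned to cell new_id
    temperature  : fluctuation allowance T; larger T accepts more E-increasing copies
    """
    ny, nx = lattice.shape
    for _ in range(lattice.size):
        y, x = rng.integers(ny), rng.integers(nx)
        dy, dx = NEIGHBOURS[rng.integers(len(NEIGHBOURS))]
        yn, xn = (y + dy) % ny, (x + dx) % nx        # periodic boundary conditions
        source_id = lattice[yn, xn]
        if source_id == lattice[y, x]:
            continue                                  # both pixels belong to the same cell
        dE = delta_energy(lattice, (y, x), source_id)
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            lattice[y, x] = source_id                 # accept the copy
```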
here is the difference in before and after the considered copying .the prefactor is a fluctuation ( random copying ) allowance : it determines the extent of energy - increasing copy events , leading to membrane fluctuations .since all energy parameters are scalable with the fluctuation allowance , we can fix it without loss of generality ; for numerical convenience we choose numbers of order of a hundred .we define one monte carlo time step ( mcs ) as the number of random drawings equal to the number of lattice pixels .it takes approximately to mcs to attain a shape that does not evolve anymore , that is , in mechanical equilibrium where stresses are balanced .we run the simulation much longer ( up to mcs ) to test if topological changes occur . to avoid possible effects of lattice anisotropy on cell shapes , we compute and by including interactions up to the 20 next next nearest neighbours .all perimeters indicated here are corrected by a suitable prefactor to ensure that a circle with an area of pixels has a perimeter . in experiments ,interstitial fluid is present in small amount , and cells can lose contact ( fig . [fig : nocad ] b , c ) . to simulate it in our 2d model , at each mcswe randomly choose one pixel at a cell interface and change its state into ` intercellular space ' ( a state without adhesion , nor area and perimeter constraints ) .in addition , we choose the sum of all cells target areas to be less than the total size of the hexagonal simulation field ( , ) . only when cells lose adhesion ( ) do we actually observe intercellular space in simulations ( fig .[ fig : nocad]e , f ) .we try different parameters and adjust them to improve the visual agreement ( ` eyeballing ' ) between simulated and experimental pictures . to estimate our uncertainty , we note that % changes in the values of the adhesion parameters do not yield visible changes in the geometry , while % changes do ; see for an example of the determination of .once we simulate the correct topology , we measure the contact angles of straight lines fitted through the interfaces that meet in the vertex. the line should be long enough to avoid grid effects ; we fit a straight line using the first first - order neighboring sites .since the simulated cells show random fluctuations , statistics are obtained by measuring the contact angles several times during the simulation , or in simulations with different random number seeds . in experimental pictures ,we measure contact angles in wildtype ommatidia by hand , aided by the program imagej .ommatidia have two axes of symmetry , and we consider the ommatidia to consist of four equal quarters , which gives us measurements for each angle ( and measurements of the angles that are intersected by the axes of symmetry ) . the variation between different wildtype ommatidia is larger than in simulations ( fig .[ fig : wildtype ] ) . in mutant ommatidia ,the error bar is even larger , so that we did not attempt at any quantitative comparison .we thank simon cox for surface evolver calculations on soap bubble clusters , christophe raufaste for discussions on the computational methods , sascha hilgenfeldt for interesting discussions , and yohanns bellache for critical reading of the manuscript .we thank t. uemura , h. oda , u. tepass , g. thomas , b. dickson , p. garrity , the bloomington drosophila stock center and the developmental studies hybridoma bank for fly strains and/or antibodies , and k. saigo for use of facilities .t.h . 
was supported by a research fellowship from the Japan Society for the Promotion of Science for Young Scientists.

Marée, A. F. M., Grieneisen, V. A., & Hogeweg, P. (2007) in _Single Cell Based Models in Biology and Medicine_, eds. Anderson, A. R. A., Chaplain, M. A. J., & Rejniak, K. A. (Birkhäuser-Verlag, Basel), pp. 107-136.

[Caption fragment, fig. [fig:wildtype]: statistical standard deviation; the straight line represents [...]. Inset left: an ommatidium stained for E-cadherin; anterior (a), posterior (p), polar (pl) and equatorial (e) cone cells. Inset right: variable tension model simulation, with cone cells (c), and primary (p), secondary (2) and tertiary (3) pigment cells. One ommatidium contains four times the angles [...] and two times [...].]

[Caption fragment, fig. [fig:tension]: b: same as (a), but with lower tension (stronger adhesion) between the polar and equatorial cone cell. c: same as (a), but with lower tension (stronger adhesion) between the primary pigment cells.]
hayashi and carthew ( nature 431 [ 2004 ] , 647 ) have shown that the packing of cone cells in the _ drosophila _ retina resembles soap bubble packing , and that changing e- and n - cadherin expression can change this packing , as well as cell shape . the analogy with bubbles suggests that cell packing is driven by surface minimization . we find that this assumption is insufficient to model the experimentally observed shapes and packing of the cells based on their cadherin expression . we then consider a model in which adhesion leads to a surface increase , balanced by cell cortex contraction . using the experimentally observed distributions of e- and n - cadherin , we simulate the packing and cell shapes in the wildtype eye . furthermore , by changing only the corresponding parameters , this model can describe the mutants with different numbers of cells , or changes in cadherin expression . ell adhesion molecules are necessary to form a coherent multicellular organism . they not only hold cells together , but differential expression of different types of these molecules plays a central role during development . members of the cadherin family are the most widespread molecules that mediate adhesion between animal cells , and their role has been demonstrated in cell sorting , migration , tumor invasibility , cell intercalation , packing of epithelial cells , axon outgrowth and many more . we here focus on the role of adhesion in the determination of epithelial cell shape . in the compound eye of _ drosophila _ , the basic unit , the ommatidium , is repeated approximately times . all ommatidia have the same cell packing , which is essential for correct vision . the ommatidium consists of four cone cells , which are surrounded by two larger primary pigment cells . these ` units ' are embedded in a hexagonal matrix , constituted by secondary and tertiary pigment cells , and bristles ( c.f . , ) . two of us showed that cadherin expression influences ommatidial cone cell packing . two cadherin types , e- and n - cadherin , are expressed in different cells : all interfaces bear e - cadherin , while n - cadherin is present only at interfaces between the four cone cells ( ) . cadherin - containing adherens junctions form a zone close to the apical cell surface , allowing the retina epithelium to be treated as a 2d tissue . in the wildtype and in _ roi_-mutant ommatidia with two to six cone cells , these cone cells assume a packing ( or topology , that is , relative positions of cells ) strikingly similar to that of a soap bubble cluster . when cadherin expression is changed in a few or all of the cells , the topology can change . more frequently , only the geometry ( individual cell shapes , contact angles at the vertices , interface lengths ) changes . the soap films between bubbles are always under a positive tension , . this surface tension describes the energy cost of a unit of interface between bubbles , and drives their packing . at equilibrium , in a 2d foam layer , soap bubbles meet by three at each vertex , since four - bubble vertices are unstable . in addition , since is constant and the same for all interfaces , bubble walls meet at equal ( _ i.e. _ 120 ) angles . more precisely , the surface energy ( or rather the perimeter energy , for a 2d foam ) is , where is the total perimeter of soap films . the foam reaches equilibrium when it minimises ( since is constant ) , balanced by another constraint fixing each bubble s area . 
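As a short aside on the equal-angle rule quoted above: at a vertex where three films of identical tension γ meet, mechanical equilibrium requires the three line tensions to cancel, which forces mutual angles of 120°; with unequal tensions the same balance gives each tension proportional to the sine of the opposite angle, which is the relation later used to infer relative tensions from measured contact angles. In notation introduced here for illustration (t̂_k are unit vectors along the three films, pointing away from the vertex, and θ_k is the angle opposite film k):

```latex
E = \gamma \, P \;(+\ \text{area constraints}),
\qquad
\gamma\left(\hat{\mathbf{t}}_{1} + \hat{\mathbf{t}}_{2} + \hat{\mathbf{t}}_{3}\right) = \mathbf{0}
\;\Rightarrow\; \text{mutual angles of } 120^{\circ};
\qquad
\frac{\gamma_{a}}{\sin\theta_{a}} = \frac{\gamma_{b}}{\sin\theta_{b}} = \frac{\gamma_{d}}{\sin\theta_{d}}
\;\text{ for unequal tensions.}
```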
it has been proposed that cells minimize their surface , like soap bubbles . since the surface mechanics of bubbles are quite simple , they can easily be described in a model . however , calculating the equilibrium shape of a cluster of more than four bubbles is difficult ; for this purpose , we use a numerical method , in order to test if cell patterning is based on surface minimization . here , the only biological ingredient is differential adhesion : an interface between two cells has a constant tension , that is lower when the adhesion is stronger . cells , however , differ greatly from bubbles , both in their membrane and internal composition . surface tension has been shown to be determined up to a large extent by the cortical cytoskeleton . adhesive cells have a tendency to increase their contact interfaces , not to minimize them . lecuit and lenne recently reviewed a large number of experiments , and show that a cell s surface tension results from the opposite actions of adhesion and cytoskeletal contraction . these are the ingredients of a second model . our approach is to find out if the observed cell packings and shapes can be described with one of these models , based on the knowledge we have from the experiments . with minimal and realistic assumptions , only the second model reproduces the topology and geometry of the wildtype and mutant ommatidia . this shows that the competition between adhesion and cell cortex tension is needed to describe this specific cell pattern . we thus confirm and refine the conclusion that surface mechanics are involved in the establishment of cell topology and geometry . adhesion plays an important role therein , but its role can only be understood when taking into account its effect on the cortical cytoskeleton .
despite their phenomenological success , the standard model of particle physics and general relativity leave unresolved a number of theoretical questions .for this reason , a considerable amount of theoretical effort is currently directed toward the search for a more fundamental theory that includes a quantum description of the gravitational field . however , experimental tests of theoretical approaches to a quantum theory of gravity face a key issue of practical nature : most quantum - gravity effects in virtually all leading candidate models are expected to be minuscule as a result of planck - scale suppression .for instance , measurements at presently attainable energies are likely to demand sensitivities at the level or better .this mini course gives an overview of one recent approach to this issue involving the violation of spacetime symmetries .due to the expected minute size of candidate quantum - gravity effects , promising experimental avenues are difficult to identify .one idea in this context is testing physical laws that obey three key criteria .the first criterion is that one should focus on fundamental laws that hold _ exactly _ in established physical theories at the fundamental level .any observed deviations from these laws would then definitely imply qualitatively new physics .second , the likelihood of measuring such deviations is increased by testing laws that are predicted to be _ violated _ in credible approaches to more fundamental physics .the third criterion is a practical one : for the potential to detect planck - suppressed effects , these laws should be amenable to _ ultrahigh - precision _ tests .one sample physics law that satisfies all of these criteria is cpt symmetry . as a brief reminder ,this fundamental law states that all physics remains invariant under the combined operations of charge conjugation ( c ) , parity inversion ( p ) , and time reversal ( t ) . here , the c transformation links particles and antiparticles , p denotes the spatial reflection of physics quantities through the coordinate origin , and t reverses a given physical process in time .the standard model of particle physics is cpt symmetric by construction , so that the first criterion is satisfied . 
in the context of criterion two , we mention that a variety of candidate fundamental theories can accomodate cpt violation .such approaches include string theory , spacetime foam , nontrivial spacetime topology , and cosmologically varying scalars .the third criterion above is met as well .consider , for example , the conventional figure of merit for cpt symmetry in the neutral - kaon system : its value lies presently at , as quoted by the particle data group .similar arguments can also be made for other spacetime symmetries , such as lorentz and translational symmetry .they are ingrained into our current understanding of physics at the fundamental level ; they can be affected in various quantum - gravity approaches because quantum gravity is likely to require a radically different `` spacetime '' concept at the planck length ; and being symmetries , ultrahigh - senstivity searches for deviations from lorentz and translational invariance can be devised .the point is that tests of discrete and continuous spacetime symmetries have become a key tool in the phenomenology of new physics that possibly arises at the planck scale .this mini course is organized as follows .section [ symmetries ] discusses the relation between various spacetime symmetries .two sample mechanisms for cpt- and lorentz - symmetry violation in lorentz - invariant underlying theories are reviewed in sec .[ mechanisms ] .the basic philosophy and ideas behind the construction of the standard - model extension ( sme ) are presented in sec .[ smesec ] .section [ tests ] comments on a number of lorentz and cpt tests in a variety of physical systems .a brief summary is contained in sec .spacetime transformations can be divided into two distinct sets , namely discrete and continuous transformations .the discrete transformations include c , p , and t discussed in the introduction , as well as various combinations of these , such as cp and cpt .examples of possible continuous transformations are translations , rotations , and boosts . if symmetry under one or more of these transformations is lost , it is natural to ask as to whether the remaining transformations continue to be symmetries , or if the violation of one type of spacetime symmetry can lead to the breakdown of other spacetime invariances. this sections contains a few remarks about this issue .we begin by considering the cpt transformation .the renowned cpt theorem , established by bell , lders , and pauli , essentially states the following : under a few mild assumptions , symmetry under cpt is a consequence of quantum theory , locality , and lorentz invariance .if deviations from cpt symmetry were observed in nature , one or more of the ingredients necessary to prove the cpt theorem must be incorrect .the question now becomes which one of the key ingredients that enter the cpt theorem should be dropped .the answer depends largely on the presumed underlying physics . but suppose the low - energy leading - order effects of new physics can be described within a local effective field theory .( effective field theory is an enormously flexible tool . in the past, it has been successfully applied in numerous contexts including condensed - matter systems , nuclear physics , and elementary - particle physics . 
)it then seems unavoidable that exact lorentz symmetry needs to be abandoned .this expectation has recently been proven rigorously in the context of axiomatic quantum field theory by greenberg .his `` anti - cpt theorem '' roughly states that in any unitary , local , relativistic point - particle field theory cpt violation comes with lorentz breakdown .however , it is important to note that the converse of this statement namely that lorentz violation implies cpt breakdown does not hold true in general . in any case, we see that in the above general and plausible context , cpt tests also probe lorentz invariance .we remark that other types of cpt violation arising from apparently non - unitary quantum mechanics have also been considered .we continue by supposing that translational invariance is violated .this possibility can arise in the context of cosmologically varying scalar fields ( see next section ) . when translational symmetry is lost , the generator of translations ( i.e. , the energy momentum tensor ) is typically no longer a conserved current .we now turn to the question as to whether lorentz symmetry would be affected in such a scenario .we begin by looking at the generator for lorentz transformations , the angular - momentum tensor , which is given by note that this definition contains the energy momentum tensor , which is not conserved in the present context . as a result, will generally exhibit a nontrivial dependence on time , so that the ordinary time - independent lorentz - transformation generators no longer exist .for this reason , exact lorentz symmetry is not guaranteed .it is apparent that ( with the exception of special cases ) translation - symmetry breaking leads to lorentz - invariance violation .in the previous section , we have argued that under certain circumstances the breakdown of one spacetime invariance may lead to the violation of another spacetime symmetry .perhaps a more interesting question is how a translation- , lorentz- , and cpt - invariant underlying model can lead to the breakdown of a particular spacetime symmetry in the first place .the present section addresses this issue by giving some intuition regarding mechanisms for spacetime - symmetry violation in candidate fundamental theories . of the various possibilities for lorentz breakdown mentioned in the introduction, we will focus on spontaneous lorentz and cpt violation as well as lorentz and cpt breaking through cosmologically varying scalars .* spontaneous lorentz and cpt breakdown . * the mechanism of spontaneous symmetry breaking is well established in many areas of physics including the physics of elastic media , condensed - matter physics , and elementary - particle theory . from a theoretical perspective , this mechanism is very appealing for the following reason .often , a symmetry is needed for the internal consistency of a quantum field theory , but the symmetry is not observed in nature .this is exactly what spontaneous symmetry breaking achieves : at the dynamical level , the symmetry remains intact and ensures consistency .it is only the ground - state solution ( which pertains to experiments ) that is associated with the loss of the symmetry . 
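For reference, the construction behind the angular-momentum argument above can be written out with standard field-theory definitions (notation supplied here, since the original equations did not survive extraction; only the orbital part is shown):

```latex
M^{\lambda\mu\nu} = x^{\mu}\, T^{\lambda\nu} - x^{\nu}\, T^{\lambda\mu},
\qquad
J^{\mu\nu}(t) = \int d^{3}x \; M^{0\mu\nu}(x) .
```

If translation invariance is broken, then in general the energy momentum tensor is not conserved, so J^{μν} acquires a nontrivial time dependence and the usual time-independent boost and rotation generators are no longer available, as stated above.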
in order to gain intuition about the spontaneous breaking of lorentz and cpt symmetry, we will look at three sample physical systems whose features will lead us step by step to a better understanding of the effect .these three examples are illustrated in fig .[ fig3 ] .let us first consider classical electrodynamics .any electromagnetic - field configuration possesses an energy density , which is determined by here , natural units have been implemented , and and denote the electric and magnetic field , respectively . equation ( [ max_en_den ] ) yields the field energy of any given solution of the usual maxwell equations .notice that if the electric field , or the magnetic field , or both are different from zero in some region of spacetime , then the energy stored in these fields will be strictly positive .the field energy only vanishes when both and are zero throughout spacetime .the vacuum is usually identified with the ground state , which is the lowest - energy configuration of a system .it is thus apparent that in conventional electrodynamics the configuration with the lowest energy is the field - free one , so that the maxwell vacuum is empty ( disregarding lorentz- and cpt - symmetric zero - point quantum fluctuations ) . let us next consider a higgs - type field .such a field is contained in the phenomenologically very successful standard model of particle physics . as opposed to the electromagnetic field , which is a vector ,the higgs field is a scalar . in our example, we will simplify various aspects of the model without distorting the features relevant in the present context . in the case for a constant higgs - type field ,the expression for the energy density of is given by where and are constants .( a possible spacetime dependence would lead to additional , positive - valued contributions to the energy density , so we can indeed focus on constant . )paralleling the electrodynamics case described above , the lowest possible energy of is zero .however , in contrast to the maxwell example this lowest - energy configuration _ requires _ to be nonzero : . as a result ,the physical low - energy vacuum for a system involving a higgs - type field is not empty ; it contains , in fact , the spacetime - constant scalar field . here, the quantity denotes the vacuum expectation value ( vev ) of .we remark in passing that a key physical effects of the vev of the standard - model higgs is to generate masses for many elementary particles .it is important to note that is a scalar , and therefore it does _ not _ select a preferred direction in spacetime . and fields .the vacuum stays essentially free of fields . for a higgs - type field ( b ) ,interactions generate an energy density that requires a non - vanishing value of in the ground state .the vacuum contains a scalar condensate depicted in gray .lorentz and cpt symmetry still hold ( but other , internal symmetries may be broken ) .vector fields present in string theory ( c ) , for example , can possess interactions similar to those of the higgs demanding a nonzero field value in the lowest - energy state .the vev of a vector field would select a preferred direction in the vacuum , which breaks lorentz and possibly cpt invariance . ]our final example is a 3-vector field .( the relativistic generalization to 4-vectors or 4-tensors is straightforward . 
)this field ( or its generalization ) is not contained in the standard model , and there is no observational evidence for such a field at the present time .however , additional vector fields like are present in numerous candidate fundamental theories . paralleling the higgs case , we take the expression for the energy density of to be it is apparent that the lowest possible energy is zero , just as in the previous two examples involving electromagnetism and a higgs - type scalar . as for the higgs ,this lowest - energy configuration necessitates a nonzero . in particular, we must require that , where is any constant vector obeying .as in the higgs case , the vacuum does not stay empty ; it rather contains the vev of the vector field , . since we have only taken spacetime - independent solutions into consideration , is also constant .( a possible dependence would lead to positive definite derivative terms in eq .( [ vec_en_den ] ) raising the energy density , as in the other two of the above examples . )hence , the true vacuum in the above model exhibits an intrinsic direction given by .the upshot is that such an intrinsic direction violates rotation invariance and thus lorentz symmetry .we note that interactions generating energy densities like those in eq .( [ vec_en_den ] ) are absent in conventional renormalizable gauge theories , but they can be found in the context of string field theory , for instance . *spacetime - varying scalars . * a spacetime - dependent scalar , regardless of the mechanism causing this dependence , typically leads to the breakdown of spacetime - translation invariance . in sec .[ symmetries ] , we have argued that translations and lorentz transformations are closely linked in the poincar group , so that translation - symmetry violation typically leads to lorentz breakdown . in the remainder of this section, we will focus on an explicit example for this effect .consider a system with a spacetime - dependent coupling and scalar fields and , and take the lagrangian to contain a kinetic - type term . 
under mild assumptions , one may integrate by parts the action for this system ( for instance with respect to the first partial derivative in the above term ) without modifying the equations of motion .an equivalent lagrangian would then be given by where is an external prescribed 4-vector .this 4-vector clearly selects a preferred direction in spacetime breaking lorentz invariance .we remark that for variations of on cosmological scales , is spacetime constant locally ( say on solar - system scales ) to an excellent approximation .the breakdown of lorentz symmetry in the presence of a varying scalar can be understood intuitively as follows .the 4-gradient of the scalar has to be nonzero in some spacetime regions , for otherwise the scalar would be constant .this 4-gradient then singles out a preferred direction in such regions , as is illustrated in fig .consider , for example , a particle that possesses certain interactions with the scalar .its propagation properties might be affected differently in the directions perpendicular and parallel to the gradient .but physically inequivalent directions are associated with rotation - symmetry breaking .since rotations are contained in the lorentz group , lorentz invariance must be violated .in order to establish the low - energy phenomenology of lorentz and cpt breaking and to identify relevant experimental signals for these effects , a suitable test framework is desirable .a number of lorentz - symmetry tests are motivated and analyzed in purely kinematical models that describe small deviations from lorentz invariance , such as robertson s framework and its mansouri sexl extension , the model , and phenomenologically constructed modified one - particle dispersion relations . however , the cpt properties of these test models lack clarity , and the absence of dynamical features greatly restricts their scope . to circumvent these issues ,the sme , already mentioned in the introduction , has been developed .the present section contains a brief review of the philosophy behind the construction of the sme .let us first argue in favor of a dynamical rather than a purely kinematical test model .when the kinematical rules are fixed , there is certainly some residual freedom in introducing corresponding dynamical features .however , the dynamics is constrained by the requirement that established physics must be recovered in certain limits . moreover, it seems complicated and may not even be possible to construct an effective theory that contains the standard model with dynamics considerably different from that of the sme .we also mention that kinematical investigations are limited to only a subset of potential lorentz - violation signals emerging from fundamental physics . from this point of view, it appears to be desirable to implement explicitly dynamical features of sufficient generality into test frameworks for lorentz and cpt invariance . * the generality of the sme . * in order to recognize the generality of the sme , we review the main ingredients of its construction . 
starting from the conventional standard - model and general - relativity lagrangians and , respectively , lorentz - violating corrections added : here , denotes the sme lagrangian .the correction terms are formed by contracting standard - model and gravitational fields of any mass dimensionality with lorentz - breaking tensorial coefficients that describe a nontrivial vacuum with background vectors or tensors .this background is presumed to originate from effects in the underlying theory , such as those discussed in the previous section . to ensure coordinate independence, these contractions must yield coordinate lorentz scalars .we remark that in a curved - background context involving gravity , this procedure is most easily implemented employing the vierbein .it thus becomes clear that all possible contributions to determine the most general effective dynamical description of first - order lorentz violation at the level of observer lorentz - invariant unitary effective field theory .other potential features of underlying physics , such as non - pointlike elementary particles or a discrete spacetime structure at the planck length , are not likely to invalidate this effective - field - theory approach at presently attainable energies . on the contrary ,the phenomenologically successful standard model and general relativity are widely believed to be effective - field - theory limits of more fundamental physics .if underlying physics indeed leads to minuscule lorentz - breaking effects , it would appear somewhat artificial to consider low - energy effective models outside the framework of effective quantum field theory .we finally note that the requirement for a low - energy description beyond effective field theory is also unlikely to arise within the context of underlying physics with novel lorentz-_symmetric _ features , such as additional particles , new symmetries , or large extra dimensions .note in particular that lorentz - invariant modifications can therefore easily be implemented into the sme , should it become necessary .* advantages of the sme . *the sme allows the identification and direct comparison of virtually all presently feasible experiments that search for deviations from lorentz and cpt symmetry . moreover ,certain limits of the sme correspond to classical kinematics test models of relativity theory ( such as the previously mentioned framework by robertson , its mansouri sexl extension to arbitrary clock synchronizations , or the model ) . a further benefit of the sme is the possibility of implementing additional desirable features besides coordinate independence .for example , one can choose to impose spacetime - translation invariance ( at least in the flat - spacetime limit ) , su(3)(2)(1 ) gauge invariance , power - counting renormalizability , hermiticity , and local interactions .these requirements place additional constraints on the parameter space for lorentz and cpt breakdown .another possibility is to make simplifying choices , such as a residual rotational symmetry in certain inertial frames .this latter hypothesis together with additional simplifications of the sme has been adopted in some investigations .the full sme contains an infinite number of lorentz- and cpt - violating coefficients .however , in an effective field theory one might generically expect the power - counting renormalizable operators to dominate at low energies .the restriction to this subset of the sme is called the minimal standard - model extension ( msme ) . 
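the construction just outlined can be summarized schematically as follows ; the single correction term displayed ( a cpt - odd coupling of a background vector b_μ to the axial fermion current ) is only one representative msme operator , written here for illustration .

```latex
% schematic structure of the sme (one representative correction term shown)
\mathcal{L}_{\rm SME} \;=\; \mathcal{L}_{\rm SM} \;+\; \mathcal{L}_{\rm GR} \;+\; \delta\mathcal{L}\,,
\qquad
\delta\mathcal{L} \;\supset\; -\,b_{\mu}\,\bar{\psi}\,\gamma_{5}\gamma^{\mu}\psi \;+\;\ldots
```

here b_μ is a prescribed , non - dynamical background vector assumed to be generated in the underlying theory ; every term in δL contracts standard - model or gravitational field operators with such lorentz - breaking coefficients so that the result is a coordinate scalar .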
to date , the flat - spacetime limit of the msme has been the basis for numerous phenomenological investigations of lorentz and cpt violation in many physical systems including mesons , baryons , electrons , photons , muons , and the higgs sector .studies involving the curved - spacetime sector of the msme have recently also been performed .we note that neutrino - oscillation measurements harbor the potential for discovery .this section contains a brief description of a representative sample of experimental efforts . * tests involving particle collisions .* one of the key predictions of the msme is that one - particle dispersion relations are typically modified by lorentz and cpt violation .these modifications would result in changes to the kinematics of particle - collision processes .( energy momentum remains conserved in the context of the flat - spacetime msme because the lorentz- and cpt - violating coefficients are taken as spacetime constant . ) for example , reaction thresholds may be shifted , reactions kinematically forbidden in lorentz - symmetric physics may now occur , and certain conventional reactions may no longer be allowed kinematically .consider , for example , the spontaneous emission of a photon from a free electron . in conventional physics , energy momentum conservation does not allow this process to occur .however , certain types of lorentz and cpt breakdown can slow down light relative to the speed of electrons .in analogy to the usual cherenkov effect ( when light travels slower inside a macroscopic medium with refractive index ) , electrons can then emit cherenkov photons in such a lorentz- and cpt - violating vacuum . this `` vacuum cherenkov radiation '' may or may not be a threshold effect depending on the type of msme coefficient . let us consider msme coefficients that are associated with a threshold for the vacuum cherenkov effect .in such a situation , we obtain an observational constraint on the size of these msme coefficients as follows .electrons traveling with a speed above the modified light speed can not do so for long : they would slow down below threshold through the emission of vacuum cherenkov radiation .it follows that if highly energetic stable electrons exist in nature , they must be below threshold . from this informationone can extract a lower bound for the threshold , which in turn gives a constraint on lorentz breaking . employing lep electrons with energies up to in thiscontext yields the limit .next , consider photon decay in vacuum .this is another particle reaction process not allowed by energy momentum conservation in ordinary physics .however , in the presence of certain msme coefficients , light may travel faster than the maximal attainable speed of electrons . in analogy to the above vacuum - cherenkov case , in which high - energy electrons become unstable, we expect that high - energy photons can now decay into an electron positron pair . 
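as a rough numerical illustration of the threshold reasoning above , the following python sketch estimates the kind of bound obtained from observing stable high - energy electrons ; the toy isotropic dispersion modification and the electron energy used here are assumptions for illustration only and are not the coefficient or the energy value of the analyses cited in the text .

```python
# illustrative vacuum-cherenkov threshold estimate (toy isotropic model).
# assumption: the photon phase speed is reduced by a small fraction `delta`
# relative to the maximal electron speed; for delta << 1 the cherenkov
# threshold is then approximately E_th ~ m_e / sqrt(2*delta).
import math

m_e = 0.511e-3   # electron mass in GeV
E_obs = 100.0    # assumed energy of observed stable electrons, GeV (illustrative)

# stability of electrons at E_obs requires E_obs < E_th, i.e.
# delta < m_e**2 / (2 * E_obs**2)
delta_bound = m_e**2 / (2.0 * E_obs**2)
print(f"illustrative bound: delta < {delta_bound:.1e}")

def threshold_energy(delta):
    """cherenkov threshold energy for a given speed defect delta (toy model)."""
    return m_e / math.sqrt(2.0 * delta)

print(f"delta = 1e-11 would give E_th ~ {threshold_energy(1e-11):.0f} GeV")
```

the same kind of threshold estimate , with the roles of photon and electron exchanged , underlies the photon - decay bound discussed next .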
with the modified dispersion relations predicted by the msme, one can indeed verify that this expectation is met .as for vacuum cherenkov radiation , photon decay in a lorentz - violating vacuum is often a threshold effect and can then be employed to extract an observational limit on this particular type of lorentz breakdown .the idea is the following .if stable photons are observed , they must essentially be below the decay threshold .it then follows that the threshold energy must be higher than the energy of these stable photons .this constraint on the threshold energy results in a limit on the size of the corresponding type of lorentz violation . at the tevatron ,stable photons with energies up to were observed . in this situation , our reasoning gives the bound .we note that the above results assume that both vacuum cherenkov radiation and photon decay are efficient enough .the purely kinematical arguments we have presented are insufficient for conservative observational limits .this is consistent with our remarks in the previous section that a dynamical framework is desirable , and the full msme ( not only the predicted modified dispersion relations ) are needed .appropriate calculations within the msme indeed establish that the rates for vacuum cherenkov radiation and photon decay would be fast enough to validate the above reasoning .* spectropolarimetry of cosmological sources . *the pure electrodynamics sector of the msme contains one type of coefficient that violates both lorentz and cpt invariance .it is a mass dimension three term of chern simons type parametrized by the background 4-vector . among other effects ,the term results in birefringence for photons , the vacuum cherenkov effect , as well as shifts in cavity frequencies .these deviations from established physics are accessible to experimental investigations .birefringence searches in cosmic radiation are particularly well suited since the extremely long propagation time is directly associated with an ultrahigh sensitivity to this type of lorentz and cpt breakdown .spectropolarimetric studies of experimental data from cosmological sources have established a limit on at the level of .* investigations of cold antihydrogen . *a comparison of the spectra of hydrogen ( h ) and antihydrogen ( ) is well suited for lorentz- and cpt - violation searches .there are a number of transitions that one can consider .one of them , the unmixed 1s2s transition , seems to be an exquisite candidate : its projected experimental sensitivity is anticipated to be roughly at the level of , which is auspicious in light of the expected planck - scale suppression of quantum - gravity effects .however , a leading - order calculation within the msme predicts identical shifts for free h or in the initial and final levels with respect to the conventional energy states . from this point of view , the 1s2s transition is actually less satisfactory for the determination of unsuppressed lorentz- and cpt - breaking effects . within the msme , the leading non - trivial contribution to this transitionis generated by relativistic corrections , and it comes with two additional powers of the fine - structure constant . the predicted shift in the transition frequency , already expected to be minute at zeroth order in , is thus associated with a further suppression factor of more than ten thousand . an additional transition that can be used for lorentz and cpt tests is the spin - mixed 1s2s transition . when h or is trapped with electromagnetic fields ( e.g. 
, in a ioffe pritchard trap ) the 1s and the 2s states are each split as a result of the usual zeeman effect .the msme then predicts that in this case the 1s2s transition between the spin - mixed states is indeed affected by lorentz- and cpt - violating coefficients at leading order .a disadvantage from a practical perspective is the magentic - field dependence of this transition , so that the experimental sensitivity is limited by the size of the inhomogeneity of the field in the trap .the development of new experimental techniques might avoid this issue , and a frequency resolutions close to the natural linewidth could then be attained . a third transition interesting from a lorentz- and cpt - violation perspective is the hyperfine zeeman transitions within the 1s state itself . even in the limit of a zero field ,the msme establishes leading - order level shifts for two of the transitions between the zeeman - split states .we remark that this result may also be advantageous from an experimental perspective because a number of other transitions of this type , such as the conventional h - maser line , can be well resolved in the laboratory . * tests in penning traps .* the msme establishes not only that atomic energy levels can be affected by the presence of lorentz and cpt breakdown , but also , for instance , the levels of protons and antiprotons inside a penning trap .a perturbative calculation predicts that only one msme coefficient ( a cpt - violating -type background vector , which is coupled to the chiral current of a fermion ) affects the transition - frequency shifts between the proton and its antiparticle at leading order . to be more specific ,the anomaly frequencies are displaced in opposite directions for protons and antiprotons .this effect can be used to determine a clean experimental constraint on the proton s coefficient .* neutral - meson interferometry . *a long established standard cpt - symmetry test is the comparison of the k - meson s mass to that of the corresponding antimeson : even extremely small mass differences would yield measurable effects in kaon - interferometry experiments . in spite of the fact that the msme contains only one mass operator for a given quark species and the associated antiquark species , these ( anti)particles are nevertheless influenced differently by the lorentz- and cpt - breaking background in the msme .this causes the dispersion relations for a meson and its antimeson to differ , so that mesons and antimesons can possess distinct energies despite having equal 3-momenta .it is this split in energy that is ultimately measurable in interferometric experiments , and it is thus potentially observable in such systems .we remark that not only the k - meson but also other neutral mesons can be investigated .note in particular that in addition to cpt breakdown , lorentz violation is involved as well , so that boost- and rotation - dependent effects can be looked for .to date , no credible observational evidence for deviations from relativity theory exist .however , in theoretical approaches to underlying physics , such as in models of the planck - length structure of spacetime , minuscule violations of lorentz and translation symmetry can be accommodated . 
in this mini course, we have given an overview of the motivations , theoretical ideas , and experimental efforts in this spacetime - symmetry - breaking context .we have argued that quantum - gravity models , for example , should describe a quantized version of the dynamics of spacetime .in such a quantized spacetime , the concept of a smooth manifold may break down at some small distance scale , so that the usual spacetime symmetries may only emerge at low energies .we have reviewed two specific examples how lorentz invariance might be violated : in string field theory ( i.e. , via spontaneous symmetry breaking ) or in the context of varying scalars ( i.e. , via the gradient of the scalar ) . at presently attainable energies and under mild assumptions regarding the dynamics ,the effects of general lorentz and cpt breakdown are described by an effective field theory called the sme .this framework contains essentially the entire body of established physics ( i.e. , the standard model and general relativity ) , so that predictions for lorentz- and cpt - breaking effects in essentially all physical systems are possible , at least in principle .the coefficients for lorentz and cpt violation in the sme are prescribed non - dynamical background vectors and tensors assumed to be generated by more fundamental physics .spacetime symmetries underpin numerous physical effects .accordingly , lorentz and cpt invariance can be tested in a wide variety of physical systems .this fact , together with the generality of the sme and the strong motivations for lorentz and cpt violations , has led to a recent surge of experimental efforts to test relativity theory .we have reviewed a representative sample of these efforts in the contexts of dispersion - relation studies , astrophysical polarimetry , and matter antimatter comparisons .a variety of important unanswered questions remain in this field .they are of theoretical , of phenomenological , as well as of experimental nature , and they provide ample ground for further research in spacetime - symmetry physics .the author wishes to thank the organizers for the invitation to present this mini course .this work was funded in part by conacyt under grant no .55310 .kosteleck and s. samuel , phys .d * 39 * , 683 ( 1989 ) ; phys .lett . * 63 * , 224 ( 1989 ) ; * 66 * , 1811 ( 1991 ) ; v.a . kosteleck and r. potting , nucl. phys .b * 359 * , 545 ( 1991 ) ; phys .b * 381 * , 89 ( 1996 ) ; phys .d * 63 * , 046007 ( 2001 ) ; v.a .kosteleck , phys .lett . * 84 * , 4541 ( 2000 ) .j. alfaro , h.a .morales - tcotl , and l.f .urrutia , phys .lett . * 84 * , 2318 ( 2000 ) ; phys . rev .d * 65 * , 103509 ( 2002 ) ; d. sudarsky , phys .d * 68 * , 024010 ( 2003 ) .klinkhamer , nucl .b * 578 * , 277 ( 2000 ) .kosteleck , phys .d * 68 * , 123511 ( 2003 ) ; o. bertolami , phys .d * 69 * , 083513 ( 2004 ) . c. amsler _ et al . _[ particle data group ] , phys .b * 667 * , 1 ( 2008 ) .greenberg , phys .lett . * 89 * , 231602 ( 2002 ) . for a somewhat more pedestrian exposition ,see o.w .greenberg , found .phys . * 36 * , 1535 ( 2006 ) .see , e.g. , n.e .mavromatos , a. meregaglia , a. rubbia , a. sakharov , and s. sarkar , phys .d * 77 * , 053014 ( 2008 ) .d. colladay and v.a .kosteleck , phys .d * 55 * , 6760 ( 1997 ) ; * 58 * , 116002 ( 1998 ) ; v.a .kosteleck and r. lehnert , phys .d * 63 * , 065008 ( 2001 ) ; v.a .kosteleck , phys .d * 69 * , 105009 ( 2004 ) ; r. bluhm and v.a .kosteleck , phys .d * 71 * , 065008 ( 2005 ) .kosteleck and m. mewes , phys . 
rev .d * 80 * , 015020 ( 2009 ) .berger and v.a .kosteleck , phys .d * 65 * , 091701(r ) ( 2002 ) ; h. belich , phys .d * 68 * , 065030 ( 2003 ) ; m.s .berger , phys .d * 68 * , 115005 ( 2003 ) .kosteleck and m. mewes , phys .d * 66 * , 056005 ( 2002 ) .s. coleman and s.l .glashow , phys .d * 59 * , 116008 ( 1999 ) .ktev collaboration , h. nguyen , arxiv : hep - ex/0112046 ; y.b .hsiung _ et al ._ , nucl .suppl . * 86 * , 312 ( 2000 ) .focus collaboration , j.m .link , phys .b * 556 * , 7 ( 2003 ) .opal collaboration , r. ackerstaff _ et al ._ , z. phys .c * 76 * , 401 ( 1997 ) ; delphi collaboration , m. feindt _ et al ._ , preprint delphi 97 - 98 conf 80 ( 1997 ) ; belle collaboration , k. abe _ et al ._ , phys .lett . * 86 * , 3228 ( 2001 ) ; babar collaboration , b. aubert , phys .92 * , 181801 ( 2004 ) .babar collaboration , b. aubert , arxiv : hep - ex/0607103 ; g. amelino - camelia _ et al ._ , eur .j. c * 68 * , 619 ( 2010 ) .kosteleck and r. potting , phys .d * 51 * , 3923 ( 1995 ) .d. colladay and v.a .kosteleck , phys .b * 344 * , 259 ( 1995 ) ; phys .d * 52 * , 6224 ( 1995 ) ; v.a .kosteleck and r. van kooten , phys .d * 54 * , 5585 ( 1996 ) ; o. bertolami , phys . lett .b * 395 * , 178 ( 1997 ) ; n. isgur , phys .b * 515 * , 333 ( 2001 ) .kosteleck , phys .* 80 * , 1818 ( 1998 ) ; phys .d * 61 * , 016002 ( 2000 ) ; phys .d * 64 * , 076001 ( 2001 ) .d. bear , phys .lett . * 85 * , 5038 ( 2000 ) ; d.f .phillips , phys .a * 62 * , 063405 ( 2000 ) ; phys .d * 63 * , 111101 ( 2001 ) ; m.a .humphrey _ et al ._ , phys .a * 68 * , 063807 ( 2003 ) ; v.a .kosteleck and c.d .lane , phys .d * 60 * , 116010 ( 1999 ) ; j. math .* 40 * , 6245 ( 1999 ) ; i. altarev _ et al ._ , phys . rev. lett . * 103 * , 081602 ( 2009 ) ; europhys .lett . * 92 * , 51001 ( 2010 ) .r. bluhm , phys .lett . * 88 * , 090801 ( 2002 ) ; phys . rev .d * 68 * , 125008 ( 2003 ) ; r. lehnert , phys .d * 68 * , 085003 ( 2003 ) .f. can , phys .rev . lett .* 93 * , 230801 ( 2004 ) ; p. wolf , phys .lett . * 96 * , 060801 ( 2006 ) . h. dehmelt , phys .lett . * 83 * , 4694 ( 1999 ) ; r. mittleman , phys .83 * , 2116 ( 1999 ) ; g. gabrielse , phys .lett . * 82 * , 3198 ( 1999 ) ; phys .lett . * 79 * , 1432 ( 1997 ) ; phys .d * 57 * , 3932 ( 1998 ) ; c.d .lane , phys .d * 72 * , 016005 ( 2005 ) .hou , phys .lett . * 90 * , 201101 ( 2003 ) ; r. bluhm and v.a .kosteleck , phys .* 84 * , 1381 ( 2000 ) ; b.r .heckel , phys .lett . * 97 * , 021603 ( 2006 ) .h. mller , phys .d * 68 * , 116006 ( 2003 ) ; h. mller , phys .d * 71 * , 045004 ( 2005 ) ; b. altschul , phys .* 96 * , 201101 ( 2006 ) ; phys .d * 80 * , 091901 ( 2009 ) phys .d * 81 * , 041701 ( 2010 ) .kosteleck and m. mewes , phys .lett . * 87 * , 251304 ( 2001 ) ; j. lipa , phys . rev . lett .* 90 * , 060403 ( 2003 ) ; q. bailey and v.a .kosteleck , phys .d * 70 * , 076006 ( 2004 ) ; b. feng , phys .lett . * 96 * , 221302 ( 2006 ) ; v.a .kosteleck and m. mewes , phys .lett . * 97 * , 140401 ( 2006 ) ; b. altschul , phys .* 98 * , 041603 ( 2007 ) ; c.d .carone , m. sher , and m. vanderhaeghen , phys .d * 74 * , 077901 ( 2006 ) : v.w .hughes , phys .lett . * 87 * , 111804 ( 2001 ) ; g.w .bennett _ et al ._ [ muon ( g-2 ) collaboration ] , phys . rev .lett . * 100 * , 091602 ( 2008 ) ; r. bluhm , phys .* 84 * , 1098 ( 2000 ) .anderson , phys .d * 70 * , 016001 ( 2004 ) .battat , j.f .chandler , and c.w .stubbs , phys .lett . * 99 * , 241103 ( 2007 ) ; h. mller , phys .* 100 * , 031101 ( 2008 ) ; k.y .chung , phys .d * 80 * , 016002 ( 2009 ) ; q.g . 
bailey and v.a .kosteleck , phys .d * 74 * , 045001 ( 2006 ) ; v.a .kosteleck , n. russell , and j. tasson , phys .lett . * 100 * , 111102 ( 2008 ) ; q.g .bailey , phys .d * 80 * , 044004 ( 2009 ) ; v.a .kosteleck and j. tasson , phys . rev .lett . * 102 * , 010402 ( 2009 ) .barger , phys .* 85 * , 5055 ( 2000 ) ; j.n .bahcall , phys .b * 534 * , 120 ( 2002 ) ; v.a .kosteleck and m. mewes , phys .d * 70 * , 031902 ( 2004 ) ; phys .d * 70 * , 076002 ( 2004 ) ; t. katori , v.a .kosteleck , and r. tayloe , phys .d * 74 * , 105009 ( 2006 ) ; j.s .daz , v.a .kosteleck , and m. mewes , phys .d * 80 * , 076007 ( 2009 ) ; s. hollenberg , o. micu , and h. ps , phys .d * 80 * , 053010 ( 2009 ) ; j.s .daz and a. kosteleck , arxiv:1012.5985 [ hep - ph ] .kosteleck and m. mewes , phys .d * 69 * , 016005 ( 2004 ) .r. lehnert , j. math .* 45 * , 3399 ( 2004 ) ; b. altschul and v.a .kosteleck , phys .b * 628 * , 106 ( 2005 ) ; j. alfaro , phys .b * 639 * , 586 ( 2006 ) ; r. lehnert , phys . rev .d * 74 * , 125001 ( 2006 ) ; a.j .hariton and r. lehnert , phys .a * 367 * , 11 ( 2007 ) ; r. lehnert , rev .. fs . * 56 ( 6 ) * , 469 ( 2010 ) ; c.m .reyes , l.f .urrutia , and j.d .vergara , phys .d * 78 * , 125011 ( 2008 ) ; phys .b * 675 * , 336 ( 2009 ) ; c. armendariz - picon and a. diez - tejedor , jcap * 0912 * , 018 ( 2009 ) ; c.m .reyes , phys .d * 80 * , 105008 ( 2009 ) ; phys .d * 82 * , 125036 ( 2010 ) ; j. alfaro , int .j. mod .a * 25 * , 3271 ( 2010 ) ; j. alfaro and l.f .urrutia , phys .d * 81 * , 025007 ( 2010 ) ; b. altschul , q.g .bailey , and v.a .kosteleck , phys .d * 81 * , 065028 ( 2010 ) ; c. armendariz - picon , jhep * 1010 * , 079 ( 2010 ) ; m. cambiaso and l.f .urrutia , phys .d * 82 * , 101502 ( 2010 ) ; r. casana , phys .d * 80 * , 125040 ( 2009 ) ; phys .d * 82 * , 125006 ( 2010 ) .r. lehnert and r. potting , phys .* 93 * , 110402 ( 2004 ) ; phys .d * 70 * , 125010 ( 2004 ) ; k.g .zloshchastiev , arxiv:1003.0657 [ hep - th ] .hohensee , phys .lett . * 102 * , 170402 ( 2009 ) ; phys .d * 80 * , 036010 ( 2009 ) ; see also j .-bocquet _ et al . _[ graal collaboration ] , phys .* 104 * , 241601 ( 2010 ) .klinkhamer and m. schreck , phys .d * 78 * , 085026 ( 2008 ) .carroll , g.b . field , and r. jackiw , phys .d * 41 * , 1231 ( 1990 ) .m. mewes , phys .d * 78 * , 096008 ( 2008 ) .r. bluhm , v.a .kosteleck , and n. russell , phys .lett . * 82 * , 2254 ( 1999 ) .
one of the most difficult questions in present - day physics concerns a fundamental theory of space , time , and matter that incorporates a consistent quantum description of gravity . there are various theoretical approaches to such a quantum - gravity theory . nevertheless , experimental progress is hampered in this research field because many models predict deviations from established physics that are suppressed by some power of the planck scale , which currently appears to be immeasurably small . however , tests of relativity theory provide one promising avenue to overcome this phenomenological obstacle : many models for underlying physics can accommodate a small breakdown of lorentz symmetry , and numerous feasible lorentz - symmetry tests have planck reach . such mild violations of einstein s relativity have therefore become a focus of recent research efforts . this presentation provides a brief survey of the key ideas in this research field and is geared at both experimentalists and theorists . in particular , several theoretical mechanisms leading to deviations from relativity theory are presented ; the standard theoretical framework for relativity violations at currently accessible energy scales ( i.e. , the sme ) is reviewed , and various present and near - future experimental efforts within this field are discussed .
and frequency regulation of microgrid are essential in both grid connected and islanded modes . in gridconnected mode , frequency is dictated by the main grid while the voltage within the microgrid can be regulated based on its reactive power generation and consumption . when the microgrid disconnects from the main grid in response to , say , upstream disturbance or voltage fluctuation and goes to islanding mode , both voltage and frequency at all locations in the microgrid have to be regulated to nominal values .typical microgrid control hierarchy includes ( 1 ) primary control for real and reactive power sharing between distributed generators ( dgs ) ; and ( 2 ) secondary control to maintain load voltage and frequency close to nominal values via respective references communicated to each dg davoudi from a controller .using centralized control requires complex and expensive communication infrastructure and is subject to failure in the central controller and communication links . to overcome these limitations , distributed cooperative control can be utilized which employs a sparse network .the application of cooperative control for dg operation in power systems in grid connected mode has been studied in among others .the technique of providing the reference values to only a fraction of the dgs in the network , known as _ pinning _ , has been studied in . in , _ pinning control _ in microgrid islanding mode using energy storage system as master unit was recently studied . in ,bidram _ et al ._ employed multi - agent distributed cooperative secondary voltage control of the microgrid . in the aforementioned studies ,the selection of the pinned ( or leader ) dg(s ) has been assumed to be arbitrary .however , as has been shown in manaffam13a , delellis13,chen07 , the performance and robustness of the network is directly related to the choice of the _ pinning set , _ _i.e. , _ the set of pinned or leader dg(s ) .recent results obtained by the authors in obtain tight upper and lower bounds on the algebraic connectivity of the network to the reference signal . by taking advantage of these novel results , the work in this paper proffers multiple novel contributions .we first formulate the problem of single and multiple pinning of multi - agent distributed cooperative control in microgrids .next , the effect of proper selection of pinning dgs is discussed .then , we propose two implementable pinning node(s ) selection algorithms based on degree and distance of the candidate leader(s ) from the rest of the microgrid .several cases of power system topologies and communication networks as well as different scenarios are numerically simulated to show that intelligent selection of the pinning node(s ) results in better transient performance of the dgs terminal profiles . the remainder of the paper is organized as follows .section ii reviews the microgrid primary and secondary control schemes for regulating voltage and frequency .the intelligent single and multiple pinning problems are formulated in section iii followed in section iv by a presentation of the algorithms for solving these problems .section v illustrates the performance of the proposed algorithms via extensive numerical simulation studies while section vi concludes the paper .in this section , we briefly describe the basic system model consisting of the inverter based dg , the primary controller for the dg , and the distributed cooperative secondary controller . 
for more details ,the reader is referred to the development in and the references therein .the inverter based dg model consists of a three legged inverter bridge connected to a dc voltage source .the dc bus dynamics and switching process of the inverter can be neglected due to the assumptions of an ideal dc source and realization of high switching frequency of the bridge , respectively . the output frequency and voltage magnitude of each dgare set in accordance with the droop controllers . the output voltage magnitude and frequency control of the inverter itselfis typically implemented with internal current controllers in the standard rotating reference frame with the d - axis aligned to the output voltage vector , _ e.g. , _ see . from the droop control point of view, the inverter can be assumed to be ideal in the sense that its output voltage magnitude and frequency are regulated to the desired values of and , respectively , within the time frame of interest .thus , and are the desired angular frequency and voltage amplitude of the dg , respectively ; and are the active and reactive power outputs of the dg ; and are reference angular frequency and voltage set points determined by the secondary control , respectively ; and and are droop coefficients for real and reactive power . here , and are generated by passing the instantaneous real and reactive power outputs of the inverter through first order low pass filter with corner frequency to eliminate undesired harmonics the goal in secondary voltage control is to determine the primary control reference voltage inside each dg so as to minimize the deviation of the concerned dg s output voltage from the constant nominal reference voltage .mathematically , this is accomplished by the following set of equations is an auxiliary control input given by is the voltage regulation error for dg , is a control gain , while denote the total number of dgs in the network . here , ] and ^{t} ] is the network s pinning gain matrix , ) ] is the laplacian of the adjacency matrix defined as ( [ eq : v_error ] ) and ( [ eq : f_error ] ) , it is clear that pinning choices and are critical to the performance of the secondary controller .as is well known from linear time - invariant system theory , the performance and robustness of the systems in ( [ eq : v_error ] ) and ( [ eq : f_error ] ) are directly dependent on the eigenvalues of .therefore , choosing which dg(s ) to provide the reference values to , _i.e. _ , , has an enormous impact on the performance of the microgrid .the first problem ( pinning problem 1 ) that is addressed in this paper is that of choosing the location of the nodes to be pinned under the specification of a given size of the pinning set . in other words ,given a desired number of desired pinning nodes , one needs to determine the that maximizes the minimum eigenvalue of the closed - loop system . the second problem ( pinning problem 2 ) addressed in this paper is to find the minimum number of pinning nodes ( and their locations ) while guaranteeing a certain specified convergence rate .it is well known that the optimal solutions for the aforementioned problems have exponential complexity .thus , finding the optimal solution is not practical in microgrids with large number of dgs which renders suboptimal solutions with polynomial time complexity to be of immense interest . 
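for orientation , the sketch below records one common form of the droop laws and of the cooperative secondary voltage - control error used in this class of schemes ; the notation ( droop gains m_i and n_i , adjacency weights a_ij , pinning gains g_i , coupling gain c ) is chosen here for illustration and need not coincide with the symbols of the original equations .

```latex
% representative droop and cooperative secondary-control relations (notation illustrative)
\omega_i = \omega_{n,i} - m_i P_i\,, \qquad v_{o,i} = V_{n,i} - n_i Q_i\,,
\qquad\text{(primary droop control)}
\\[4pt]
e_{v,i} = \sum_{j} a_{ij}\bigl(v_{o,j}-v_{o,i}\bigr) + g_i\bigl(v_{\rm ref}-v_{o,i}\bigr)\,,
\qquad
\dot{e}_{v} = -\,c\,(L+G)\,e_{v}\,,
```

here L is the laplacian of the communication graph and G = diag(g_i) collects the pinning gains ; in this standard form the convergence rate of the voltage ( and , analogously , the frequency ) regulation error is set by the smallest eigenvalue of L + G , which is why the choice of pinned dgs matters .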
to develop any effective algorithm for either of the pinning problems stated above , we need to understand how the structural properties ( such as distance , number of connections , _ etc . _ ) of the pinning set in the network affects the algebraic connectivity , , of the network with respect to the reference .one possible way to understand these effects is by utilizing lower and upper bounds on .a set of tight upper and lower bounds on the algebraic connectivity of the network with respect to the reference have recently been developed in and are provided in appendix app : bounds for the case of pinned nodes with . in what follows, we discuss the implications of these bounds . for illustration ,consider first the single pinning case where the reference value is only available in one dg ( _ i.e. , _ ) . based on the bounds in appendix [ app : bounds ] ,it is clear that the upper bound on is a strictly increasing function of the degree of the pinning node and its pinning gain .additionally , if the pinning gain , , is considered to be very large , the upper bound on algebraic connectivity of the network with respect to the reference can be shown to be bounded as where denotes the out degree of the pinning node . since ,the algebraic connectivity of the network with respect to the reference can not exceed for the single pinning case .thus , it is clear to see that the convergence rate of the errors in ( [ eq : v_error ] ) and ( [ eq : f_error ] ) can not exceed and for voltage and frequency , respectively .since we established that the algebraic connectivity of the network with respect to the reference is upper bounded by , it can be conjectured that pinning a dg with higher out - degree is more effective than pinning a dg with lower out - degree . furthermore , from the given lower bound on in appendix [ app : bounds ] , it can be concluded that another topological characteristic to choose the pinning dg is its _ centrality _ value which is a measure of its distance from the farthest dg in the network .this means that the candidate dg for pinning should have a low distance from the rest of the network .the general case of multiple pinning ( _ i.e. , _ ) can be understood by considering the set of pinned dgs as a supernode in the network and by calculating all the topological properties according to this modified definition . before we proceed to state the algorithms arising from these bounds ,let us state the following nomenclature where is the set of all dgs , denotes the pinning set , where denotes minus operation for sets , while d is the number of links directed from the pinning dgs to the rest of the network .this algorithm will choose pinning dgs to maximize the minimum eigenvalue of the closed - loop system .since the upper bound on the minimum eigenvalue of the pinned network is an increasing function of the degree of the pinning dg , the dgs are sorted by decreasing degree ; furthermore , from ( [ eq : lower_multiple ] ) , we know that the minimum path length between the pinning set and the rest of the network should be minimized . hence , the added pinning dg to the pinning set should maximize the combined measure , which is the minimum path length between the candidate pinning dg and the rest of the network subtracted from the out - degree of pinning .this procedure should continue until there dgs in the pinning set .the pseudo - code of this algorithm can be given as 1 . 
and , 2 .while do * * , ..45 .35 [ fig:4buspinning ] in the second problem , let be the desired algebraic connectivity of the network to the reference . since the algebraic connectivity of the network to the reference can not exceed the maximum out - degree of the pinning set , the minimum number of pinning dgs to achieve a target convergence rate is given as .consequently , the smallest number of pinning dgs , , to achieve the desired convergence rate should be chosen such that summation of the highest degree dgs should exceed .algorithm 1 is started with this and arrives at a pinning set .if the condition is satisfied , the algorithm stops , otherwise , one more pinning dg is added using algorithm 1 until the desired convergence rate is achieved .the pseudo - code of this algorithm can be expressed as 1 .sort the degree of the dgs such that 2 .set to be smallest integer such that 3 . and , 4 . while do [ step1 ] 1 . 2 . , .if , then stop ; else : set and go to [ step1 ] ..55 .4 . , title="fig:",width=321,height=143 ] . , title="fig:",width=321,height=143 ] . , width=321,height=143 ] [ fig:5ringpower ] for our numerical simulations, we have used the simpower system toolbox of simulinkfor 4 bus and 5 bus power systems , shown in figs . [ fig:4buspinning ] and [ fig:5busring ] , with different topologies and communication networks to show the adaptability and effectiveness of the proposed pinning control method .microgrid operates on a 3-phase , 380v(l - l ) and frequency of 50 hz ( ) .unequal parameters of the dgs are given in the top section of table [ tab : dgs ] while the remaining parameters are adopted from .in all case studies , dg 3 and dg 4 are assumed to be type ii dgs and the rest of the dgs are considered to be type i. loads are as given in the bottom section of table [ tab : dgs ] .as mentioned earlier , an ideal dc source is assumed from dg side , therefore , the weather effect is not considered in this study .the cut - off frequency of the low - pass filters , , is set to .the control gains , and are all set to .it should be noted that microgrid islanding operation is detected based on the status of the main breaker and disconnect switches at utility / grid point of connection .we note here that the undershoot / overshoot of voltage amplitude and frequency of the dgs in microgrid during the transient from grid connected to islanding mode should not exceed 10 - 20 cycles to avoid the operation of 27 , 59 , and 81 protective relays .generally , the protective power relays for voltage and frequency are typically set to and for 10 - 20 cycles .the microgrid s main breaker opens at and goes to islanding mode at which time the secondary voltage and frequency control are activated . as shown in figs .[ fig:4buspinning ] and [ fig:5busring ] , our study cases will illustrate both two - way ( undirected ) and one - way ( directed ) communication links in the microgrid .in one - way communication links , we restrict the transition function so that the new state of the sender does not depend on the current state of the receiver ( a neighboring dg ) .security of the location of a dg and the criticality of the load that it feeds are factors to be considered when deciding on directed vs undirected communication links . 
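a minimal python sketch of algorithms 1 and 2 described above is given below ; the toy communication graph , the uniform pinning gain , and the use of the largest hop distance from the pinning set as the distance measure are assumptions of this sketch rather than choices taken from the simulations reported here .

```python
# sketch of the degree-and-distance based pinning selection (algorithms 1 and 2).
# assumptions: undirected 0/1 adjacency matrix, uniform pinning gain, and the
# largest hop distance from the pinning set used as the distance measure.
import numpy as np
from collections import deque

def bfs_distances(adj, sources):
    """hop distance from the node set `sources` to every node of the graph."""
    n = len(adj)
    dist = [float("inf")] * n
    q = deque()
    for s in sources:
        dist[s] = 0
        q.append(s)
    while q:
        u = q.popleft()
        for v in range(n):
            if adj[u][v] and dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def out_degree(adj, i):
    return sum(adj[i])

def pinned_connectivity(adj, pinned, gain=1.0):
    """smallest eigenvalue of L + F, the algebraic connectivity to the reference."""
    A = np.array(adj, dtype=float)
    Lap = np.diag(A.sum(axis=1)) - A
    F = np.zeros_like(Lap)
    for i in pinned:
        F[i, i] = gain
    return float(np.min(np.linalg.eigvals(Lap + F).real))

def algorithm1(adj, M):
    """greedily add the dg maximizing (out-degree) - (farthest hop distance)."""
    n = len(adj)
    pinned = []
    for _ in range(M):
        best, best_score = None, -float("inf")
        for i in range(n):
            if i in pinned:
                continue
            dist = bfs_distances(adj, pinned + [i])
            ecc = max(dist)   # farthest dg from the candidate pinning set
            score = -float("inf") if ecc == float("inf") else out_degree(adj, i) - ecc
            if score > best_score:
                best, best_score = i, score
        pinned.append(best)
    return pinned

def algorithm2(adj, lam_target, gain=1.0):
    """grow the pinning set until the pinned connectivity reaches lam_target."""
    degrees = sorted((out_degree(adj, i) for i in range(len(adj))), reverse=True)
    M = 1
    while sum(degrees[:M]) <= lam_target and M < len(adj):
        M += 1   # smallest M whose top degrees exceed the target
    pinned = algorithm1(adj, M)
    while pinned_connectivity(adj, pinned, gain) < lam_target and len(pinned) < len(adj):
        pinned = algorithm1(adj, len(pinned) + 1)
    return pinned

# toy 5-dg ring communication graph (illustrative only)
ring = [[0, 1, 0, 0, 1],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [1, 0, 0, 1, 0]]
pins = algorithm2(ring, lam_target=0.3)
print("selected pinning set:", pins)
```

since the greedy selection is deterministic , re - running algorithm 1 with a larger set size in this sketch simply appends one more dg to the previously chosen set , which matches the incremental step described in the pseudo - code .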
:voltage ( top ) and frequency ( bottom ) evolution of 5-bus system .algorithm 2 results in simultaneous pinning of dg1 and dg3.,title="fig:",width=321,height=143 ] : voltage ( top ) and frequency ( bottom ) evolution of 5-bus system .algorithm 2 results in simultaneous pinning of dg1 and dg3.,title="fig:",width=321,height=143 ] l .dg and load parameters of the systems .[ cols= " < , < , < " , ] 1 1intelligent single and multiple pinning based distributed cooperative control algorithms have been proposed to efficiently synchronize dgs in a microgrid to their nominal voltage and frequency values after disconnecting from the main grid .it has been shown that selection of pinning nodes depends directly on the power system and communication network topologies .case studies using different types of microgrid configurations and scenarios demonstrate that the proposed methodology helps the dgs in the microgrid achieve convergence with desired rate and improved transient voltage and frequency behavior after going to islanding mode .99 a. mehrizi - sani and r. iravani , potential - function based control of a microgrid in islanded and grid - connected models , _ ieee trans .power syst ._ , vol . 25 , pp1883 - 1891 , nov . 2010 .y. a. r. i. mohamed and a. a. radwan , hierarchical control system for robust microgrid operation and seamless mode transfer in active distribution systems , _ ieee trans . smart grid _ ,352 - 362 , jun . 2011 .a. bidram and a. davoudi , hierarchical structure of microgrids control system , _ ieee trans . smart grid _ , 3(4 ) , pp .1963 - 1976 , 2012 .k. d. brabandere , b. bolsens , j. v. den keybus , a. woyte , j. driesen , and r. belmans , a voltage and frequency droop control method for parallel inverters , _ ieee transactions on power electronics _ , vol .1107 - 1115 , 2007 .m. chandorkar and d. divan , decentralized operation of distributed ups systems , in _ proc .ieee pedes _ ,565 - 571 , 1996 .m. prodanovic and t. green , high - quality power generation through distributed control of a power park microgrid , _ ieee transactions on power delivery _ , vol .1471 - 1482 , 2006 .m. n. marwali , j .- w .jung , and a. keyhani , control of distributed generation systems part ii : load sharing control , _ ieee transactions on power electronics _1551 - 1561 , 2004 . d. e. olivares _et al . _ ,trends in microgrid control , in _ ieee trans . smart grid _ , vol . 5 , no .1905 - 1919 , july 2014 .ren , w. , beard , r.w ., _ distributed consensus in multi - vehicle cooperative control : theory and applications _ , springer , london , 2009 .r. olfati - saber , j. fax , and r. m. murray , consensus and cooperation in networked multi - agent systems , _ proceedings of the ieee _ ,215 - 233 , jan . 2007 .xin , h. , qu , z. , seuss , j. , maknouninejad , a self - organizing strategy for power flow control of photovoltaic generators in a distribution network , _ ieee trans .power syst ._ , 26 , pp. 1462 - 1473 , 2011 .j. w. simpson - porco , f. drfler , and f. bullo , synchronization and power sharing for droop - controlled inverters in islanded microgrids , _ automatica _ , vol .9 , pp . 2603 - 2611 , sep .l. y. lu and c. c. chu , autonomous power management and load sharing in isolated micro - grids by consensus - based droop control of power converters , _ future energy electronics conference ( ifeec ) _ , pp.365 - 370 , nov .j. y. kim , j. h. jeon , s. k. kim , c. cho , j. h. park , h. m. kim , and k. y. 
nam , cooperative control strategy of energy storage system and microsources for stabilizing the microgrid during islanded operation , _ ieee transactions on power electronics _ , vol .3037 - 3048 , dec . 2010 .h. su , z. rong , m. z. q. chen , x. wang , g. chen , and h. wang , decentralized adaptive pinning control for cluster synchronization of complex dynamical networks , _ ieee trans .394 - 399 , feb . 2013 .x. li , x. f. wang , and g. r. chen , pinning a complex dynamical network to its equilibrium , _ ieee trans .circuits syst .i , reg . papers _ , vol . 5 , no . 10 ,2074 - 2087 , oct .w. liu , w. giu , w. sheng , x. meng , sh .xue , and m. chen , pinning - based distributed cooperative control for autonomous microgrids under uncertain communication topologies , _ieee trans .power syst ._ , vol . 31 , no .1320 - 1329 , mar .a. bidram , a. davoudi , f. l. lewis , and j. m. guerrero , distributed cooperative secondary control of microgrids using feedback linearization , _ ieee trans .power syst .3 , pp.3462 - 3470 .s. manaffam and a. seyedi , pinning control for complex networks of linearly coupled oscillators , in _ american control conference ( acc ) _ , 2013 , vol .6364 - 6369 , june 2013 .p. delellis , m. di bernardo , and f. garofalo , adaptive pinning control of networks of circuits and systems in lure form , _ ieee trans . circuits syst .i , reg . papers _ , vol .60 , pp . 1 - 10 ,t. chen , x. liu , and w. lu , pinning complex networks by a single controller , _ ieee trans .circuits syst .i _ , vol .1317 - 1326 , jun .s. manaffam , m. talebi , a. jain , and a behal , synchronization in networks of identical systems via pinning : application to distributed secondary control of microgrids , _ ieee transactions on control systems technology _ , under review , available at https://arxiv.org/abs/1603.08844 .a. bidram , a. davoudi , f. l. lewis , and z. qu , secondary control of microgrids based on distributed cooperative control of multi - agent systems , _ iet gener .transm . distrib ._ , vol . 7 , no . 8 , pp . 822 - 831 , aug. 2013 .n. pogaku , m. prodanovic , and t. c. green , modeling , analysis and testing of autonomous operation of an inverter - based microgrid , _ ieee trans .power electron .613 - 625 , mar .w. yu , g. chen , and j. lu , on pinning synchronization of complex dynamical networks , _ automatica _ , vol .429 - 435 , 2009 . h. a. palizban and h. farhangi , low voltage distribution substation integration in smart microgrid , presented at the _ _ 2011 ieee 8__ conference on power electronics and ecce asia _1801 - 1808 , may - june 2011 .v. kounev , d. tipper , a. a. yavuz , b. m. grainger , and g. reed , a secure communication architecture for distributed microgrid control , _ ieee transactions on smart grid , _ vol .5 , september 2015 .let us assume that the nodes are pinned , this is called the pinning set . let the farthest node to pinning set , , be .then , define the set as are entries of the adjacency matrix of the communication network , , and denotes minus operation for sets . for each set in ( [ eq : set ] ) , we define the following . please note that due to the definition in ( [ eq : set ] ) , . from theorem 3 in (* theorem 2 ) , given an undirected network ( _ i.e. 
, _ information can flow both ways ) with pinning set , , and , the algebraic connectivity of the network can be bounded as follows : the upper bound is given by , and the lower bound , , is the smallest positive root of the polynomials , . figs . [ fig : topology_sorting_1dg ] and [ fig : topology_sorting_3dg ] , respectively , illustrate the sets and variables defined above for sample single ( ) and multiple ( ) pinning cases .
motivated by the fact that the location(s ) and structural properties of the pinning node(s ) affect the algebraic connectivity of a network with respect to the reference value and thereby , its dynamic performance , this paper studies the application of intelligent single and multiple pinning of distributed cooperative secondary control of distributed generators ( dgs ) in islanded microgrid operation . it is shown that the intelligent selection of a pinning set based on the degree of connectivity and distance of leader dg(s ) from the rest of the network improves the transient performance for microgrid voltage and frequency regulation . the efficacy of the distributed control strategy based on the proposed algorithms is illustrated via numerical results simulating typical scenarios for a variety of microgrid configurations .
entanglement is a key resource for quantum information processing .as an open quantum system is susceptible to external environment , entanglement would decay due to losses caused by unwanted interaction between the quantum system and its external electromagnetic field , which may lead to failure of quantum communication between two distant parties ( alice and bob ) and limit transmission distance .therefore , reliable generation and distribution of entanglement between two distant communicating parties ( alice and bob ) has become increasingly important .continuous - variable entanglement has an advantage over discrete - variable one due to its high efficiency in generation and measurement of quantum states .as the most widely used continuous variable entangled resource , gaussian epr - like entangled pairs can be generated between amplitude and phase quadratures of two outgoing light beams of a nondegenerate optical parametric amplifier ( nopa ) .the main component of a nopa is a two - end cavity which consists of a nonlinear crystal and mirrors . with a strong undepleted coherent pump beam employed as a source of energy ,interactions between the pump beam and two modes inside the cavity generate a pair of outgoing beams in gaussian epr - like entangled states . in fig[ fig : dual - nopa - cfb ] , a nopa is simply denoted by a block with four inputs and two outputs .more details of the nopa are given in section [ sec : system - model ] . our previous work presents a dual nopa coherent feedback system where two nopas are separately located at two distant endpoints ( alice and bob ) and connected in a feedback loop without employing any measurement devices , shown in fig [ fig : dual - nopa - cfb ] . in the network ,two entangled outgoing fields and are generated .our previous work shows that under the same pump power , decay rate and transmission losses , the dual - nopa coherent feedback network generates stronger epr entanglement than a single nopa placed in the middle of the two ends ( at charlie s ) .the paper also examines effects of losses and time delays on the dual - nopa system .not surprisingly , epr entanglement worsens as transmission and amplification losses increase ; transmission time delays reduce the range of frequency over which epr entanglement exists . in this paper, we examine the effect of phase shifts along the transmission channels on epr entanglement generated by the dual - nopa coherent feedback system .what we are interested in is whether phase shifts degrade epr entanglement ; if they do , then whether we can recover it or minimize the epr entanglement reduction by placing two adjustable phase shifters separately at each output .the paper is organised as follows .section [ sec : prelim ] briefly introduces linear quantum systems and an epr entanglement criterion between two continuous - mode gaussian fields .a description of our dual - nopa coherent feedback system under influence of losses and phase shifts is given in section [ sec : system - model ] .section [ sec : analysis ] investigates the stability condition , as well as epr entanglement under effects of phase shifts in a lossless system and a more general case where transmission losses and amplification losses are considered .finally section [ sec : conclusion ] gives the conclusion of this paper .this paper employs the following notations . 
denotes , the transpose of a matrix of numbers or operators is denoted by and denotes ( i ) the complex conjugate of a number , ( ii ) the conjugate transpose of a matrix , as well as ( iii ) the adjoint of an operator . denotes an identity matrix .trace operator is denoted by } ] .the dynamics of the system can be described by the time - varying interaction hamitonian between the system and environment in which is the -th system coupling operator and is the field operator describing the -th environment field .when the environment is under the condition of the markov limit , the field operator under the vacuum state satisfies =\delta(t - s) ] , ] and ] and denote the corresponding two - mode squeezing spectra as , we have the following definition of epr entanglement . fields and are epr entangled at the frequency rad / s if ] , =0 ] , =0 ] .the dynamics of the system in fig .[ fig : dual - nopa - cfb - ps ] is given by with outputs define the quadratures ^t,\nonumber \\ \xi=&[\xi^q_{in , a,1},\xi^p_{in , a,1},\xi^q_{in , b,2},\xi^p_{in , b,2},\xi^q_{loss , a,1},\xi^p_{loss , a,1},\xi^q_{loss , b,1},\xi^p_{loss , b,1},\nonumber\\ & \xi^q_{loss , a,2},\xi^p_{loss , a,2},\xi^q_{loss , b,2},\xi^p_{loss , b,2},\xi^q_{bs,1},\xi^p_{bs,1},\xi^q_{bs,2},\xi^p_{bs,2}]^t,\nonumber\\ \xi_{out}=&[\xi^q_{out , b,1},\xi^p_{out , b,1},\xi^q_{out , a,2},\xi^p_{out , a,2}]^t.\end{aligned}\ ] ] according to ( [ eq : dynamics ] ) , ( [ eq : output]) ( [ eq : nopas - internal ] ) and ( [ eq : nopas - out ] ) , we have where , and are real matrices . as mentioned in section [ sec : entanglement ] , two - mode squeezing spectra can be approximated to at in low frequency domain . in the remainder of this paper , we evaluate degree of epr entanglement between outgoing fields and by . based on ( [ eq : v_+ ] ) and ( [ eq : v_- ] ) , we get ,\label{eq : entanglement}\end{aligned}\ ] ] where , ] .here we analyse effects of phase shifts and on stability and epr entanglement of the dual - nopa coherent feedback system .we investigate epr entanglement when the system is lossless , that is , amplification and transmission losses are neglected , as well as epr entanglement of the system with losses .moreover , we examine effects of adjustable phase shifters with phase shifts and to see whether they can recover the epr entanglement impacted by and . parameters of the system are defined as follows .based on and , we define hz as a reference value of the transmissivity mirrors , hz and hz , where and ( ) are adjustable real parameters . following , we assume that when and the value of is proportional to the absolute value of , so we set .transmission rate ] .note that we employ mathematica to perform the complex symbolic manipulations that are required in this paper . to make the system workable, stability must be guaranteed . in our case ,the system is stable which means that as time goes to infinity , the mean total number of photons within cavities of the two nopas must not increase continuously .mathematically , stability condition holds when matrix in equation ( [ eq : dual - nopa - dynamics-1 ] ) is hurwitz , that is , real parts of all eigenvalues of are negative . 
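as a minimal numerical illustration of this requirement , the sketch below checks the hurwitz property of a system matrix ; the matrix shown is a generic placeholder and is not the actual matrix of ( [ eq : dual - nopa - dynamics-1 ] ) , whose entries depend on the nopa gains , losses and phase shifts .

```python
# minimal hurwitz check: the linear system is stable iff every eigenvalue of the
# system matrix has a strictly negative real part.  the matrix below is a
# placeholder example, not the dual-nopa system matrix.
import numpy as np

def is_hurwitz(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))

A_example = np.array([[-1.0, 0.5],
                      [0.2, -0.8]])
print(is_hurwitz(A_example))   # True for this placeholder matrix
```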
based on this, we have the following theorem which states the stability condition of our system with parameters , , , , and .[ th : stability ] the dual - nopa coherent feedback system under the influence of losses and phase shifts is stable if and only if with , and .based on ( [ eq : nopas - internal ] ) and ( [ eq : dual - nopa - dynamics-1 ] ) , we have + eigenvalues of the matrix are where real parts of the eigenvalues are hence , stability holds when . by solving the inequality with , , ,the theorem is obtained .the theorem directly shows that , stability of the system is only impacted by the difference between values of and , not by the values of and individually .however , as and has positive value , as long as the system without phase shifts is stable , the system maintains stability in the presence of phase shifts due to the transmission distance . in this part , we investigate the effect of the phase shifts on the epr entanglement between and when the system has no transmission losses ( ) and no amplification losses ( ) .based on section [ sec : system - model ] , we obtain the two - mode squeezing spectra between the two outgoing fields of the dual - nopa coherent feedback system as a function of , and at when and , where what is of our interest is whether and decrease the degree of epr entanglement ; if they do , whether can recover the original epr entanglement ( when ) or at least improve the epr entanglement , as well as how much epr entanglement can be improved . to this end , we define the following functions at , * , the two - mode squeezing spectra between the two outgoing fields of the dual - nopa coherent feedback system without phase shifts . that is , is as in ( [ eq : v_lossless ] ) at ; * , the two - mode squeezing spectra between the two outgoing fields of the dual - nopa coherent feedback system under the effect of the phase shifts and , but without and .that is , is as in ( [ eq : v_lossless ] ) at ; * , the two - mode squeezing spectra between the two outgoing fields of the dual - nopa coherent feedback system under the effect of phase shifts and with fixed values of and .that is , is as in ( [ eq : v_lossless ] ) for fixed and ; * . if and degrade the epr entanglement , then ; * .if the epr entanglement degraded by a fixed value of and is fully recovered by and , then ; * . if the epr entanglement impacted by and is improved by and , then .let us begin with a simple case , where phase shifts .according to ( [ eq : v_lossless ] ) , we have based on the stability condition ( [ eq : stability - cond ] ) , here system is stable when , that is , , hence .moreover .therefore ( equality holds when ) , which implies epr entanglement worsens in the presence of phase shifts and .now we examine the effect of .we have we can see that as long as , the epr entanglement is fully recovered . herewe consider the lossless system in a general situation , where phase shifts and can be different .let , , , , ] .then ( [ eq : v_lossless ] ) can be written as where ( ) is as in ( [ eq : b1 - 5 ] ) . analysing ( [ eq : v_lossless_mn ] ) gives the lemmas below .[ le : f_lossless ] the presence of the phase shifts and degrades the two - mode squeezing spectra ( ) , thus degree of epr entanglement becomes worse or epr entanglement may vanish . based on the functions defined at beginning of this subsection , we have is a periodic continuous twice differentiable function with variables and , and it is convenient to take the range of and to be the entire real line. 
hence global minima of must be stationary points . therefore for } .\end{array } \right.\label{eq : phi0_lossless}\end{aligned}\ ] ] in particular , when , fully recovers the epr entanglement .however , if , has no effect on the system and epr entanglement vanishes .we have the first derivative vanishes at . as and , we get m \in ( -\frac{\pi}{2 } , \frac{\pi}{2}) m \in ( -\pi , -\frac{\pi}{2 } ) \cup ( \frac{\pi}{2 } , \pi ] } , \end{array } \right.\end{aligned}\ ] ] at which } \\ 2 & \textrm{if }. \end{array } \right .\label{eq : vim_lossless}\end{aligned}\ ] ] denote the first derivative of as . by applying mathematica to solve on ( [ eq : b1 - 5 ] ) , we obtain that stationary points of are and .values of at the stationary points and the non - differentiable points are hence , , which implies that at , fully recovers the original epr entanglement ; at , has no effect on the epr entanglement and the epr entanglement vanishes ; in remaining cases of , improves the epr entanglement impacted by and but can not fully recover the epr entanglement .[ fig : vps_f_lossless ] and fig .[ fig : vim_lossless ] illustrate an example of the lossless dual - nopa coherent feedback system undergoing phase shifts with and , according to values reported in .note that in all the figures of two - mode squeezing spectra in the rest of the paper , values of squeezing spectra are given in db unit , that is , .hence , epr entanglement exists when db based on ( [ eq : entanglement - criterion ] ) and the epr entanglement is stronger as is more negative . in fig .[ fig : vps_f_lossless ] , the left plot shows that at some values of and , db , which implies that phase shifts in the paths of the system can lead to death of epr entanglement .the right plot shows the difference between values of and . when , we see that , which indicates phase shifts in the paths between two nopas degrade the epr entanglement .( left ) and ( right ) of the lossless dual - nopa coherent feedback system with , , , and . ]( top row ) , ( middle row ) and ( bottom row ) of the lossless dual - nopa coherent feedback system with , , and .ranges of values of are ] ( right column ) . ] fig .[ fig : vim_lossless ] shows the effects of on the two - mode squeezing spectra .note that as is an even function of , plots of over intervals ] of are symmetric , thus we do not show the plots of squeezing spectra versus varying values of ranging from to .the plots of against the parameter in the top row shows that epr entanglement exists ( db ) over the range of , except for ( db ) .the middle row illustrates the original epr entanglement is fully recovered by at .the bottom row displays the difference between values of and against the parameters and .we see that the difference value is not positive which implies that , improves the two - mode squeezing spectra in most scenarios , but does not impact the system when ) ] .note that based on lemma [ le : phi_lossless ] , in the first case where ) ] , though does not have an effect on the epr entanglement impacted by and , is the best choice based on the proof of lemma [ le : phi_lossless ] .now let us investigate the performance of the dual - nopa coherent feedback system under the presence of phase shifts , transmission losses and amplification losses .let , , ] , the two - mode squeezing spectra between the two outputs in the dual - nopa coherent feedback system under the effect of phase shifts and losses is where similar to section [ sec : general_case_lossless ] , we have the following lemmas . 
[ le : f_loss ] the presence of the phase shifts and degrades the two - mode squeezing spectra ( ) , thus degree of epr entanglement becomes worse or epr entanglement may vanish . based on the functions defined at the beginning of section [ sec : entanglement - loss ] , we have similar to the proof in section [ sec : general_case_lossless ] , global minima of are stationary points . as given by mathematica , the first order partial derivatives of with respect to the variable and vanish at , at which values of are based on stability condition ( [ eq : stability - cond ] ) , replacing , and in ( [ eq : c1-c5 ] ) with definitions , , , , and noting , mathematica gives that where we see that , and .therefore , , that is , .equality holds when , which is the case with no phase shifts .we obtain lemma [ le : f_loss ] . [le : phi_loss ] minimizes the two - mode squeezing spectra at impacted by and if its value is set as } \\ 2\frac{c_1-c_3}{c_4+c_5 } & \textrm{if }. \end{array } \right .\label{eq : vim_loss}\end{aligned}\ ] ] has four real roots denoted by and , with .epr entanglement under the influence of phase shifts and losses exists on intervals , and } .\end{array } \right.\end{aligned}\ ] ] employing ( [ eq : c1-c5 ] ) and solving via mathematica , we obtain that stationary points of are and .values of at stationary points and non - differentiable points are noting and in ( [ eq : d1-d3 ] ) .mathematica shows that hence , .consequently , global minima of are at at which the original epr entanglement is fully recovered . recall and from ( [ eq : d1-d3 ] ) .mathematica then gives ] .proof is completed .( left ) and ( right ) of the dual - nopa coherent feedback system with , , , and . ]( top row ) , ( middle row ) and ( bottom row ) of the dual - nopa coherent feedback system with , , and .ranges of values of are ] ( right column ) . 
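The theta-intervals on which entanglement survives, bounded by the real roots mentioned in the lemma, can likewise be bracketed numerically once the optimised spectrum is known as a function of theta. A minimal sketch, with a placeholder function and an assumed threshold standing in for the paper's closed-form expression:

import numpy as np
from scipy.optimize import brentq

def entanglement_intervals(v_opt, threshold=2.0, n_grid=2001):
    # scan v_opt(theta), then bracket every sign change of v_opt - threshold
    thetas = np.linspace(-np.pi, np.pi, n_grid)
    g = np.array([v_opt(t) for t in thetas]) - threshold
    roots = [brentq(lambda t: v_opt(t) - threshold, thetas[i], thetas[i + 1])
             for i in range(n_grid - 1) if g[i] * g[i + 1] < 0]
    return roots   # interval end points; entanglement where v_opt < threshold

v_opt = lambda t: 1.4 + 0.8 * np.cos(2 * t)   # placeholder with four crossings
print(entanglement_intervals(v_opt))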
] fig .[ fig : vps_f_loss ] and fig .[ fig : vim_loss ] illustrate an example of the dual - nopa coherent feedback system undergoing both phase shifts and losses with , and .similar to fig .[ fig : vps_f_lossless ] , the left plot in fig .[ fig : vps_f_loss ] shows that epr entanglement vanishes at some values of and .the right plot shows that the non - zero phase shifts in the paths decrease the degree of epr entanglement .[ fig : vim_loss ] illustrates the effect of .based on symmetric property of function , we can see from fig .[ fig : vim_loss ] that the top row shows that under the effect of , for some values of near there is no epr entanglement between the two outgoing fields ( db ) ; the middle row shows the original epr entanglement is fully recovered at and the bottom row shows that improves the two - mode squeezing spectra except for the cases where ( ) ] .note that based on lemma [ le : phi_loss ] , any value of does not impact the epr entanglement of the system when ; while is the best option in the last two scenarios where and , n=\pi)$ ] .table [ tb : transmission ] and table [ tb : amplification ] illustrate the effect of transmission and amplification losses on the existence of epr entanglement with an optimal choice of .we see that as either transmission losses or amplification losses increase , the range of values of over which the epr entanglement does not exist becomes larger , and the performance of epr entanglement worsens in the presence of losses , as can be expected ..influence of transmission losses on the range of nonexistence of epr entanglement with , and [ cols="^,^,^",options="header " , ]this paper has investigated the effects of phase shifts on stability and epr entanglement of a dual - nopa coherent feedback network .stability condition determined by parameters of the system with losses and phase shifts is derived .the system remains stable in the presence of phase shifts , whenever the system is stable in the absence of phase shifts . in the lossless system , in the absence of transmission and amplification losses ,the presence of phase shifts and in the paths between two nopas degrades the two - mode squeezing spectra between the two outputs in the system , which implies epr entanglement worsens or even vanishes .the two - mode squeezing spectra under the influence of and is minimized by setting . however , existence of epr entanglement and the degree of epr entanglement recovered by depend on the parameter .epr entanglement is fully recovered by if .epr entanglement vanishes when .when transmission and amplification losses are not neglected , the two - mode squeezing spectra are degraded by phase shifts in the paths and are maximally recovered by setting .however , existence of epr entanglement is impacted by both phase shifts and losses in the paths .the range of values of over which the epr entanglement can be improved by decreases as losses grow . z. shi and h. i. nurdin ,coherent feedback enabled distributed generation of entanglement between propagating gaussian fields , to appear in quantum information processing ( 2014 ) .[ online ] available : http://dx.doi.org/10.1007/s11128-014-0845-4 .j. laurat , g. keller , j.a .oliveira - huguenin , c. fabre , t. coudreau , a. serafini , g. adesso and f. illuminati , entanglement of two - mode gaussian states : characterization and experimental production and manipulation , j. opt .b : quantum semiclass7 , s577-s587 ( 2005 )
recent work has shown that deploying two nondegenerate optical parametric amplifiers (nopas) separately at two distant parties in a coherent feedback loop generates stronger einstein-podolsky-rosen (epr) entanglement between two propagating continuous-mode output fields than a single nopa under the same pump power, decay rate and transmission losses. the purpose of this paper is to investigate the stability and epr entanglement of a dual-nopa coherent feedback system under the effect of phase shifts in the transmission channel between the two distant parties. it is shown that, in the presence of phase shifts, epr entanglement worsens or can vanish, but in certain scenarios it can be improved to some extent by adding a phase shifter at each output with a suitably chosen phase. in the ideal case, in the absence of transmission and amplification losses, the existence of epr entanglement and whether the original epr entanglement can be recovered by the additional phase shifters are determined by the values of the phase shifts in the path.
horizontal differences in density between two fluids lead to the propagation of so - called gravity currents .these currents are of interest in a number of industrial as well as natural applications and so obtaining an understanding of the way in which they propagate is a subject that has motivated a considerable amount of current research . in previous publications ,our understanding of axisymmetric viscous gravity currents on an impermeable boundary has been generalised to take account of the effects of a slope as well as the propagation of a current in a porous medium . here, we consider the propagation of a gravity current from a point source in a porous medium at an impermeable sloping boundary .of particular interest is the evolution of the current away from the axisymmetric similarity solution found by .we begin by deriving the evolution equations for the shape of a current whose volume varies in time like .a scaling analysis of these governing equations reveals the extent of the current as a function of time up to a multiplicative constant .the full form of the similarity solutions that give rise to these scalings can only be determined by numerical means , however , and to do so we modify the numerical code of .for some particular values of , it is possible to make analytical progress ; these cases are considered separately and provide a useful check of the numerical scheme .we then compare the results of the numerical calculations to a series of experiments and find good quantitative agreement between the two .finally , in the last section , we discuss the implications of our results in geological settings , with particular emphasis on the implications of our work for the sequestration of carbon dioxide . , propagating in a porous medium saturated with liquid of density above an inclined plane .( a ) plan view of the current and ( b ) horizontal section through the current ., height=264 ]we consider a gravity current consisting of fluid material of density in a deep porous medium saturated with fluid of density , which is bounded by an impermeable barrier at an angle to the horizontal ( see figure [ setup ] for a sketch of the setup ) . that the saturated porous medium is deep in comparison with the vertical extent of the current allows us to neglect the motion of the surrounding fluid , simplifying the problem considerably .we use the natural cartesian co - ordinate system centred on the mass source and aligned with the slope of the impermeable boundary .the depth , , of the gravity current is then determined by continuity combined with darcy s law ( see * ? ? ? * for example ) and the assumption that the pressure in the current is hydrostatic , i.e. with constant . here, darcy s law takes the form ,\ ] ] where is the permeability of the porous medium and is the viscosity of the liquid . the velocity within the porous medium is therefore given by using this along with the conservation of mass ,we obtain where is the porosity of the porous medium and .equation is a nonlinear advection diffusion equation for the current thickness , with the two terms on the right hand side representing the gravity driven spreading of the current and its advection downslope , respectively .it is common to close the system by requiring that the volume of the current depend on time like for some constant .this constraint leads to solutions of self - similar form ( as we shall see again in this case ) but also covers the natural cases of a fixed volume release ( ) and a constant flux release ( ) . 
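To make the structure of the governing equation concrete, the following sketch integrates a nonlinear advection-diffusion equation of the type derived above with a simple explicit finite-difference scheme. The dimensionless coefficients of the paper are not reproduced here, so the sketch assumes the generic form dh/dt = div(h grad h) - dh/dx (gravity-driven spreading plus unit-speed downslope advection) and a fixed-volume release (alpha = 0); it is meant only as an illustration of how such a computation can be organised, not as the paper's numerical code.

import numpy as np

def step(h, dx, dt):
    hp = np.pad(h, 1, mode="edge")
    h2 = 0.5 * hp**2
    # div(h grad h) = laplacian of h^2/2; first-order upwind for -dh/dx
    lap = (h2[2:, 1:-1] + h2[:-2, 1:-1] + h2[1:-1, 2:] + h2[1:-1, :-2]
           - 4.0 * h2[1:-1, 1:-1]) / dx**2
    adv = (hp[1:-1, 1:-1] - hp[:-2, 1:-1]) / dx      # axis 0 = downslope x
    return np.maximum(h + dt * (lap - adv), 0.0)

n, dx = 201, 0.1
h = np.zeros((n, n))
h[n // 2 - 2:n // 2 + 3, n // 2 - 2:n // 2 + 3] = 1.0   # initial mound
t, dt = 0.0, 0.2 * dx**2 / h.max()                       # conservative time step
while t < 2.0:
    h = step(h, dx, dt)
    t += dt
print("volume =", h.sum() * dx * dx, " max depth =", h.max())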
to impose this volume constraint , ( [ pde ] )must be solved along with with giving the edge of the current for .note that contains an extra multiplicative factor of , which was omitted in the study of an axisymmetric current in a porous medium by .equations ( [ pde ] ) and ( [ vol_const ] ) may be non - dimensionalized by setting , , and , where and is the natural velocity scale in the problem . in non dimensional terms , therefore , the current satisfies along with the volume conservation constraint to aid our physical understanding of the spreading of the gravity current , we begin by considering the scaling behaviour of the spreading in the limits of short and long times . for , shows that at short times ( ) the typical horizontal velocity scale is so that .further , and volume conservation requires that . from thiswe therefore find the axisymmetric scalings obtained by , namely at long times ( ) , again for , the typical downslope velocity of the current is while in the across - slope direction we have .combined with volume conservation these scalings lead to so that the current spreads predominantly downslope .it is worth noting here that the long time scaling is unsurprising because may be simplified by moving into a frame moving at unit speed downslope .we also note that the scaling is identical to that found by for a viscous current on a slope . when , the importance of the two downslope terms ( the diffusive and translational terms ) reverses .in particular , at long times , so that we in fact recover the axisymmetric spreading scalings given in as being relevant for .conversely , for we recover the non - axisymmetric scalings of .a summary of the different scaling regimes expected is given in dimensional terms in table [ tabl_scale ] . that we observe axisymmetric spreading if and is surprising , but is a consequence of the fact that the downslope flux in a porous medium gravity current is only weakly dependent on the local height and so can be swamped by the spreading terms in . in the viscous case , this is not possible because the downslope flux is able to remove the incoming flux much more efficiently and penalizes the accumulation of material at a particular point more ..summary of the asymptotic scalings for the dimensions of a gravity current in a porous medium at an inclined plane .here dimensional notation is used for clarity , and and are as defined in and , respectively . [ cols="^,^,^,^,^ " , ] the experimental results plotted in figure [ gc_res ] shows that the experimental results are in good agreement with the theoretical results produced by solving .the comparison between experimentally observed current profiles and those predicted from theoretical solutions of shown in figure [ profiles ] is also favourable particularly away from the source region . two possible mechanisms may account for the slight discrepancy between experiments and theory observed :the drag exerted by the solid substrate on the current and the fact that the pore reynolds number in our experiments is typically .such a value of the pore reynolds number suggests that we may be approaching the regime where darcy s law begins to break down , which is around . 
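The scaling exponents summarised in the table can be checked against numerical or experimental data by fitting the slope of the current extent against time on logarithmic axes, as in the short sketch below (the synthetic exponent of 1/2 is illustrative only, not a value taken from the table).

import numpy as np

def scaling_exponent(t, extent, t_min, t_max):
    # least-squares fit of log(extent) vs log(t) over a chosen time window
    mask = (t >= t_min) & (t <= t_max)
    slope, intercept = np.polyfit(np.log(t[mask]), np.log(extent[mask]), 1)
    return slope, np.exp(intercept)   # exponent and prefactor

# illustrative synthetic data: extent ~ t^(1/2) with 2% noise
t = np.linspace(1.0, 100.0, 200)
extent = t**0.5 * (1.0 + 0.02 * np.random.default_rng(0).standard_normal(t.size))
print(scaling_exponent(t, extent, 10.0, 100.0))   # exponent close to 0.5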
, and the maximum horizontal extent of the current , , as functions of time .the symbols used to represent each experimental run are given in table [ expt_dets].,width=377 ]our experimental and numerical analyses have shown that shortly after the initiation of a constant flux gravity current ( ) it begins to spread axisymmetrically in the manner described by .however , at times much longer than the characteristic time given in , the current loses its axisymmetry and propagates predominantly downslope .since it propagates at constant velocity in this regime , the current propagates much further and faster in this case than would be the case if it remained axisymmetric .this is potentially problematic in a range of practical applications , such as the sequestration of carbon dioxide in which super - critical carbon dioxide is pumped into aquifers .since the density of the liquid carbon dioxide lies in the range , it remains buoyant with respect to the ambient water and so will rise up any inclined boundaries . the time - scale , , over which asymmetric spreading develops is of interest to those wishing to predict the course of the released current . while it is difficult to evaluate in a precise manner because of the uncertainties in the properties of the surrounding rock , we can perform some estimates on the basis of the available data from the sleipner field . in this norwegian field , around of liquid is currently pumped into the local sandstone each year . presumably due to geological complications ,this single input flux is observed later to separate into around ten independent currents propagating within different horizons of the permeable layer , each of which has a volume flux lying in the region .combined with typical measured values for the porosity and permeability of and as well as the viscosity , we can estimate upper and lower bounds on the value of . when , we find that .this suggests that the effects of non - axisymmetric spreading may indeed be important in the field . because of the variety of values of the slope that we might expect to encounter in any geological setting , we note also that for , . for constant pumping rate ( ), this gives : i.e. the precise value of the timescale over which the current becomes asymmetric depends sensitively on .this suggests that the different spreading regimes discussed here may be observed in the field and may also have practical implications .since injection occurs into confined layers of sediment , estimates for the vertical scale of the current , , are also important .interestingly , is independent of for ( measured in radians ) and so that , with the parameter values given above , we find .this suggests that , near the source , the depth of the sediment layer may be similar to that of the current ( and so exchange , confined flows may become significant ) .however , we expect that the scaling valid away from the source ensures that the present study will remain valid downstream .we are grateful to john lister for access to his code for a viscous current on a slope and to robert whittaker for discussions .mike bickle , andy chadwick , paul linden and john lister also provided valuable feedback on an earlier draft of this paper .2005 4d seismic imaging of a plume . in _ petroleum geology :north - west europe and global perspectives proceedings of the 6th petroleum geology conference _a. g. dor & b. a. vining ) , pp .13851399 . the geological society , london .
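As a rough guide to how such field estimates are assembled, the sketch below evaluates the buoyancy-driven Darcy velocity and one dimensionally consistent transition-time estimate from a set of illustrative parameter values. The specific Sleipner figures and the paper's actual expression for the transition time are not reproduced here, so both the numbers and the time-scale formula should be read purely as placeholders.

g = 9.81            # m s^-2
k = 2.0e-12         # m^2, permeability (illustrative value)
phi = 0.3           # porosity (illustrative value)
mu = 6.0e-5         # Pa s, viscosity of supercritical CO2 (illustrative value)
drho = 300.0        # kg m^-3, water - CO2 density contrast (illustrative value)
Q = 0.01            # m^3 s^-1, per-current injection flux (illustrative value)
slope = 0.01        # radians, boundary inclination (illustrative value)

U = k * drho * g * slope / mu       # Darcy buoyancy velocity along the slope
T = (Q / (phi * U**3)) ** 0.5       # dimensional-analysis stand-in, not the paper's formula
year = 3.15e7
print(f"U = {U:.2e} m/s, T = {T / year:.1f} yr")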
we consider the release from a point source of relatively heavy fluid into a saturated porous medium above an impermeable slope, in the case where the volume of the resulting gravity current increases with time like . we show that for , at short times the current spreads axisymmetrically, with radius , while at long times it spreads predominantly downslope. in particular, for long times the downslope position of the current scales like while the current extends a distance across the slope. for , this situation is reversed, with spreading occurring predominantly downslope at short times. the governing equations admit similarity solutions whose scaling behaviour we determine, with the full similarity form being evaluated by numerical computation of the governing partial differential equation. we find that the results of these analyses are in good quantitative agreement with a series of laboratory experiments. finally, we discuss the implications of our work for the sequestration of carbon dioxide in aquifers with a sloping, impermeable cap.
in the past decade , european research institutions , scientific collaborations and resource providers have been involved in the development of software frameworks that eventually led to the set - up of unprecedented distributed e - infrastructures such as the european grid infrastructure ( egi ) ; their collaboration made it possible to produce , store and analyze petabytes of research data through hundreds of thousands of compute processors , in a way that has been instrumental for scientific research and discovery worldwide .this is particularly true in the area of high energy physics , where distributed computing has been instrumental as data analysis technology in the framework of the lhc computing grid ( wlcg ) .new technological advancements , such as virtualization and cloud computing , pose new important challenges when it comes to exploit scientific computing resources .services to researchers need to evolve in parallel to maximize effectiveness and efficiency , in order to satisfy different scenarios : from everyday necessities to new complex requirements coming from diverse scientific communities . in order to expose to researchers the power of current technological capabilities, a key challenge continues to be accessibility .virtualization techniques have the potential to make available unprecedented amounts of resources to researchers .however , serving scientific computing is much more complex than providing access to a virtualized machine .it implies being able to support in a secure way complex simulations , data transfer & analysis scenarios , which are by definition in constant evolution .this is particularly the case of the hep areas , where we can distinguish three paradigmatic usage scenarios : * massive data analysis at the lhc and experiments which in general must deal with analysis of large amounts of data in computing farms .the paradigmatic case is the data analysis of the lhc at experiments like cms and atlas . during lhc run 1 , the tools developed by wlcg oriented to grid computing worked very smoothly .however , both the amount of data involved in run 2 and beyond , and the evolution of the computing infrastructures implies that new scenarios for accessing resources need to be devised .the experiments cms and atlas have both already analyzed the technical feasibility of using resources in cloud mode ( see ) . * high performance computing in facilities with low latency interconnects dedicated to monte carlo simulations , for example for lattice quantum chromodynamics ( lattice qcd ) .resource requirements for development phases are medium size clusters with up to a few hundred cores ; in the production phase , lattice qcd is served by large hpc farms which can require from a few thousand , up to tens of thousands of cores for ground breaking projects .the storage requirements are in the order of a few terabytes for development and petabyte for production phases .* phenomenological simulation & prediction codes , often including legacy software components and complex library dependencies . the resources requirements can reach about a thousand cores , with storage in the range of the terabyte .here one current showstopper for researchers is the possibility of having installed the right environment ( in terms of legacy libraries for example ) . 
making computing and storage resourcesreally exploitable by scientists implies adapting access and usage procedures to the user needs .the challenge that remains is developing such procedures in a manner which is reliable , secure , sustainable , and with the guarantee that results are reproducible . in order to settle the scenario where our platform is to be deployed, we must specify what we mean by the term resources . by thiswe always refer to a _ large pool of computing and storage resources _ , in which the users do not need to know ( because it is not relevant for the computation ) which machine it is actually being used . if the computing requirements would have a strong hardware dependency ( optimized for a particular processor ) , or if the application is so large scale that a significant fraction of the available resources in a given computing center needs to be used , the discussion would be completely different .this is the context of the work of the horizon 2020 project indigo - datacloud , referred to as indigo from now on , addressing the challenge of developing advanced software layers , deployable in the form of a data / computing platform , targeted at scientific communities . in this paperwe present an architectural design to satisfy the challenges mentioned above , and which we have developed in the framework of the indigo project .we highlight the interrelations among the different components involved and outline the reasoning behind the choices that were made .the remainder of this paper is structured as follows .we first summarize the proposed progress beyond the state of the art ( section [ sec : summary ] ) . in order to motivate our choiceswe next describe several generic user scenarios in section [ sec : motivation ] , with the main functionalities we want to satisfy , offering a global , high - level view of the indigo architecture highlighting the interrelations among the different layers .section [ sec : paas ] describes the architecture of the paas layer , together with the description of the user interfaces to be developed .section [ sec : infrastructure ] is devoted to the computer center layer of the architecture .in particular , it describes the middleware choices and developments needed to fully support the paas layer at the infrastructure level .section [ sec : portals ] describes how indigo interfaces with users .finally , section [ sec : data ] describes the solutions for unified data management .the paper is concluded by section [ sec : conclusions ] , drawing some conclusions and highlighting future work .in order to provide the reader with a global understanding of the practical implications of this work , we highlight here the technical progress we intend to achieve at the different layers that compose a scientific computing infrastructure . at the resource provider level ,computing centers offer resources in an infrastructure as a service ( iaas ) mode .the main challenges to be addressed at this level are : 1 . improved scheduling for allocation of resources by popular open source cloud platforms , i.e. 
openstack and opennebula .+ in particular , both better scheduling algorithms and support for spot - instances are currently much needed .the latter are in particular needed to support allocation mechanisms similar to those available on commercial clouds such as amazon web services and google cloud platform , while the former are useful to address the fact that typically all computing resources in scientific data centers are always in use .+ for lhc data analysis several attempts at using such spot - instances in the framework of commercial cloud providers have already taken place in the experiment cms and atlas .having an open source software framework supporting such operation mode at the iaas level will therefore be very important for the centers aiming to support such data analysis during lhc run 2 and beyond .2 . improved quality of service ( qos ) capabilities of storage resources .the challenge here is to develop a better support for quality of services in storage , to enable high - level storage management systems ( such as fts ) and make it aware of information about the underlying storage qualities .+ the impact of such qos when applied to storage interfaces such as dcache will be obvious for lhc data analysis and for the long - term support , preservation and access of experiment data .+ for example can we save a lot of redundant copies , when the high level storage manager knows how many copies are already available at one storage location .due to a lack of this informationthe assumption made is that only one copy is available .3 . improved capabilities for networking support .this is particularly the case when it comes to deploy tailored network configurations in the framework of opennebula and openstack .4 . improved and transparent support for docker containers .+ containers provide an easy and efficient way to encapsulate and transport applications .indeed , they represent a higher level of abstraction than the crude concept of a `` virtual machine '' .the benefits of using containers , in terms of easiness in the deployment of specialized software , including contextualization features , eg . for phenomenology applications , are clear .+ they offer obvious advantages in terms of performance when compared with virtual machines , while also opening the door to exploit specialized hardware such as gpgpus and low - latency interconnection interfaces ( infiniband ) .+ the general idea here is to make containers `` first - class citizens '' in scientific computing infrastructures . in the next layerwe find the paas layer .this is a set of services whose objective is to leverage disparate hardware resources coming from the iaas level ( grid of distributed clusters , public and private clouds , hpc systems ) to enhance the user experience . in this contexta paas should provide advanced tools for computing and for processing large amounts of data , and to exploit current storage and preservation technologies , with the appropriate mechanisms to ensure security and privacy .the following points describe the most important missing capabilities which today require further developments : 1 . improved capabilities in the geographical exploitation of cloud resources .end users do not need to know where resources are located , because the paas layer should be hiding the complexity of both scheduling and brokering .2 . 
support for data requirements in cloud resource allocations .resources can be allocated where data is stored , therefore facilitating interactive processing of data .the benefits of such an enhancement are clear for software stacks for interactive data processing tools such as root and proof .3 . support for application requirements in cloud resource allocations .for example , a given user can request to deploy an application on a cluster with infiniband interfaces , or with access to specialized hardware such as gpgpus .elasticity in the provisioning of such specialized small size clusters for development purposes would have a great impact in the everyday work of many researchers in the area of lattice qcd for example . 4 .transparent client - side import / export of distributed cloud data .deployment , monitoring and automatic scalability of existing applications , including batch systems on - demand .for example , existing applications such as web front - ends , proof clusters or even a complete batch system cluster ( with appropriate user interfaces ) can be automatically and dynamically deployed in highly - available and scalable configurations .integrated support for high - performance big data analytics and workflow engines such as taverna , ophidia or spark for dynamic and elastic clusters of computational resources . in the next layerwe find the user interface , which is responsible to convey all the above mentioned developments to the user .this means in particular that it should provide ready - to - use tools for such capabilities to be exploited , with the smoothest possible learning curve .providing such an interface between the user and the infrastructure poses two fundamental challenges : 1 . enabling infrastructure services to accept state of the art user authentication mechanisms ( e.g. openid connect , saml ) on top of the already existing x.509 technology .for example , distributed authorization policies are very much needed in scientific cloud computing environments , therefore a dedicated development effort is needed in this area .hence , the authentication and authorization infrastructure ( aai ) is a key ingredient to be fed into the architecture .2 . making available the appropriate libraries , servlets and portlets , implementing the different functionalities of the platform ( aai , data access , job processing , etc . ) that are the basis to integrate such services with known user tools , portals and mobile applications .we have designed an architecture containing the elements needed to provide scientific users with the capability of using heterogeneous infrastructures , adressing the challenges described above . in the followingwe describe the rational and motivations of the technical choices we made . as a first step we have performed a detailed user requirements analysis , whose main conclusions we show in the form of two generic scenarios : the first is computing oriented , while the second is data analysis oriented . for full details containing user communities description and detailed usage patternswe refer to our requirements document .our architecture is based on the analysis of a number of use cases originating from different research communities in the areas of high energy physics , environmental modelling , bioinformatics , astrophysics , social sciences and others . 
from this requirements analysiswe have extracted two generic usage scenarios , which can support a wide range of applications in these areas .the first generic user scenario is a computing portal service . in such scenario ,computing applications are stored by the application developers in repositories as downloadable images ( in the form of vms or containers ) .such images can be accessed by users via a portal , and require a back - end for execution ; in the most common situation this is typically a batch queue .the number of nodes available for computing should increase ( scale out ) and decrease ( scale in ) , according to the workload .the system should also be able to do cloud - bursting to external infrastructures when the workload demands it .furthermore , users should be able to access and reference data , and also to provide their local data for the runs .a solution along these lines is shown in figure [ fig:1 ] .a second generic use case is described by scientific communities that have a coordinated set of data repositories and software services ( for example proof , or r - studio ) to access , process and inspect them .processing is typically interactive , requiring access to a console deployed on the data premises . in figure [ fig:2 ]we show a schematic view of such a use case .as pointed out in the introduction , the current technology based on lightweight containers and related virtualization developments make it possible to design software layers in the form of platforms that support such usage scenarios in a relatively straightforward way .we can see already many examples in the industrial sector , in which open source paas solutions such as openshift or cloud foundry are being deployed to support enterprise work in different sectors . however , the case of supporting scientific users is more complex , first because of the heterogeneous nature of the infrastructures at the iaas level ( i.e. the resource centers ) , and secondly because of the inherent complexity of the scientific work requirements .the key point here is to find the right agreement to unify interfaces between the paas and iaas levels .for the architecture to go beyond just a theoretical implementation of tools and apis , we must include the practicalities of the computing centers in the discussion .the architecture should be capable of supporting the interaction with the resource centers via standard interfaces . herethe word standard is meant in a very wide sense including _ de jure _ as well as _ de facto _ standards .virtualization of resources is the key word in order to properly address the interface with the resource centers . in other words ,the software stack should be able to virtualize local compute , storage and networking iaas resources , providing those resources in a standardized , reliable and performing way to remote customers or to higher level federated services .the iaas layer is normally provided to scientists by large resource centers , typically engaged in well - established european e - infrastructures .the e - infrastructure management bodies or the resource centers themselves will select the components they operate .therefore , the success of any software layer in this respect is being able to be flexible enough as to interact with the most popular choices of the computer centers , without interfering , or very minimally , in the operation of their facilities . 
as a consequence , as a part of the development effort, we have analyzed a selection of the most prominent components to interface computing and storage in the resource centers , and develop the appropriate interfaces to high - level services based on standards .figure [ fig:3 ] shows a schematic view of the interrelation among those components .the paas core components will be deployed as a suite of small services using the concept of `` micro - service '' .this term refers to a software architecture style , in which complex applications are composed of small independent processes communicating with each other via lightweight mechanisms like http resource apis .the modularity of micro - services makes the approach highly desirable for architectural design of complex systems , where many developers are involved .kubernetes , an open source platform to orchestrate and manage docker containers , will be used to coordinate the micro - services in the paas .kubernetes is extremely useful for the monitoring and scaling of the services , and will ensure the reliability of all of them . in figure [ fig:4 ]we show the high - level view of the paas in which the interrelations among services are also indicated with arrows . the following list briefly describes the key components of the indigo paas : * the orchestrator : this is the core component of the paas layer .it receives high - level deployment requests from the user interface software layer , and coordinates the deployment process over the iaas platforms ; * the identity and access management ( iam ) service : it provides a layer where identities , enrolment , group membership , attributes and policies to access distributed resources and services can be managed in a homogeneous and interoperable way ; * the monitoring service : this component is in charge of collecting monitoring data from the targeted clouds , analysing and transforming them into information to be consumed by the orchestrator ; * the brokering / policy service : this is a rule - based engine that allows to manage the ranking among the resources that are available to fulfil the requested services .the orchestrator will provide the list of iaas instances and their properties to the rule engine .the rule engine will then be able to use these properties in order to choose the best site that could support the users requirements .the rule engine can be configured with different rules in order to customize the ranking ; * the qos / sla management service : it allows the handshake between a user and a site on a given sla ; moreover , it describes the qos that a specific user / group has , both over a given site or generally in the paas as a whole .this includes a priority for a given user , i.e. the capability to access different levels of qos at each site ( e.g. , gold , silver , bronze services ) ; * the qos / sla management service : it allows the handshake between a user and a site on a given sla ; moreover , it describes the qos that a specific user / group has , both over a given site or generally in the paas as a whole .this includes information about the actual service quality of storage spaces and stored files at endpoints plus the possibility to change these service qualities for stored data . 
* the managed service / application ( msa ) deployment service: it is in charge of scheduling , spawning , executing and monitoring applications and services on a distributed infrastructure ; it is implemented as a workflow programmatically created and executed by the orchestrator , as detailed in the next section . *the infrastructure manager ( i m ) : it deploys complex and customized virtual infrastructures on iaas cloud deployment providing an abstraction layer to define and provision resources in different clouds and virtualization platforms ; * the data management services : this is a collection of services that provide an abstraction layer for accessing data storage in a unified and federated way .these services will also provide the capabilities of importing data , schedule transfers of data , provide a unified view on qos and distributed data life cycle management .figure [ fig:4 ] shows also the interaction between the msa core service and the external components apache mesos , marathon and chronos .these are open - source components that have been selected after a deep analysis of the cutting edge technologies for application and container management .mesos is a smart resource manager originally conceived as research project at uc berkeley and currently used in production by the industrial sector as well .mesos abstracts cpu , memory , storage and other compute resources away from machines ( physical or virtual ) and allows sharing them across different distributed applications ( called frameworks ) .sophisticated two - level scheduling and efficient resource isolation are the key - features of this middleware that are exploited in the indigo paas , in order to run different workloads ( long - running services , batch jobs , etc ) on the same resources while preserving isolation and prioritizing their execution .the mesos cluster architecture is organized in two sets of nodes : masters , which coordinate the work , and slaves , which execute it .the master nodes are responsible for handling the resources available on the slaves and offer them to the frameworks according to specific policies ; then the frameworks are responsible for the application specific scheduling policy .this allows for more fine - tuned scheduling and dynamic partitioning of resources and for application aware scheduling .the msa service is implemented as a complex workflow managed by the orchestrator that delegates to two already available mesos frameworks for deploying containers on the iaas sites : marathon , which allows to deploy and manage long - running services , and chronos , which allows to execute jobs .the capabilities of mesos and its frameworks will be enhanced by adding crucial features like : the elasticity of the mesos cluster that will automatically shrink or expand depending on the tasks queue , as detailed in the next section , the automatic scaling of the user services that run on top of the mesos cluster , a stronger authentication mechanism based on openid connect .generally speaking , a platform as a service ( paas ) is a software suite , which is able to receive programmatic resource requests from end users , and execute these requests provisioning the resources on some e - infrastructures . in the indigo approach, the paas will deal with the instantiation of services and with application execution upon user requests relying on the concept of micro - services . 
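To illustrate how the MSA workflow can hand a long-running service to Marathon, the sketch below posts a minimal application definition to Marathon's REST API; the endpoint address, container image and resource figures are placeholders rather than values from the INDIGO deployment.

import json
import requests

MARATHON = "http://marathon.example.org:8080"   # assumed address

app = {
    "id": "/demo/web-frontend",                 # hypothetical application id
    "instances": 2,
    "cpus": 0.5,
    "mem": 512,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:stable", "network": "BRIDGE"},
    },
}

resp = requests.post(f"{MARATHON}/v2/apps",
                     data=json.dumps(app),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
print(resp.json().get("deployments"))           # Marathon tracks the rollout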
in turn, the micro - services will be managed using kubernetes , in order , for example , to select the right end - point for the deployment of applications or services .cross - site deployments will also be possible .the language in which the paas is going to receive end user requests is tosca ( topology and orchestration specification for cloud applications ) .it is an oasis specification for the interoperable description of application and infrastructure cloud services , the relationships between parts of these services , and their operational behaviour .in particular we will be using the tosca simple profile in yaml version 1.0 .tosca has been selected as the language for describing applications , due to the wide - ranging adoption of this standard , and since it can be used as the orchestration language for both opennebula ( through the i m ) and openstack ( through heat ) .the paas core provides an entry point to its functionality via the orchestrator service , which features a restful api that receives a tosca - compliant description of the application architecture to be deployed .providing such tosca - compliance enhances interoperability with existing and prospective software .users can choose between accessing the paas core directly or using a graphical user interface or simple apis .a user authenticated on the indigo platform will be able to access and customize a rich set of tosca - compliant templates through a gui - based portlet .the indigo repository will provide a catalogue of pre - configured tosca templates to be used for the deployment of a wide range of applications and services , customizable with different requirements of scalability , reliability and performance .in these templates a user can choose between two different examples of generic scenarios : * scenario a. deploy a customized virtual infrastructure starting from a tosca template that has been imported , or built from scratch ( see figure [ fig:5 ] ). the user will be able to access the deployed customized virtual infrastructure and run / administer / manage applications running on it .* scenario b. deploy a service / application whose life - cycle will be directly managed by the paas platform ( see figure [ fig:6 ] ) .the user will be returned the list of endpoints to access the deployed services . 
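As an illustration of scenario B, the sketch below submits a small TOSCA Simple Profile template to the orchestrator's REST API using an IAM-issued token; the orchestrator URL, resource path and token are placeholders, while the TOSCA snippet uses only standard Simple Profile v1.0 constructs.

import requests

TOSCA_TEMPLATE = """
tosca_definitions_version: tosca_simple_yaml_1_0
topology_template:
  node_templates:
    worker:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 4
            mem_size: 8 GB
"""

ORCHESTRATOR = "https://orchestrator.example.org"   # assumed address
TOKEN = "<iam-access-token>"                        # placeholder token

resp = requests.post(f"{ORCHESTRATOR}/deployments",  # assumed resource path
                     json={"template": TOSCA_TEMPLATE},
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())   # expected to report the deployment id / service endpoints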
in both casesthe selected template can be submitted to the paas orchestrator using its rest api endpoint .then , the orchestrator collects all the information needed to generate the deployment workflow : * health status and capabilities of the underlying iaas platforms and their resource availability from the monitoring service ; * priority list of sites sorted by the brokering / policy service on the basis of rules defined per user / group / use - case ; * qos / sla constraints from the sla management system ; * the status of the data files and storage resources needed by the service / application and managed by the data management service .this information is used to perform the matchmaking process and to decide where to deploy each service .note that the orchestrator is able to trigger the data migration function provided by the data management service component if the data location does not meet the application deployment requirements .as pointed out before , the msa is implemented as a deployment plan managed by the workflow engine provided by the orchestrator , that , starting from the tosca template that describes the deployment request , creates the workflow programmatically .the msa workflow relies on the capabilities of mesos to manage the distributed set of iaas resources : its execution is accomplished through a set of calls to the apis endpoints of the mesos cluster , whose architecture consists of one or more master nodes , and of slave nodes that register with the master and offer resources from the iaas nodes .the master node is aware of the state of the whole iaas resources , and can share and assign them to the different applications ( called frameworks in the mesos terminology ) according to specific scheduling policies .mesos provides a default scheduling algorithm , the dominant resource fairness ( drf) , but it is possible to develop custom algorithms and easily configure mesos to use them thanks to its modular ( plugin ) architecture . the automatic scaling service , based on ec3/clues , ensures the elasticity and scalability of the mesos cluster by monitoring its status . when additional computing resources ( worker nodes ) are needed , the orchestrator will be requested to deploy them on the underlying iaas matching the qos / sla , health and user / group / use - case policies selected by the broker .in the case of long - running services , the management service / application ( msa ) deployment service will use marathon ( a container orchestration platform available in the mesos framework ) to ensure that the services are always up and running .marathon is able to restart the services , migrate them if problems occur , handle their mutual dependencies and load - balancing , etc .the msa deployment service will also use chronos ( a fault tolerant scheduler available in the mesos framework ) to execute applications having input / output requirements or dependencies .it may also handle the rescheduling of failed applications , or simple workflows composed by different applications . by leveraging the mesos plugin - based architecture ,new frameworks can be developed , such as the one able to deploy a batch cluster ( e.g. 
htcondor ) on demand , in order to meet specific use - cases .for example , batch execution of lhc data analysis is often using htcondor to manage the job scheduling with respect to data management services ,some interfaces are provided to advanced users for specific data management tasks .first of all , the onedata component provides several features : a web - based interface for managing user spaces ( virtual folders ) and controlling access rights to files on a fine - grained level , a posix interface to a unified file system namespace addressing both local and remote data , caching capabilities and the possibility to map object stores such as s3 to posix filesystems .additionally , the fts-3 service will provide a web - based interface for monitoring and scheduling large data transfers .furthermore , all the standard interfaces exposed by the data management components will be accessible to users applications as well through standard protocols such as cdmi and webdav .the impact of the implementation of indigo software developments at the level of infrastructure resource providers is a key discussion to guarantee the adoption of the solutions being developed .a successful architecture should be able to provide the means to unify the interfaces between the paas layer and the core services .this is necessary , as resources sites have already their own administration software installed .a paas , like any software layer dealing with resource management , needs to be totally customizable to guarantee a good level of adoption by infrastructure providers .therefore our strategy is to focus on the most popular standards and provide well - documented common interfaces to these .examples are occi , tosca or a consistent container support for openstack and opennebula .an example in the data area is the support of cdmi for the various storage systems like dcache , storm , hpss and gpfs . a closely related goal , often being the result of the topic discussed previously , is the functional unification between different software systems .support is being introduced in the i m to be able to deploy application architectures described in tosca on the different cloud back - ends supported by the i m , including opennebula , openstack and public cloud providers . in particular , for openstack , the heat - translator project will be employed to deploy tosca - compliant architectural descriptions on openstack sites another area of development which is currently demanded by scientific communities is the introduction of `` quality of service '' and `` data life - cycle policies '' in the data area .this is the result of the various `` data management plans , dmp '' provided by data intensive communities and also required by the european commision ( ec ) when submitting proposals .one important aspect of dmps is the handling of quality of service and access control of precious and irreproducible data over time , resulting in support and manageability of those attributes at the site or storage system level . although the different types of resources are closely interlinked , we distinguish between computing , storage and network resources for organizational reasons . in the computing areathe provision of standard apis is covered by supporting occi at the lowest resource management level and tosca at the infrastructure orchestration level . within the storage area ,common access and control mechanisms are evaluated for negotiating data quality properties , e.g. 
access latency and retention policies , as well as the orchestration of data life cycles for archival .together with established standardization bodies , like rda and ogf , we envision to extend the snia cdmi protocol for our purposes . similarly for networking , we need to evaluate commonalities in the use of software defined networks ( sdn ) between different vendors of network appliances .one notably attractive concept is that all features developed at the iaas level will not only be available through the indigo paas layer , but can be utilized by users accessing the iaas layer directly .similarly , tracking of user identities is available throughout the entire execution stack .consequently , users can be monitored down to the iaas layer with the original identities they provided to portals or workflow engines when logged via the paas indigo layer .based on the scientific use cases we have considered ( see ) , we identified a set of features that have the potential to impact in a positive way the usability and easy access to the infrastructure layers . in the computing area , these features are enhanced support for containers , integration of batch systems , including access to hardware specific features like infiniband and general purpose gpus , support for trusted container repositories , introduction of spot instances and fair - share scheduling for selected cloud management frameworks ( cmf ) , as well as orchestration capabilities common to indigo selected cmfs using tosca. see figure [ fig:8 ] for a graphical representation. in certain applications , the use of ` containers ' as a lightweight alternative to hypervisor - based virtualization is becoming extremely popular , due to their significantly lower overhead .however , support in major cloud management frameworks ( cmfs ) is still under development or does not exist at all . for openstack and opennebula , the top two cmf s on the market , indigo , in collaboration with the corresponding open source communities , is spending significant efforts to make containers first - class citizens and , concerning apis and management , indistinguishable from traditional vms .while in openstack integration of nova - docker will introduce support for docker containers , for opennebula , additional developments are required .in particular , we have developed onedock , which introduces docker as an additional hypervisor for opennebula , maintaining full integration with the opennebula apis and web - based portal ( sunstone ) although cloud - like access to resources is becoming popular and cloud middleware is being widely deployed , traditional scientific data centers still provide their computational power by means of batch systems for htc and hpc . consequently , it is interesting to facilitate the integration of containers in batch systems , providing users with the ability to execute large workloads embedded inside a container . with the pressure of optimizing computer center resources but at the same time providing fair , traceable and legally reproducible services to customers ,available cloud schedulers need to be improved .therefore , we are focusing on the support of spot - instances allowing brokering resources based on slas and prices .technically this feature requires the cmf to be able to preempt active instances based on priorities . 
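Returning to the container support discussed above, a batch-system integration ultimately needs to launch a user workload inside a container. The minimal sketch below does so from Python with the Docker SDK; the image and command are placeholders for an actual application payload.

import docker

client = docker.from_env()
logs = client.containers.run(
    image="python:3.11-slim",        # placeholder application image
    command=["python", "-c", "print('hello from inside the container')"],
    remove=True,                     # clean up the container after it exits
)
print(logs.decode())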
on the other hand , to guarantee an agreed usage of compute cycles integrated over a time interval , we need to invest in the evaluation and development of fair - share schedulers integrated in cmfs .this requires a precise recording of already used cycles and the corresponding readjustment of permitted current and future usage per individual or group .the combination of both features allows resource providers to partition their resources in a dynamic way , ensuring an optimized utilization of their infrastructures .the middleware also provides local site orchestration features by adopting the tosca standard in both openstack and opennebula , with similar and comparable functionalities . finally , resource orchestration is also covered within this architecture .although this area can be managed at a higher level , we will provide compute , network and storage resource orchestration by means of the tosca language standard at the iaas level as well . as a result , both , the upper platform layer and the infrastructure user may deploy and manage complex configurations of resources more easily . while in the cloud computing area , the specification of service qualities , e.g. number and power of cpus , the amount of ram and the performance of network interfaces , is already common sense , negotiating fine grained quality of service in the storage area , in a uniquely defined way , is not offered yet . therefore , the high level objective of the storage area is to establish a standardized interface for the management of quality of services ( qos ) and data life cycle in storage ( dlc ) .users of e - infrastructures will be enabled to query and control properties of storage areas , like access latency , retention policy and migration policies with one standardized interface .a graphical representation of the components is shown in figure [ fig:9 ] . to engage scientific communities into this endeavour as early as possible ,indigo initiated a working group within the framework of the data research alliance ( rda ) , and will incorporate ideas and suggestions of that group at any stage of the project into the development of the system .as with all infrastructure services , the interface is supposed to be used by either the paas storage federation layer or by user applications utilizing the infrastructure directly . this will be pursued in a component - wise approach .development will focus on qos and interfaces for existing storage components and transfer protocols that are available at the computer centers . ideally , the storage qos component can be integrated just like another additional component into existing infrastructures . besides providing the translation layer between the management interface and the underlying storage technologies, the software stack needs to be integrated into an existing local infrastructure .the high level objective of the network area is to provide mechanisms for orchestrating local and federated network topologies . to unify the orchestration management ,we will ensure that the network part of the occi open standard can be used in indigo supported cmfs : openstack and opennebula .the indigo architecture needs to address the challenge of guaranteeing a simple and effective final usage , both for software developers and application running .a key component with a big impact on the end - user experience is the authentication and authorization meachanism employed to access the e - infrastructures . 
on the next layer ,the possibility of using user friendly end - points in the form of graphical interfaces , that user communities may tailor to their needs is a also big plus to enhance the end - user experience .we have designed a service providing user identity so that consistent authorization decisions can be enforced across distributed services .we can see an schema of the problem we intend to tackle in figure [ fig:15 ] .today users have different digital identities ( e.g. , institutional credentials , social logins , certificates ) and want the ability to leverage these identities for their scientific computing needs .so one problem that must be solved is how to map different identities to the same individual so that consistent authorization and accounting can be performed at various levels of the infrastructure .this identity harmonisation problem , which we describe in more detail in our aai architecture document , has many aspects that need to be tackled : * ability to authenticate users coming with different credentials * ability to recognize which credentials are linked to which individuals , and provide a unique identifier linked to the individual ( orthogonal to the different credentials used ) * ability to link attributes to the identity that can be used to define and enforce authorisation policies * ability to provision identity information and authorisation policies to relying services to address these points we have developed a service called _ identity access management _ ( iam ) , which provides a central solution for identity harmonisation , user authentication and authorisation .in particular , it provides a layer where identities , enrollment , group membership and other attributes management as well as authorization policies on distributed resources can be managed in a homogeneous way leveraging the supported federated authentication mechanisms ( see figure [ fig:16 ] ) .the iam service supports standard authentication mechanisms as saml , openid - connect ( oidc ) and x.509 .the user identity information collected in this way is then exposed to relying services through openid - connect interfaces . in a way, the iam acts as a credential translator for relying services , harmonizing the different identity of the users and exposing them using a single standardized protocol .this approach simplifies integration at services as it does nt force each service to understand and support each authentication mechanism used by users .openid - connect was chosen as the identity layer in indigo for the following reasons : * easier integration in services .most services today are exposed via restful apis and openid - connect fits naturally to that use case and many products that are part of the indigo stack already support an oidc integration ( e.g. 
, openstack , kubernetes ) ; * support for a dynamic computing environment : oidc naturally supports dynamic client registration so that trust can be enstabilished across services without human intervention ( but according to well - defined policies ) ; * support for offline access : being based on oauth2 , oidc naturally supports offline access in a way which is independent of the authentication mechanism used ; the iam service will represent an identity hub for indigo services , and provides standard interfaces ( scim ) for the provisioning / deprovisioning of user and group information at relying services .the iam integrates with the indigo token translation service ( tts ) to support non - http services ( ssh , amazon s3 ) and provide other needed credential translation functionality ( ie , x.509 certificate generation ) .the iam deployment model is flexible : it can be deployed centrally to accomodate a federated infrastructure and large user communities , or locally at a site to provide local identity harmonisation / attribute management services .we have developed the tools needed for the development of apis to access the indigo paas framework .it is via such apis that the paas features can be exploited via portals , desktop applications or mobile apps .therefore , the main goals of such endeavour from a high level perspective are to : * provide user friendly front - ends demonstrating the usability of the paas services ; * manage the execution of complex workflows using paas services ; * develop toolkits ( libraries ) that will allow the exploitation of paas services at the level of scientific gateways , desktop and mobile applications ; * develop an open source mobile application toolkit that will be the base for development of mobile apps ( applied , for example , to a use case provided by the climate change community ) .the architectural elements of the user interface ( see figure [ fig:7 ] ) can be described as follows : * futuregateway portal : it provides the main web front - end , enabling most of the operations on the e - infrastructure .a general - purpose instance of the portal will be available to all users .+ advanced features that can be included via portlets are : * * big data portlets for analytics workflows - aiming at supporting a key set of functionalities regarding big data and metadata management , as well as support for both interactive and batch analytics workflows .* * admin portlet - to provide the web portal administrator with a convenient set of tools to manage access of users to resources . * * workflow portlets - to show the status of the workflows being run on the infrastructure , and provide the basic operations . *futuregateway engine : it is a service intermediating the communication between e - infrastructures and the other user services developed .it incorporates many of the functionalities provided by the catania science gateway framework , extended by others specific to indigo .it exposes a simple restful api for developers building portals , mobile and desktop applications .full details can be found in . *scientific workflows systems : these are the scientific workflow management systems orchestrating data and job flow .we have selected ophidia , galaxy , loni and kepler as the ones more demanded by the user communities . 
* wfms plug - ins these are plug - ins for the scientific workflow systems that will make use of the futuregateway engine rest api , and will provide the most common set of the functionalities .these plug - ins will be called differently depending on the scientific workflow system ( modules , plug - ins , actors , components ) .* open mobile toolkit : these are libraries that make use of the futuregateway engine rest api , providing the most common set of the functionalities that can be used by multiple domain - specific mobile applications running on different platforms . foreseen libraries include support for ios and android and , if required , for windowsphone implementations . * indigo token translation service client - the token translation service client enables clients that do not support the indigo - token to use the indigo aai architecture .the main goal in providing unified data access at the paas level is providing users with seamless access to data .the challenge resides in hiding the complexities and heterogeneities between the infrastructures where the data is actually being stored .this is particularly more important in the case of federated data infrastructures such as the deployment of lhc data over the wlcg . as a matter of factdata access interoperability is currently the main challenge when it comes to federated data management . various grid and cloud infrastructures use different data management and access technologies , either open source or proprietary . although some solutions exist , such as cdmi, none is widely accepted and thus we need to define a unified api for users to access data from heterogeneous infrastructures in a coherent manner . in the case of lhc data analysis in run-2 we expect a much more heterogenous infrastructure in place , besides the wlcg grid , resources based on cloud - like provision is also being explored .therefore it becomes critical to provide a certain level of integration .the indigo paas provides three data management services that allow accessing federated data in an unified way .depending on how data are stored / accessible , they will be made available through a different services in a way which is transparent to the user ( see figure [ fig:12 ] ) . in order to access and manage data, we will exploit the interfaces provided by the infrastructure layer : * posix and webdav for data access .* gridftp for data transfer . * cdmi for the metadata management . *rest apis to expose qos features of the underline storage .onedata is a server - client type of service whose main purpose is to provide a unified federated and optimized data access based on various apis as well as legacy posix protocol .it allows transparent access to storage resources from multiple data centers simultaneously .onedata automatically detects whether data is available on local storage and can be accessed directly , or whether it has to be fetched from remote sites .support for federation is achieved by the possibility of establishing a distributed provider registry , where various infrastructures can setup their own provider registry and build trust relationship between these instances , allowing users from various platforms to share their data transparently the main dependence of onedata is on the storage providers . 
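as a rough illustration of the local - versus - remote logic described above , here is a small python sketch , purely hypothetical and not the onedata api , in which a file belonging to a data space is served directly from local storage when present and otherwise pulled into a cache by a caller - supplied transfer function ; the cache path , space and file names are invented for the example .

```python
from pathlib import Path

LOCAL_CACHE = Path("/tmp/space-cache")   # hypothetical local cache mount

def open_space_file(space: str, name: str, fetch_remote):
    """Return a local path for a file that belongs to a data space.

    If the file already sits on local storage it is used directly
    (plain POSIX access); otherwise `fetch_remote` -- a stand-in for
    whatever transfer mechanism the remote provider exposes (WebDAV,
    S3, GridFTP, ...) -- pulls it into the cache first.
    """
    local = LOCAL_CACHE / space / name
    if local.exists():
        return local
    local.parent.mkdir(parents=True, exist_ok=True)
    fetch_remote(space, name, local)
    return local

def demo_fetch(space, name, destination):
    # toy replacement for a real transfer: just writes a dummy payload
    destination.write_text(f"payload of {space}/{name}\n")

if __name__ == "__main__":
    p = open_space_file("climate-run-42", "temperature.nc", demo_fetch)
    print(p.read_text().strip())
```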
for onedata to work the storage providers needs to expose cdmi or s3 interfaces , and supporting posix access to site storage .the architecture comprises two major components : oneprovider and oneclient .the former is installed in datacenters to provide a unified view of the site filesystems .the client side connects to the providers which the user registered in onedata portal , and his spaces are automatically provisioned from these providers .if data are stored on a posix - compliant filesystem and the site admin is willing to install the onedata gateway , data can be accessed with a more powerful graphical interface , including : * acls and qos management . * posix access ( also remotely ) .* web or webdav access . * simple metadata management ( based on the concept of key - value pair ) .* data movement , replication and caching across the site of the federation . * transparent local caching of data .* transparent translation of object storage resources ( for example available through an s3 interface ) to a posix filesystem .the fts service will be exploited for the capability of managing third - party transfers among griftp servers .it will be used in order to import data from external gridftp servers , globus - online compatible services , etc .support to fts is needed in order to provide access to most wlcg storage elements .we describe here the main functionalities for completeness . for authentication ftsuses voms proxies .it optimizes transfers by monitoring the performance currently active transfers and adjusting the number of concurrent transfers on the fly .the users typically only provide file origin and destination storage elements . after submitting origin and destination to the clientthe user can monitor the progress , query the status , cancel and delete transfer requests and resubmit previously unsuccessful requests . a successful transfer from source to destination using the specified protocol ( srm / gsiftp / http / root ) can be verified by a checksum that was added during request creation .dynafed is a federated namespace of distributed storage namespaces .it provides a very fast loose coupling of storage endpoints as a single name - space exposed via http and webdav .this allows to have federation of existing storage endpoints without the need of maintaining a file catalog for global to local file name translation .the purpose of providing this service in the paas is to cover the case in which data are available only via a webdav gateway , they can be aggregated using dynafed .users will be able to read them via a federation layer regardless of where the data are really stored . to wrap things up , in the indigo paas framework, the user will provide information about the data needed to execute the desired service / application , and how those data are to be accessed , at the level of the tosca template . given the data requirements described in the template, the orchestrator will be able to understand if it has to request either fts or onedata to schedule a data import / movement , or if instead it is better to move the application `` close '' to the data . at the end of this cyclethe data will be available to the end user service exploiting onedata or dynafed .this will allow also legacy application / services to be supported with their native data access approach ( webdav or posix ) . in summarythe end user has the possibility to handle data in several ways : * asking for an import action using a tosca template exploiting fts and gridftp . 
* uploading files or directories using a web interface . *importing data from his desktop via a dropbox - like tool .the indigo architecture , based on a three level approach ( user interface , paas layer and infrastructure layer ) , is able to fulfil the requirements described in the introduction . at the infrastructure site levelthe architecture provides new scheduling algorithms for open source cloud frameworks .also very importantly , it provides dynamic partitioning of batch versus cloud resources at the site level . by implementing the cloud bursting tools available in the architecture , the site can also have access to external infrastructures .the infrastructure site is also enhanced with full support to containers , where the local containers repositories can as well be securely synchronized with dockerhub external repositories , facilitating enormously the automatic instantiation of applications . from the point of view of data ,the architecture is able to integrate local and remote posix access for all types of resources : bare metal , virtual machines or containers .in particular it provides transparent mapping of object storage to posix , and a transparent gateway to existing filesystem ( like gpfs or lustre).data access is also enhanced with the support to webdav , gridftp and cdmi access . using pre - defined available tosca templates we are also able to provide a high level of automatism for a number of scenarios .furthermore , the paas scenario has the advantage that the management of the application / services has advanced capabilities related with service resilience that make it very attractive for users .for example , in the case of failure of one the services or of an application , the platform itself will take care of restarting the service or re - executing the application . throughout this paper we have summarized the work performed at the infrastructure , platform and user interface level , which also includes the description and implementation of some typical use case scenarios , to provide more clarity as to what we want to support with this architectural construction .the plan is to deliver a first release of the platform by july 2016 , implementing the most important features to let users deploy their services and applications across a number of testbed provided by the indigo partners , and provide developers with an initial feedback . in the second indigo release , scheduled by march 2017, we plan to also integrate advanced support for features such as moving applications to cloud infrastructures , addressing cloud bursting , enhancing data services and providing additional sample templates to support use cases as they are presented by scientific communities .the release cycle , roadmap and key procedures of the indigo software are described here .`` dominant resource fairness : fair allocation of multiple resource types '' + a. ghodsi , m. zaharia , b. hindman , a. konwinski , s. shenker and i. stoica + proceedings of the 8th usenix conference on networked systems design and implementation ( nsdi 2011 ) , pp .323 - 336 usenix association , berkeley ( 2011 ) `` the dynamic federations : federate storage on the fly using http / webdav and dmlite '' + f. furano , r. rocha , o. keeble , a. alvarez and p. fuhrmann + isgc 2013 international symposium on grids and clouds ( isgc ) , 17 - 22 march 2013 .+ see pos at https://furano.web.cern.ch/furano/files/dynafedswiki/paper_dynafedsisgc.pdf `` dynamic management of virtual infrastructures . 
'' + miguel caballer , ignacio blanquer , germán moltó , and carlos de alfonso + journal of grid computing 13(1 ) : 53 - 70 ( 2015 ) + http://link.springer.com/article/10.1007/s10723-014-9296-5 .
in this paper we describe the architecture of a platform as a service ( paas ) oriented to computing and data analysis . in order to clarify the choices we made , we explain the features using practical examples , applied to several known usage patterns in the area of hep computing . the proposed architecture is devised to provide researchers with a unified view of distributed computing infrastructures , focusing on facilitating seamless access . in this respect , the platform is able to profit from the most recent developments for computing and processing large amounts of data , and to exploit current storage and preservation technologies , with the appropriate mechanisms to ensure security and privacy .
single molecule biochemical experiments provide highly detailed knowledge about the mean time between successive reaction events and hence about the reaction rates .additionally , they deliver qualitatively new information , inaccessible to bulk experiments , by measuring other reactions statistics , such as variances and autocorrelations of successive reaction times . in their turn, these quantities relate to structural properties of the reaction networks , uncovering such phenomena as internal enzyme states or multi - step nature of seemingly simple reactions , and hence starting a new chapter in the studies of the complex biochemistry that underlies cellular regulation , signaling , and metabolism . however , the bridge between the experimental data and the network properties is not trivial .since the class of exactly solvable biologically relevant models is limited , exact analytical calculations of statistical properties of reactions are impossible even for some of the simplest networks . similarly , the variational approach and other analytical approximations are of little help when the experimentally observed quantities depend on features that are difficult to approximate , such as the tails of the reaction events distributions .therefore , computer simulations are often the method of choice to explore an agreement between a presumed model and the observed experimental data .unfortunately , even the simplest biochemical simulations often face serious problems , both conceptual and practical .first , the networks usually involve _ combinatorially many _ molecular species and elementary reaction processes : for example , a single molecular receptor with modification sites can exist in states , and an even larger number of reactions connecting them .second , while it is widely known that _some _ molecules occur in the cell at very low copy numbers ( e.g. , a single copy of the dna ) , which give rise to relatively large stochastic fluctuations , it is less appreciated that the combinatorial complexity makes this true for _ almost all _ molecular species .indeed , complex systems with a large number of molecules , like in eukaryotic cells , may have small abundances of typical microscopic species if the number of the species is combinatorially large .third , and perhaps the most profound difficulty of the `` understanding - through - simulations '' approach , is that only very few of the kinetic parameters underlying the combinatorially complex , stochastic biochemical networks are experimentally observed or even observable .for example , the average rate of phosphorylation of a receptor on a particular residue can be measured , but it is hopeless to try to determine the rate for each of the individual microscopic states of the molecule determined by its modification on each of the other available sites . 
while some day computers may be able to tackle the formidable problem of modeling astronomically large biochemical networks as a series of random discrete molecular reaction events ( which will properly account for stochastic copy number fluctuations ) , and then performing sweeps through parameter spaces in search of an agreement with experiments , such powerful computers are still far away .more importantly , even if this computational ability were available , it would not help in building a comprehensible , tractable interpretation of the modeled biological processes and in identifying connections between microscopic features and macroscopic parameters of the networks .clearly , such an interpretation can be aided by simplifying the networks through coarse - graining , that is , by merging or eliminating certain nodes and/or reaction processes . ideally , as in fig .[ cascade ] , one wants to substitute a whole network of elementary ( that is , single - step , poisson - distributed ) biochemical reactions with a few complex reaction links connecting the species that survive the coarse - graining in a way that retains predictability of the system . not incidentally ,this would also help with each of the three major roadblocks mentioned above , by reducing the number of interacting elements , increasing the number of molecules in agglomerated hyperspecies , and combining multiple features into a much smaller number of effective , mesoscopic kinetic parameters .( a ) a complex network of elementary reactions connecting the _ input _ and the _ output _ nodes .note that the choice of these nodes usually distinguishes different coarse - graining schemes , and it is rather arbitrary . in our work ,the choice is determined by the adiabatic time scale separation , as described in _methods_. in principle , such networks can be coarse - grained by multiple methods . in ( b ) we illustrate the decimation procedure , where intermediate nodes with fast dynamics get eliminated successively , resulting in complex reactions connecting their immediate neighbors ; the statistics of these complex reactions are determined by the cumulant generating functions ( cgf ) , cf . _results_. other coarse - graining schemes are possible .for example , ( c ) nodes can be merged in hyper - nodes , again connected to each other by complex reactions .combinations of the strategies are also allowed .panels ( b ) and ( c ) resemble the decimation and the blocking procedures in statistical physics , and , not coincidentally , statistical physics is the field where coarse - graining has had the biggest impact and is the most developed . 
both the decimation and the blocking are in the spirit of the real - space renormalization group on an irregular lattice , and one can also think of momentum - space - like approaches as a complement . the importance of coarse - graining in biochemistry has been understood since 1913 , when the first deterministic coarse - graining mechanism , now known as the michaelis - menten ( mm ) enzyme , was proposed for the following kinetic scheme : \[ s + e \xrightleftharpoons[k_{-1}[c]]{k_{1}[e][s]} c \xrightarrow{k_2[c]} e + p . \label{mm-1} \] here k_1 , k_{-1} , and k_2 are kinetic rates , s , p , e , and c denote the substrate , the product , the enzyme , and the enzyme - substrate complex molecules , respectively , and square brackets denote concentrations . if the enzyme is scarce , [e ] \ll [s ] , then the enzyme cycles many times before [s ] and [p ] change appreciably . this allows us to simplify the enzyme - mediated dynamics by assuming that the enzymes equilibrate quickly at the current substrate concentration , resulting in a coarse - grained , complex reaction with decimated enzyme species : \[ \frac{d[p]}{dt} = \frac{k_2 [s][e]}{[s ] + ( k_2 + k_{-1} ) / k_1 } . \label{mm-rate} \] however , this simple reduction is insufficient when stochastic effects are important : each complex mm reaction consists of multiple elementary steps , thus the statistics of the number of mm reactions per unit time , in general , is non - poissonian . while some relatively successful attempts have been undertaken to extend simple deterministic coarse - graining to the stochastic domain , a general set of tools for coarse - graining large biochemical networks has not been found yet . in this article , we propose a method for a systematic , rigorous coarse - graining of stochastic biochemical networks , which can be viewed as a step towards the creation of comprehensive coarse - grained analysis tools . we start by noting that , in addition to the conceptual problems mentioned above , a technical difficulty stands in the way of stochastic simulation methods in systems biology : molecular species and reactions have very different dynamical time scales , which makes biochemical networks stiff and difficult to simulate .
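for concreteness , the deterministic mm rate of eq . ( [ mm-rate ] ) can be coded as a one - line function ; this is only a sketch , and the parameter values in the usage example are illustrative rather than taken from the text .

```python
def mm_rate(s, e_total, k1, k_m1, k2):
    """Deterministic Michaelis-Menten production rate d[P]/dt.

    s            : substrate concentration [S]
    e_total      : total enzyme concentration [E]
    k1, k_m1, k2 : elementary rate constants of eq. (mm-1)
    """
    km = (k2 + k_m1) / k1          # Michaelis constant
    return k2 * e_total * s / (s + km)

if __name__ == "__main__":
    # illustrative parameter values, not taken from the text
    for s in (0.1, 1.0, 10.0, 100.0):
        print(s, mm_rate(s, e_total=1.0, k1=1.0, k_m1=0.5, k2=2.0))
```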
herewe propose to use this property of separation of time scales to our advantage .the idea is not new , and multiple related approaches have been proposed in the literature , differing from each other mainly in the definition of fast and slow variables .a common practice is to use _ reaction rates _ to identify fast and slow reactions .however , if two species of very different typical abundances are coupled by one reaction , then a relatively small change in the concentration of the high abundance species can have a dramatic effect on that of the low abundance one .this notion of _ species_-based rather than _reaction_-based adiabaticity has been used in the original mm derivation , and it is also at the heart of our arguments .our method builds upon the stochastic path integral technique from mesoscopic physics , providing three major improvements that make the approach more applicable to biological networks .first , we extend the method , initially developed for large copy number species , to deal with simple discrete degrees of freedom , such as a single mm enzyme or a single gene .second , we explain how to apply the technique to a network of multiple reactions , thereby reducing the entire network to a single complex reaction step .finally , we show how the procedure can be turned into an efficient algorithm for simulations of coarse - grained networks , while preserving important statistical characteristics of the original dynamics .the algorithm is akin to the langevin or -leaping schemes , widely used in biochemical simulations , but it allows to simulate an entire complex reaction in a single step .we believe that this development of a fast , yet precise simulation algorithm is the most important practical contribution of this manuscript . for pedagogical reasons ,we develop the method using a model system that is simple enough for a detailed analysis , yet is complex enough to support our goals . a generalization to more complex systemsis suggested in the _ discussion_. consider the system in fig. [ system ] : an enzyme is attached to a membrane in a cell . molecules are distributed over the bulk cell volume .each molecule can either be adsorbed by the membrane , forming the species , or dissociate from it .enzyme - substrate interactions are only possible in the adsorbed state .one can easily recognize this as an extremely simplified model of receptor mediated signaling , such as in vision , or immune signaling . as usual , the enzyme - substrate complex can split either into or into .let s suppose that the latter reaction is observable ; for example , a gfp - tagged enzyme sparks each time a product molecule is created .we further suppose the reaction is not reversible ( that is , the product leaves the membrane or the reaction requires energy and is far from equilibrium ) .the full set of elementary reactions is 1 .adsorption of the bulk substrate onto the membrane , , with rate ; 2 .reemission of the substrate back into the bulk , , with rate ; 3 .michaelis - menten conversion of into , consisting of 1 .substrate - enzyme complex formation , , with rate ; 2 .complex backward decay , , with rate ; 3 .product emission , with rate .note that here and in the rest of the article , we drop the ] is the average number of membrane - bound substrates , , , , and , finally , is the time step over which changes by a relatively small amount , but many membrane reactions happen . 
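since the gillespie algorithm serves later in the text as the exact reference for this model , a minimal python sketch of it for the five elementary reactions listed above is given below . the rate symbols did not survive extraction , so the parameter names ( k_on , k_off , k1 , k_m1 , k2 ) and all numerical values are placeholders , and the bulk substrate is treated as a finite , unreplenished pool purely for simplicity .

```python
import random

def gillespie_membrane(s_bulk, t_max, k_on, k_off, k1, k_m1, k2, seed=0):
    """Exact SSA trajectory of the membrane-enzyme model.

    Species: bulk substrate, membrane-bound substrate S_m, a single
    enzyme (free, or bound in the complex C), and the product counter P.
    Returns the number of product molecules released by time t_max.
    """
    rng = random.Random(seed)
    s_m, c, p, t = 0, 0, 0, 0.0
    while True:
        a = [k_on * s_bulk,        # adsorption: bulk -> membrane
             k_off * s_m,          # reemission: membrane -> bulk
             k1 * s_m * (1 - c),   # binding: S_m + E -> C
             k_m1 * c,             # unbinding: C -> S_m + E
             k2 * c]               # catalysis: C -> E + P
        a_tot = sum(a)
        if a_tot == 0.0:
            return p
        t += rng.expovariate(a_tot)
        if t > t_max:
            return p
        r = rng.random() * a_tot
        if r < a[0]:
            s_bulk -= 1; s_m += 1
        elif r < a[0] + a[1]:
            s_m -= 1; s_bulk += 1
        elif r < a[0] + a[1] + a[2]:
            s_m -= 1; c = 1
        elif r < a[0] + a[1] + a[2] + a[3]:
            c = 0; s_m += 1
        else:
            c = 0; p += 1

if __name__ == "__main__":
    # placeholder rates and copy numbers, chosen only for illustration
    runs = [gillespie_membrane(s_bulk=200, t_max=50.0, k_on=0.05,
                               k_off=1.0, k1=0.5, k_m1=0.2, k2=2.0,
                               seed=i) for i in range(200)]
    mean = sum(runs) / len(runs)
    var = sum((x - mean) ** 2 for x in runs) / len(runs)
    print("mean products:", mean, "fano factor:", var / mean)
```

every individual adsorption , emission , binding and unbinding event is simulated explicitly here ; this per - event cost is exactly what the coarse - grained scheme of the following sections replaces by a single random draw per time step .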
by analogy to the mm reaction , eqs .( [ mm_sim1]-[mm_sim3 ] ) , the results from step 2 allow for simulations of the whole reaction scheme in one langevin - like step : where is a random variable with the cumulants as in eqs .( [ c1_formula]-[c3_formula ] ) , to be generated as in _ methods : simulations with near - gaussian distributions _ , and is an external current , such as production or decay of the bulk substrate in other cellular processes . in analyses of single molecule experiments ,one often calculates the ratio of the variance of the reaction events distribution to its mean , called the fano factor : the fano factor is zero for deterministic systems and one for a totally random process described by a poisson number of reactions . as a result , the fano factor provides a natural quantification of the importance of the stochastic effects in the studied process . in vivo , it can be measured , for example , by tagging the enzyme with a fluorescent label that emits light every time a product molecule is released .traditionally , to compare experimental data to a mathematical model , one would simulate the model using the gillespie kinetic monte carlo algorithm , which is a slow and laborious process that takes a long time to converge to the necessary accuracy ( see below ) .in contrast , our coarse - graining approximations yields an analytic expression for the fano factor of the transformation via eq .( [ fano_f ] ) .this illustrates a first practical utility of our coarse - graining approach . in fig .[ fano factor ] , we compare this analytical expression , derived under the aforementioned quasi - stationary assumption , with stochastic simulations for the full set of elementary reactions in fig .[ reduction](a ) .the results are in an excellent agreement , illustrating the power of the analytical approach .comparison of the analytically calculated fano factor for the reaction , eq .( [ fano_f ] ) , to direct monte carlo simulations with the gillespie algorithm .here we use , , , evolution time ( in arbitrary units ) . each numerical data pointwas obtained by averaging 10000 simulation runs.,width=10 ] note that the fano factor is generally different from unity , indicating a non - poissonian behavior of the complex reaction .the backwards decay of , parameterized by , adds extra randomization and thus larger values of increases the fano factor . at another extreme , when , the fano factor may be as small as 1/2 , indicating that then the entire chain can be described by the sum of two poisson events with similar rates , which halves the fano factor . finally ,when , i.e. 
, the substrates are removed from the membrane only via conversion to products , the fano factor is one .this is because here the only stochasticity in the problem arises from poisson membrane binding : on long time - scales , all bound substrates will eventually get converted to the outgoing flux .as we alluded to before , in addition to analytical results , such as the expression for the fano factor , we expect the coarse - graining approach to be particularly useful for stochastic simulations in systems biology .this is due to an essential speedup provided by the method over traditional simulation techniques , such as , in particular , the gillespie algorithm , to which most other approaches are compared too .indeed , for the model analyzed in this work , the computational complexity of a single gillespie simulation run is , where is the number of reactions in the system , and is the duration of the simulated dynamics .in contrast , the complexity of the coarse - grained approach scales as since we have removed the internal species and simulate the dynamics in steps of , instead of steps .however , since the coarse - grained approach requires generation of complicated random numbers , the actual reduction in the complexity is unclear .more importantly , the gillespie algorithm is ( statistically ) exact , while our analysis relies on quasi - stationary assumptions .therefore , to gauge the practical utility of our approach in reducing the simulation time while retaining a high accuracy , we benchmark it against the gillespie algorithm .we do this for a single mm enzyme first , and then progress to the full five reaction model of the enzyme on a membrane .details of the computer system used for the benchmarking can be found in _ methods : simulations details_. _ the michaelis - menten model : _ we consider a mm enzyme with , , , .we analyze the number of product molecules produced by this enzyme over time , with the enzyme initially in the ( stochastic ) steady state . to strain both methods, we require a very high simulation accuracy , namely convergence of the fourth moment of the product flux distribution to two significant digits . for both methods ,this means over 10 millions realizations of the same evolution ..[table_mm]comparison of the gillespie and the coarse - grained simulation algorithms .the numbers are reported for 12 million realizations of the same evolution for each of the methods . to highlight deviations from the poisson and the gaussian statistics , we provide ratios of the higher order cumulants to the mean of the product flux distribution . in the last column ,we report analytical predictions , eqs .( [ mmmean]-[mm4 ] ) , obtained from the quasi - steady state approximation to the cgf . [ cols="^,^,^,^",options="header " , ] _ the full conversion : _ finally , we perform similar benchmarking for the gillespie simulations and the coarse - grained simulations of the fully coarse - grained system , represented as a single complex reaction .the third column in tbl .[ tbl_cg_time ] presents the data for this coarse - graining level .note that representing all five reactions as a single one results in a dramatic speedup of about 4000 .this number relates to the ratio of the slow and the fast time scales in the problem , but also to the fact that futile bindings - unbindings are leaped over in the coarse - grained scheme . as discussed in detail in the original literature (the best pedagogical exposition is in ref . 
) , in the stochastic path integral formalism , a network of reactions with chemical species ( cf .[ generalization ] ) is generally described by ordinary differential equations specifying the classical ( saddle point ) solution of the corresponding path integral ._ methods : coarse - graining all membrane reactions _ provides a particular example of this technique , and we refer the interested reader the original work . here, we build on the result and focus on developing a relatively simple , yet general coarse - graining procedure for more complex reaction networks .schematic coarse - graining of a network of reactions .( a ) this network has reactions ( red arrows ) and species , of which three are slow ( large circles ) , and five are fast ( small circles ) .( b ) dynamics of each fast node can be integrated out , leaving effective , coarse - grained , pairwise fluxes among the slow nodes .the fluxes along entire pathways connecting the slow pairs ( blue arrows ) are labeled by the corresponding effective hamiltonians .note that , for reversible pathways ( in our example ) , the flux may be positive or negative ( two - sided arrow ) , and it is strictly non - negative for the irreversible pathways ( one - sided arrows ) ., width=12 ] at intermediate time scales , , many fast reactions connecting various slow variables can be considered statistically independent .therefore , in the path integral , every separate chain of reactions that connects two slow variables simply adds a separate contribution to the effective hamiltonian .namely , let s enumerate slow chemical species by .chains of fast reactions connecting them can be marked by pairs of indexes , e.g. , ( cf .fig.[generalization ] ) .an entire such chain will contribute a single effective hamiltonian term , , to the full cgf of the slow fluxes , where , , and are the set of the slow species and the conjugate , counting variables .if necessary , the geometric correction to the cgf , , can also be written out .overall , .\label{netpath}\end{gathered}\ ] ] this expression provides for the following coarse - graining procedure .first , one finds a time scale , small enough for the slow species to be considered as almost static , and yet fast enough for the fast ones to equilibrate .if the fast species consist only of a few degrees of freedom , like in the case of a single enzyme , one can derive the cgf of the transformations mediated by these species either by using techniques presented in this article ( cf ._ methods : coarse - graining the michaelis menten reaction _ ) , or discussed previously .if instead the fast species are mesoscopic , one can use the stochastic path integral technique to derive the cgf by analogy with step 2 of this article . at the next step ,these expressions for the cgfs of the fast species are incorporated into the stochastic path integral over the abundances of the slow variables .for this , one writes down the the full effective hamiltonian , eq .( [ netpath ] ) , assumes adiabatic evolution , and solves the ensuing saddle point equations .the extremum of the effective hamiltonian determines the cumulant generating function . 
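to illustrate numerically how a cumulant generating function is extracted from such an effective hamiltonian , the sketch below treats the simplest case of a single mm enzyme ( anticipating the methods section ) : the counting factor exp(chi ) is attached to the product - releasing transition , the cgf per unit time is the dominant eigenvalue of the dressed generator ( equivalently , minus the smallest - real - part eigenvalue of the hamiltonian in the convention used later in the methods ) , and the first two cumulants follow from finite differences at chi = 0 . the rate values are illustrative assumptions .

```python
import numpy as np

def cgf_rate(chi, k1_s, k_m1, k2):
    """Largest-real-part eigenvalue of the chi-dressed generator for a
    single Michaelis-Menten enzyme; in the quasi-steady-state (adiabatic)
    limit the CGF of the product count is S(chi) = lambda(chi) * t.

    The counting factor exp(chi) multiplies the product-releasing
    transition C -> E + P.
    """
    # generator in the basis (unbound, bound)
    L = np.array([[-k1_s,  k_m1 + k2 * np.exp(chi)],
                  [ k1_s, -(k_m1 + k2)]])
    return max(np.linalg.eigvals(L).real)

def cumulants(k1_s, k_m1, k2, d=1e-4):
    """First two cumulants of the product flux from numerical derivatives
    of the CGF at chi = 0."""
    lam = lambda x: cgf_rate(x, k1_s, k_m1, k2)
    c1 = (lam(d) - lam(-d)) / (2 * d)
    c2 = (lam(d) - 2 * lam(0.0) + lam(-d)) / d ** 2
    return c1, c2

if __name__ == "__main__":
    k1_s, k_m1, k2 = 2.0, 0.5, 1.0          # illustrative numbers
    c1, c2 = cumulants(k1_s, k_m1, k2)
    print("mean rate :", c1)                 # ~ k1_s*k2/(k1_s + k_m1 + k2)
    print("fano      :", c2 / c1)            # < 1: sub-poissonian turnover
```

for a full network , the single - enzyme eigenvalue would be replaced by the saddle - point value of the effective hamiltonian described above , but the recipe for reading off the cumulants is the same .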
for hierarchies of time scales, this reduction procedure is repeated at every level of the hierarchy .as biology continues to undergo the transformation from a qualitative , descriptive science to a quantitative one , it is expected that more and more rigorous analysis techniques developed in physics , chemistry , mathematics , and engineering will find suitable applications in the biological domain .this article represents one such example , where adiabatic approach , paired with the stochastic path integral formalism of mesoscopic statistical physics , allows one to coarse - grain stochastic biochemical kinetics systems . for stiff systems with a pronounced separation of time scales ,our technique eliminates relatively fast variables .it reduces stochastic networks to only the relatively slow species , coupled by complex interactions that accounts for the decimated nodes .the simplified system is smaller , non - stiff , and hence easier to analyze and simulate , resulting , in particular , in orders of magnitude improvement in the computational complexity of the simulations .thus we believe that the approach has a potential to revolutionize the field of simulations in systems biology , at least for systems with the separation of time scales .fortunately , such separation occurs more prominently in nature than one would intuitively suspect .consider for example , the system given in fig .[ cube ] , briefly mentioned in the _ introduction_. a molecule must be modified on sites in an arbitrary order to move from the inactive ( ) to the active ( ) state . the kinetic diagram for this systemis an -dimensional hypercube , and the number of states of the molecule with modified sites is .therefore , if the total number of molecules is , then a typical times modified state will have molecules in it . this number may be quite small , ensuring the need for a full stochastic analysis .more importantly , it is quite different from either or , e.g. , .as we discussed at length above , different occupancies result in the separation of time scales , and , on practice , the adiabatic approximation works quite well when this separation is a factor of only a few .a molecule must be modified on sites ( here ) in an arbitrary order to get activated . 0 and 1 indicate a non - modified /modified site , respectively .the number of states with modified sites is quite different for different s , which allows for a separation of time scales , as explained in the text.,width=7 ] in addition to the analysis and simulations , our adiabatic path integral - based coarse - graining scheme simplifies interpretation and understanding . 
for example , in certain cases , the fano factor of the complex reaction , eq .( [ fano_f ] ) , approaches unity , suggesting a simplified , yet rigorous , interpretation with the entire reaction replaced by a simple poisson step .hence the list of relevant , important parameters may be smaller than suggested by the _ ab initio _ description of the system , aiding the understanding of the involved processes and decreasing the effective number of biochemical parameters that must be measured experimentally .recent theoretical analysis suggests that this may be a universal property of biochemical networks , with larger networks having proportionally fewer relevant parameters .thus one may hope that the rigorous identification of the relevant degrees of freedom presented here will become even more powerful as larger , more realistic systems are considered .we demonstrated the strength of our coarse - graining approach in analytical calculations of the fano factor for the model system ( relevant for single molecule experiments ) , and in numerical simulations , where the method substantially decreased the computational complexity . while impressive , this is still far from being able to coarse - grain large , cellular scale reaction networks .however , we believe that some important properties of our approach suggest that it may serve as an excellent starting point .namely , * we reduce a system of stochastic differential equations to a similar number of deterministic ones , which is a substantial simplification ( see _ methods _ ) .* our method can operate with arbitrarily long series of moments of the whole probability distribution of reaction events ; i.e. , it keeps track of mesoscopic fluctuations and even of rare events . *the technique is very suitable for stiff systems , allowing to reduce the complexity by means of standard adiabatic approximations , well developed in classical and quantum physics .* the stochastic path integral approach can deal with species that have copy numbers of order unity , which are ubiquitous in biological systems .this is not true for many other coarse - graining techniques . * finally , unlike many previous approaches , the stochastic path integral is rigorous , can be justified mathematically , and allows for controlled approximations . in the forthcoming publications , we expect to show how these advantageous properties of the adiabatic path integral technique allow to coarse - grain many standard small and medium - sized biochemical networks .if during a time interval the rate of an elementary chemical reaction is ( almost ) constant , then all reaction events are independent , and their number can be approximated as a poisson variable . in its turn ,the cgf of a poisson variable is where is the poisson rate .consider the reaction , described mathematically as in eq .( [ mm-1 ] ) : {k_{1}s_{\rm m } } c\xrightarrow{k_2 } e+p .\label{mm - methods}\ ] ] the probabilities of transitions between bound ( ) and unbound ( ) states of the enzyme are given by a simple two state markov process = - \left [ \begin{array}{ll } \,\,\,\,k_1s_{\rm m } & -k_{-1}-k_2\\ -k_1s_{\rm m } & \,\,\,\ , k_{-1}+k_2 \end{array } \right ] \left [ \begin{array}{l } p_{\rm u}\\ p_{\rm b } \end{array } \right ] , \label{ev1}\ ] ] where . lets introduce the mgf for the number of transitions , here stands for a _ charge _transferred over time in a reaction , and is the mm reaction in toy model , fig . 
[ system ] .using eqs .( [ ev1 ] , [ pgf1 ] ) , one can show , that satisfies a schrdinger - like equation with a -dependent hamiltonian , leading to a formal solution where is the unit vector , is the probability vector of initial enzyme states , and .\label{hchi}\end{aligned}\ ] ] the hamiltonian , analogous to eq .( [ hchi ] ) , can be derived for a very wide class of kinetic schemes , allowing for a relatively straightforward extension of our methods .the solution , eq .( [ pdf2 ] ) , can be simplified considerably if the reaction is considered in a quasi - steady state approximation , that is is equilibrated at a current value of the other parameters .this means that the time on which the reaction is being studied , , is much larger than a characteristic time of a single enzyme turnover , , so we can consider in eq .( [ pdf2 ] ) .then only the eigenvalue of the hamiltonian with the smallest real part is relevant , and it is possible to incorporate a slow time dependence of the parameters into this answer . by analogy with the quantum mechanical berry phase , the lowest order non - adiabatic correction can be expressed as a geometric phase where , is the vector in the parameter space , which draws a contour during the parameter evolution , and and are the left and the right eigenvectors of corresponding to the instantaneous eigenvalue . the first term in eq .( [ pdf222 ] ) is the geometric phase , which is responsible for various ratchet - like fluxes .after elimination of the fast degrees of freedom , the geometric phase gives rise to magnetic field - like corrections to the evolution of the slow variables .however , since these corrections depend on time derivatives of the slow variables , they usually are small and can be disregarded , unless they break some important symmetry , such as the detailed balance , or the leading non - geometric term is zero . in our model ,the geometric effects are negligible when compared to the dominant contribution when , and we deemphasize them in most derivations. however , we keep the geometric terms in several formal expressions for completeness , and the reader should be able to track its effects if desired .reading the value of from ref . , we conclude that the number of particles converted from to over time , in the adiabatic ( mm ) limit is described by the following cgf : . \label{s3}\end{gathered}\ ] ] a probability distribution with known cumulants , , ... , can be written as a limited gram - charlier expansion , \label{edgeworth}\ ] ] where and is the gaussian density with the mean and the variance .the leading term in the series is a standard gaussian approximation , and the subsequent terms correctly account for skewness , kurtosis , etc .note that if all cumulants scale similarly , as is true for our near - gaussian case , then the terms in the series become progressively smaller , ensuring good approximations in practice . while the gram - charlier expansion provides a reasonable approximation to ,generation of random samples from such a non - gaussian distribution is still a difficult task .however , if , instead of the random numbers _ per se _ , the goal is to calculate the expectation of some function over the distribution , , then the importance sampling technique can be used . 
specifically , we generate a gaussian random number according to and define its importance factor according to its relative probability in the reference normal distribution and the desired gram - charlier approximation after generating such random numbers , , we obtain the needed expectation values as if a current random number draw represents just one reaction in a larger reaction network , then the overall importance factor of a monte carlo realization is a product of the factors for each of the random numbers drawn within it .note that the method reduces the complexity of simulations to that of a simple gaussian , langevin process with a small burden of ( a ) evaluating an algebraic expression for the gram - charlier expansion , and ( b ) keeping track of the importance factor for each of the monte carlo runs .yet , at least in principle , this small computational investment allows to account for an arbitrary number of cumulants of the involved variables . to illustrate this , in fig .[ comparison ] , we compare the gram - charlier - based , importance - sampling corrected simulations of the mm reaction flux to the exact results in _ results : step 1_. introducing just the third and the fourth cumulant makes the simulations almost indistinguishable from the exact results .comparison of an exact discrete distribution of product molecules generated by mm - enzyme ( discrete points ) , with the fit by continuous approximation by leading terms of gram - charlier series . left column compares the exact result to the gaussian approximation with the same first two cumulants .central column shows improvement of the fit due to inclusion of the third cumulant correction . including the fourth cumulant ( right column ) makes the approximation and the exact result virtually indistinguishable . for these plots, we used , , , , , and time step size ( see _ introduction : the model _ and _ results _ for explanation of the parameters ) ., width=17 ] we end this section with a note of caution : the gram - charlier series produces approximations that are not necessarily positive and hence are not , strictly speaking , probability distributions .however , the leading gaussian term decreases so fast that this may not matter in practice .indeed , in our analysis , we simply rejected any random number that had a negative importance correction , and the agreement with the analytical results was still superb .however , this simplistic solution becomes inadequate for lengthy simulations , where the probability that one of random numbers in a long chain of events falls into a badly approximated region of the distribution approaches one .then the importance factor of the entire chain of events becomes incorrect , spoiling the convergence . in these situations ,other approaches for generating random numbers should be used .a prominent candidate is the well - known acceptance - rejection method . 
since the true distributions we are interested in are near - gaussian ,a gaussian with a slightly larger variance will be an envelope function for the gram - charlier approximation to the true distribution .then the average random number acceptance probability will be similar to the ratio of the true and the envelope standard deviations , and it can be made arbitrary high .then the rejection approach will require just a bit more than one normal and one uniform random number to generate a single sample from the underlying gram - charlier expansion .the orders - of - magnitude gain due to the transition to the coarse - grained description should fully compensate for this loss .note that , in this case , the negativity of the series is not a problem since it will lead to an incorrect rejection of a single , highly improbable sample , rather than an entire sampling trajectory . to complete the coarse - graining step that connects figs .[ reduction](b ) and [ reduction](c ) , we look for the mgf of the total number of products produced over time : for this , we discretize the time into intervals of durations , and we introduce random variables ( ) , which represent the number of each of the three different reactions in fig .[ reduction](b ) ( membrane binding , unbinding , and mm conversion ) during each time interval .the probability distributions of are given by inverse fourier transforms of the corresponding mgfs : where the cgf is following , the mgf of the total number of product molecules created during time interval is given by the path integral over all possible trajectories of and : e^{i\chi_c \sum_{t_k } \delta q_{3}(t_k)}\\ \times \delta ( s_{\rm m}(t_{k+1})-s_{\rm m}(t_k ) - \delta q_{1 } ( t_k ) + \delta q_{2 } ( t_k)+\delta q_3 ( t_k ) ) .\label{path1}\end{gathered}\ ] ] here we used the fact that .the -function in the path integral expresses the conservation law for the slowly changing number of substrate molecules .we rewrite it as an inverse fourier transform , \ } } , \label{delf}\end{gathered}\ ] ] and we substitute the expression together with eq .( [ pij ] ) into eq .( [ path1 ] ) .then the integration over produces new -functions over , which , in turn , are removed by integration over .this leads to an expression for the mgf : } , \label{path4}\ ] ] where \label{hhh}\end{aligned}\ ] ] notice that , unlike in the original work on the stochastic path integral , which assumed all component reactions to be poisson , here is the cgf of the entire complex mm reaction .this we read as the coefficient in front of in eq .( [ s3 ] ) , and it is clearly non - poisson . this ability to include subsystems with small number of degrees of freedom , such as a single michaelis - menten enzyme or a stochastic gene expression , into coarse - graining mechanism based on the the stochastic path integral techniques opens doors to application of the method to a wide variety of coarse - graining problems .let and solve eq ( [ semicl ] ) .then the cumulants generating function in the quasi - steady state approximation is this formally completes the last step of the coarse - graining by deriving the cumulant generating function for the number of complex transformation over long times .english , a. furube , and p.f .barbara , single - molecule spectroscopy in oxygen - depleted polymer films , _ chem .. lett _ * 324 * , 15 ( 2000 ) .m. orrit , photon statistics in single molecule experiments , _ single mol . _* 3 * , 255 ( 2002 ) .i.v . gopich and a. 
szabo , statistics of transition in single molecule kinetics , _ j. chem .phys . _ * 118 * , 454 ( 2003 ) .english , et al . , ever - fluctuating single enzyme molecules : michaelis - menten equation revisited . _* 2 * , 87 ( 2006 ) .x. xue , f. liu , and z. ou - yang , single molecule michaelis - menten equation beyond quasistatic disorder _e _ * 74 * , 030902(r ) ( 2006 ) .s. chaudhury and b.j .cherayil , dynamic disorder in single - molecule michaelis - menten kinetics : the reaction - diffusion formalism in the wilemski - fixman approximation ._ j. chem .phys . _ * 127 * , 105103 ( 2007 ) .darvey , b.w .ninham and p.j .staff , stochastic models for second - order chemical reaction kinetics . the equilibrium state ._ j. chem .phys . _ * 45 * , 2145 ( 1966 ) .hornos , et al . , self - regulating gene : an exact solution , _ phys .e _ * 72 * , 051907 ( 2005 ) .s. iyer - biswas , f. hayot , and c. jayaprakash , transcriptional pulsing and consequent stochasticity in gene expression .preprint , http://arxiv.org/abs/0711.1141 ( 2007 ) .m. sasai and p.g .wolynes , stochastic gene expression as a many - body problem .( usa ) _ * 100 * , 2374 ( 2003 ) .y. lan , p.g .wolynes , and g.a .papoian , _ j. chem. phys . _ * 125 * , 124106 ( 2006 ) .hlavacek et al ., rules for modeling signal - transduction systems . _ sci .stke _ , 344 ( 2006 ) .berg , a model for the statistical fluctuations of protein numbers in a microbial population ._ j. theor ._ , * 71 * , 587 ( 1978 ) .h. mcadams and a. arkin , stochastic mechanisms in geneexpression _ proc .( usa ) _ * 94 * , 814 ( 1997 ) .j. paulsson and m. ehrenberg , random signal fluctuations can reduce random fluctuations in regulated components of chemical regulatory networks ._ * 84 * , 5447 ( 2000 ) .p. cluzel , m. surette and s. leibler , an ultrasensitive bacterial motor revealed by monitoring signaling proteins in single cells . _ science _ * 287 * , 1652 ( 2000 ) .m. elowitz , a. levine , e. siggia and p. swain , stochastic gene expression in a single cell ._ science _ , * 297 * , 1183 ( 2002 ) .raser , e.k .oshea , noise in gene expression : origins , consequences , and control ._ science _ * 304 * , 1811 ( 2004 ) .w. bialek and s. setayeshgar , physical limits to biochemical signaling .( usa ) _ * 102 * , 10040 ( 2005 ) .kadanoff and a. houghton , numerical evaluations of the critical properties of the two - dimensional ising model .b _ * 11 * , 377 ( 1975 ) .shang - keng ma , m. k. fung , _ statistical mechanics _( world scientific , 1984 ) .l. michaelis and m.l .menten , die kinetik der invertinwerkung .biochemische zeitschrift ._ biochem .z. _ * 49 * , 333 ( 1913 ) .m. samoilov , a.p . arkin and j. ross , signal processing by simple chemical systems ._ j. phys .a _ * 106 * , 10205 ( 2002 ) .rao and a.p .arkin , stochastic chemical kinetics and the quasi - steady - state assumption : application to the gillespie algorithm ._ j. chem .phys . _ * 118 * , 4999 ( 2003 ) .m. samoilov , s. plyasunov , and a.p .arkin , stochastic amplification and signaling in enzymatic futile cycles through noise - induced bistability with oscillations .( usa ) _ * 102 * , 2310 ( 2005 ) .i.v . gopich and a. szabo , theory of the statistics of kinetic transitions with application to single - molecule enzyme catalysis ._ j. chem .phys . _ * 124 * , 154712 ( 2006 ) .n.a . sinitsyn and i. nemenman , berry phase and pump effect in stochastic chemical kinetics ._ epl _ * 77 * , 58001 ( 2007 ) .s. pilgram et al . 
, stochastic path integral formulation of full counting statistics . _lett . _ * 90 * , 206801 ( 2003 ) .jordan , e.v .sukhorukov and s. pilgram , fluctuation statistics in networks : a stochastic path integral approach ._ j. math .phys . _ * 45 * , 4386 ( 2004 ) . v. elgart and a. kamenev , rare event statistics in reaction - diffusion systems .e _ * 70 * , 041106 ( 2004 ) .sinitsyn and i. nemenman , universal geometric theory of mesoscopic stochastic pumps and reversible ratchets .lett . _ * 99 * , 220408 ( 2007 ) .jean zinn - justin,_quantum field theory and critical phenomena _( oxford university press , usa , 2002 ) .y. cao , d.t .gillespie , and l.r .petzold , accelerated stochastic simulation of the stiff enzyme - substrate reaction ._ j. chem .* 123 * , 054104 ( 2005 ) .y. cao , d.t .gillespie , and l.r .petzold , efficient step size selection for the tau - leaping simulation method ._ j. chem .phys . _ * 124 * , 4 ( 2006 ) .p. detwiler , s. ramanathan , a. sengupta and b. shraiman .engineering aspects of enzymatic signal transduction : photoreceptors in the retina ._ biophys .j. _ * 79 * 2801 ( 2000 ) . c. wu , h.j .cha , j.j .valdes , w.e .bentley , gfp - visualized immobilized enzymes : degradation of paraoxon via organophosphorous hydrolase in a packed column ._ biotechn .bioeng . _ * 77 * , 212 ( 2001 ) .gillespie , exact stochastic simulation of coupled chemical reactions ._ j. phys .chem . _ * 81 * , 2340 ( 1977 ) .n. le novre and t.s .shimizu , stochsim : modelling of stochastic biomolecular processes ._ bioinformatics _ * 17 * , 575 ( 2001 ) .melvin lax , wei cai , and min xu,_``random processes in physics and finance '' _ ( oxford university press , usa 2006 ) . s. blinnikov and r. moessner , expansions for nearly gaussian distributions .ser . _ * 130 * , 193 ( 1998 ) .r. srinivasan , importance sampling - applications in communications and detection ( springer - verlag : berlin , 2002 ) .d. stekel and d. jenkins , strong negative self regulation of prokaryotic transcription factors increases the intrinsic noise of protein expression , _ bmc syst .* 2 * , 1 ( 2008 ) .d.a . bagrets and y.v .nazarov , full counting statistics of charge transfer in coulomb blockade systems .b _ * 67 * , 085316 ( 2003).r .gutenkunst , j. waterfall , f. casey , k brown , c. myers , j. sethna , universally sloppy parameter sensitivities in systems biology ._ plos comput .biol . _ * 3 * , e189 ( 2007 ) .e. ziv , i. nemenman , and c. wiggins , optimal information processing in small stochastic biochemical networks ._ plos one _ * 2 * , e1077 ( 2007 ) .sukhorukov , a.n .jordan , stochastic dynamics of a josephson junction threshold detector .* 98 * , 136803 ( 2007 ) .sinitsyn , reversible stochastic pump currents in interacting nanoscale conductors .b _ * 76 * , 153314 ( 2007 ) .j. ohkubo , the stochastic pump current and the non - adiabatic geometrical phase ._ j. stat ._ p02011 ( 2008 ) .astumian , adiabatic operation of a molecular machine .( usa ) _ * 104 * , 19715 ( 2007 ) .a.n . jordan and e.v .sukhorukov , transport statistics of bistable systems .lett . _ * 93 * , 260604 ( 2004 ) .j. von neumann , various techniques used in connection with random digits .monte carlo methods .bureau standards _ * 12 * , 36 ( 1951 ) .
we propose a universal approach for the analysis and fast simulation of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without loss of information about mesoscopic, non-poissonian fluctuations of the slow ones. our approach, which is similar to the born-oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the full counting statistics of reaction events (also known as the cumulant generating function). in applications with a small number of chemical reactions, this approach produces analytical expressions for the moments of chemical fluxes between slow variables. this allows for a low-dimensional, interpretable representation of the biochemical system that can be used for coarse-grained numerical simulation schemes with small computational complexity and yet high accuracy. as an example, we consider a chain of biochemical reactions, derive its coarse-grained description, and show that gillespie simulations of the original stiff system, the coarse-grained simulations, and the full analytical treatment are in agreement, while the coarse-grained simulations are three orders of magnitude faster than the gillespie analogue.
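as a usage illustration of the rejection step described earlier, the sketch below draws samples from a truncated gram-charlier expansion around a gaussian, using a gaussian with a modestly inflated standard deviation as the envelope. this is only a sketch under stated assumptions: the cumulant corrections, the inflation factor and all names are placeholders of ours rather than quantities from the text, the envelope constant is found numerically on a grid, and the (possibly non-positive) truncated series is clipped at zero, so that a rare negative excursion costs at most one extra rejection rather than an entire sampling trajectory, as noted above.

```python
import numpy as np

def gram_charlier_pdf(x, mu, sigma, c3=0.0, c4=0.0):
    """truncated gram-charlier series around a gaussian (illustrative normal form);
    c3 and c4 play the role of the skewness and excess-kurtosis corrections."""
    z = (x - mu) / sigma
    gauss = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    he3 = z**3 - 3.0 * z                      # probabilists' hermite polynomials
    he4 = z**4 - 6.0 * z**2 + 3.0
    series = 1.0 + (c3 / 6.0) * he3 + (c4 / 24.0) * he4
    return gauss * np.maximum(series, 0.0)    # clip rare negative excursions

def sample_gram_charlier(n, mu, sigma, c3, c4, inflate=1.3, rng=None):
    """rejection sampling with a wider gaussian as the envelope function."""
    rng = np.random.default_rng() if rng is None else rng
    s_env = inflate * sigma
    def envelope(x):
        return np.exp(-0.5 * ((x - mu) / s_env) ** 2) / (s_env * np.sqrt(2.0 * np.pi))
    grid = mu + s_env * np.linspace(-8.0, 8.0, 4001)
    big_m = 1.05 * np.max(gram_charlier_pdf(grid, mu, sigma, c3, c4) / envelope(grid))
    out = []
    while len(out) < n:                       # ~one normal + one uniform number per accepted sample
        x = rng.normal(mu, s_env, size=n)
        u = rng.uniform(size=n)
        keep = u * big_m * envelope(x) < gram_charlier_pdf(x, mu, sigma, c3, c4)
        out.extend(x[keep].tolist())
    return np.array(out[:n])

samples = sample_gram_charlier(10000, mu=50.0, sigma=7.0, c3=0.3, c4=0.1)
print(samples.mean(), samples.std())
```

with these placeholder numbers the empirical acceptance rate is roughly the ratio of the true and envelope widths (about 0.7 here), consistent with the estimate given in the text.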
stromatolites preserve the only macroscopic evidence of life prior to the appearance of macro-algae. the biogenicity of stromatolites older than 3.2 ga is unclear. if they are indeed biotic, they are the oldest morphological evidence for life, now that the identification of 3.3 to 3.5 ga microfossils has been challenged. here we propose a mathematical model for stromatolite morphogenesis that endorses a biotic origin for coniform stromatolites. it analyses the interaction between upward growth of a phototropic or phototactic microbial mat and mineral accretion normal to the surface of the mat. domical structures are formed when mineral accretion dominates. when vertical growth dominates, coniform structures evolve that reproduce the features of conophyton, a stromatolite that flourished in certain low-sedimentation environments for much of the proterozoic. increasing the dominance of vertical growth produces sharply-peaked conical forms, comparable to coniform stromatolites described from the 3.45 ga warrawoona group, western australia. some authors prefer to avoid a genetic definition for stromatolites, but we regard them as laminated microbialites, biomechanically and functionally analogous to lithified sessile organisms, such as colonial corals, in which living tissue is restricted to the surface. in stromatolites the living tissue is a benthic microbial community (bmc). bmcs range from communities composed of a single species to complex trophic networks of photoautotrophs, chemoautotrophs and heterotrophs in which species composition and diversity may change in response to environmental conditions. in cases where the bmc includes photoautotrophs, such as cyanobacteria, the stromatolite form represents a record of that community's response to light. there have been very few previous attempts to model stromatolite morphogenesis mathematically. verrecchia proposed a simulation model for microstromatolites in calcrete crusts using a diffusion-limited aggregation model, but this is only relevant to modelling complexly branching stromatolites. insights into the morphogenesis of simpler forms may be gained using the interface evolution equation of kardar, parisi and zhang (kpz), which contains parameters for surface-normal accretion, surface tension, and noise. grotzinger and rothman attempted to simulate stromatolite form using a modified kpz equation with explicit vertical growth.
in their model vertical growth was considered to be due to the deposition of suspended sediment , surface - normal accretion was due to chemical precipitation , surface tension effects were related to both diffusive smoothing of the settled sediment and chemical precipitation , and uncorrelated random noise represented surface heterogeneity and environmental fluctuations .their model simulated the structure of a supposed stromatolite from the cowles lake formation , canada , and this led them to conclude that some and perhaps many stromatolites may be accounted for exclusively by abiotic processes .however , that model was subsequently modified to include a biotic process , mat growth , along with mineral precipitation in the surface - normal growth parameter .as both mineral accretion and biological growth were linked in their surface - normal growth parameter , this model was unable to discriminate biotic effects .a rather different application of the kpz equation has been proposed in which the effects of microbial growth are included in the vertical growth parameter of the equation .this has been used to simulate the morphogenesis of stromatolites from marion lake , south australia .the biotic interpretation of fossil stromatolites is widely accepted , despite the fact that they rarely preserve any remains of the bmc which formed them . as a result, attention has been focussed on how biotic stromatolites might be distinguished from abiotic accretions such as tufa , speleothems , and calcrete .several proterozoic stromatolite forms grew in environments of low - sedimentation and their formation seems to have been due to the growth of a bmc , containing photosynthetic bacteria , and accretion of calcium carbonate in the resulting biofilm . for forms which lack evidence of detrital materialbeing trapped or bound by the bmc we propose a model for stromatolite morphogenesis which involves two processes only : 1 .upward growth of a phototropic or phototactic bmc , 2 .mineral accretion normal to the surface .the function represents the height of the profile above a horizontal baseline which evolves in time according to the equation the co - ordinate measures the distance along the baseline .it is also equivalently a radial co - ordinate in the baseplane for circularly symmetric profiles .we interpret as the average rate of vertical growth due to photic response of microbes and as the average rate of surface - normal growth due to mineral accretion .although non - linear , equation ( [ eqn ] ) can be solved with a change of variables using the method of characteristics and prescribed initial profiles .the choice of initial profile is important .cone - like initial profiles arise naturally in deformations of thin flat sheets . fig .1 shows examples of forms obtained from our model using initial profiles similar to those thought in field and laboratory studies to initiate coniform stromatolites . the functional form of these solutions to equation ( [ eqn ] ) is given by where with , and .the additional parameters , and , are used to tune the concavity or convexity of the flanks and the sharpness of the peak of the initial shape .the results provide possible explanations for variations in coniform stromatolite morphogenesis .when is smaller than or comparable to the result is a domical form ( figs .1a , 1b and 1c ) .coniform structures with thickened apical zones form when ( figs . 
1d , 1e and 1f ) .more angular coniform structures form when ( figs .1 g , 1h and 1i ) .the essential characterisitics of conophyton , a columnar stromatolite composed of conical laminae with thickened crests ( figs . 2 and 3 ) are apparent in figs .1d , 1e and 1f .the laminae of conophyton generally lack any evidence of trapping or binding of detrital sediment particles and the various microstructures that have been described all appear to result from a combination of bmc growth and carbonate precipitation .the form is recorded from paleoproterozoic to mezoproterozoic rocks world - wide , but becomes rare in the neoproterozoic .it flourished in extensive fields of conical columns up to 10 m high in environments characterised by very low sedimentation rates .the lack of evidence for significant sedimentation and evidence for an almost complete covering by a bmc during growth suggests that the conophyton form is determined by two factors , light and mineral accretion .the thickening of the crestal zones in figs .1d , 1e and 1f is evocative of the thickened laminae and fenestrae in the delicate crestal zone of conophyton .stromatolite fenestrae are voids in the lithified structure thought to have been left after the decay of the original bmc .a modern analogue for conophyton has been recognised in hot - springs in yellowstone national park , usa , where it has been established that they form as a result of upward growth and motility of phototactic filamentous cyanobacteria combined with precipitation of silica , and that crestal fenestrae and thickening of laminae have been related to the preferential upward growth of the constructing microbes .it has been concluded that crestal thickening of laminae and amplification of bedding irregularities are evidence for phototropic growth in stromatolites .the results of our model , equation ( [ eqn ] ) , shown in fig . 1 , together with field evidence , support the interpretation that the vertical growth parameter represents photic response of the bmc rather than sediment deposition . if the converse were true , coniform stromatolites would only form under conditions of high sedimentation which is precisely contrary to field evidence . indeed , while sediment deposition would tend towards the smoothing of surface irregularities , growth due to photic response would tend to accentuate them .our model shows that a combination of vertical phototropic or phototactic microbial growth and surface - normal mineral accretion can produce coniform forms and structures analogous to those found in both archaean and proterozoic coniform stromatolites .for example , there is a striking similarity between the model forms shown in figs .1 g , 1h and 1i and the sharply - peaked coniform stromatolites in the warrawoona group , thus supporting their biogenic origin and reinforcing the probability that photosynthetic microbes were components of archaean bmcs .the various cases modelled in figs . 1d , 1e and 1fcan all be matched in proterozoic conophytons ( figs . 
2 and 3). this sheds some light on why, after flourishing for much of the proterozoic, conophytons virtually disappeared in the neoproterozoic. this demise has been linked to evolutionary changes in bmcs, but since such changes would not have limited photic response, this explanation seems untenable. conophytons represent an effective growth strategy that is especially vulnerable to predation and competition, and their demise is best explained as an indication of the evolution of greater biological diversity in the quiet marine environments that they had dominated for so long.
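a quick numerical check of the two-process picture may also be useful. the symbols of equation ([eqn]) are elided in this excerpt, so the sketch below assumes the evolution law takes the form phi_t = lam_v + lam_h * sqrt(1 + phi_x**2), constant vertical (photic) growth plus constant surface-normal (mineral) accretion, which matches the verbal description of the two processes; grid, time step and parameter values are placeholders. the analytic route via the method of characteristics mentioned above is of course preferable; the forward march below is only a qualitative check that accretion-dominated growth rounds a cone-like initial bump into a domical form, while growth-dominated evolution leaves the profile coniform with a persistent apex.

```python
import numpy as np

def grow(phi0, lam_v, lam_h, dx, dt, steps):
    """forward-euler integration of  phi_t = lam_v + lam_h * sqrt(1 + phi_x**2)."""
    phi = phi0.copy()
    for _ in range(steps):
        phi_x = np.gradient(phi, dx)                      # centred slope estimate
        phi = phi + dt * (lam_v + lam_h * np.sqrt(1.0 + phi_x**2))
    return phi

x = np.linspace(-5.0, 5.0, 501)
dx = x[1] - x[0]
phi0 = np.maximum(0.0, 1.0 - np.abs(x))                   # low cone-like initial bump

domical  = grow(phi0, lam_v=0.2, lam_h=1.0, dx=dx, dt=1e-3, steps=2000)   # accretion dominates
coniform = grow(phi0, lam_v=1.0, lam_h=0.2, dx=dx, dt=1e-3, steps=2000)   # photic growth dominates

mid = len(x) // 2 - 1                                      # apex curvature from the second difference
print("accretion-dominated apex |phi_xx|:", abs(np.diff(domical, 2)[mid]) / dx**2)
print("growth-dominated apex |phi_xx|:   ", abs(np.diff(coniform, 2)[mid]) / dx**2)
```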
mathematical models have recently been used to cast doubt on the biotic origin of stromatolites. here, by contrast, we propose a biotic model for stromatolite morphogenesis which considers the relationship between upward growth of a phototropic or phototactic biofilm and mineral accretion normal to the surface. these processes are sufficient to account for the growth and form of many ancient stromatolites. domical stromatolites form when the vertical growth rate is less than or comparable to the accretion rate. coniform structures with thickened apical zones, typical of conophyton, form when vertical growth dominates. more angular coniform structures, similar to the stromatolites claimed as the oldest macroscopic evidence of life, form when vertical growth dominates strongly.
the presence of adhesive forces between grains can greatly alter the physical behavior of sandpiles .although the importance of intergrain adhesion has been noted in many fields , ranging from soil science to civil engineering , the understanding of the physical principles governing the link between the macroscopic behavior of sandpiles in which adhesion is present and the microscopic attractive force distribution within the sandpile remains limited .some light has been shed on this subject recently through experimental and theoretical studies which have shown that small quantities of liquid added to a sandpile comprised of rough spherical grains can cause sufficient intergrain adhesion so that the angle of repose after failure and also the maximum static angle of stability of the sandpile before failure , known as the critical angle , , greatly increase . a continuum theory that links stress criteria for the macroscopic failure of the wet pile to the cohesion between grains , which in turn was attributed to the formation of liquid menisci with radii of curvature that were determined by the surface roughness characteristics of individual grains ,has provided a satisfying quantitative explanation of the increase of with the liquid volume fraction and air - liquid interfacial tension . in this theory , each intergrain meniscusis assumed to exert the same average attractive force everywhere in the pile .this assumption appears to be valid for sufficiently wet sandpiles ; however , when the volume of the wetting fluid is small enough this theory is incapable of explaining the data .there are two principal features of the small fluid volume data that are incompatible with the continuum theory .the first is that the increase in with small amounts of wetting fluid is independent of the surface tension of that fluid .the second is that , in order to quantitatively fit the data , one must assume that a small fraction of the wetting fluid is sequestered on the grains in such a way that it does not participate in the formation of inter - grain menisci that contribute to the adhesive stresses within the pile .it is possible that , at vanishingly small fluid coverage , the physical / chemical inhomogeneities of the grains surface prevents the transport of the wetting fluid from one mensicus to another thereby allowing a wide distribution of inter - grain cohesive forces . to begin to understand the effect of such a broad distribution of inter - grain forces upon the macroscopic properties of the sandpile, we consider in this paper an extreme example of such a distribution in which some inter - grain contacts have an arbitrarily large cohesion while others have no cohesion at all .consistent with the notion that the non - uniform distribution of inter - grain cohesion is primarily significant at low fluid volumes , we study a system designed so that each grain has at most one strong cohesive contract . in order to begin to investigate how nonuniform distributions of microscopic attractive forces between grains can affect the overall macroscopic stability of sandpiles , we have measured maximum stability angles of dry sandpiles made by mixing a weight fraction of dimer grains ( two spherical grains rigidly bonded together ) into spherical monomer grains . the measured gradually increases over the entire range of , despite a moderate drop in the total packing fraction of grains , , within the pile , due to the more inefficient packing of the dimers. 
a detailed theoretical study of the failure mechanism of such piles leads us to the following key observations and conclusions , which enable us to quantitatively account for the increase in : \(i ) for piles consisting of perfectly rough particles ( large intergrain friction ) , the stability of the free surface upon tilting is limited by the particles on the surface layer , which fail by rolling out of the surface traps they sit in .\(ii ) for a given surface trap geometry , dimer particles typically remain stable up to larger tilt angles ; thus , for a mixture of monomers and dimers , pile stability is limited by the monomers on the surface , provided that the dimer concentration is not too large .\(iii ) provided that individual grains rolling out of unstable traps do not initiate avalanches , the pile will remain stable as long as the density of _ stable _ surface traps is larger than the density of monomers on the surface layer .\(iv ) the ratio of the density of monomers on the surface layer to the total density of surface traps is .\(v ) a statistical characterization of the particle - scale roughness of the surface associated with grain packing is necessary to determine quantitatively .the stability criteria for a pile consisting of a mixture of monomers and dimers with ideally rough surfaces can be cast as a purely geometrical problem under conditions where rolling - initiated surface failure is the primary mechanism that limits the stability of the pile . in this picture ,monomers and dimers on the surface layer occupy surface traps formed by the particles underneath , each of which have a different stability criterion associated with the trap s shape and orientation with respect to the average surface normal and the downhill direction . for a pile to be stable at a given tilt angle , all the grains on the surface have to be sitting in a stable surface trap , suggesting that the least stable surface traps would control the overall stability of the pile . for a pile consisting of monomers ,there are actually twice as many surface traps as surface grains , and upon very gradually increasing the average tilt of the pile without disturbing the underlying grains , one finds that surface grains in traps that become unstable upon tilting can briefly roll down the pile s surface until they encounter an unoccupied stable surface trap which ends their descent , provided that at least half the surface traps remain stable so that all surface grains can be accomodated . a detailed analysis in sec .[ seccps ] shows that it is much easier to trap dimers than monomers on a perfectly ordered close packed surface of spheres .this suggests that the stability of a random mixed pile of monomer and dimers is actually limited by the monomers on the surface , which accumulate in the most stable surface traps as is approached .if one assumes that the dimers remain essentially stable , one might expect that only a fraction of the surface traps remain stable when the pile is tilted to its maximum stability angle ; at this angle , there are just enough to accomodate all the monomers on the surface layer . a detailed experimental characterization of the positions of surface grains of random monomer and dimer piles in sec . 
[ secanalysis ] , combined with the computed stability criteria for monomers and dimers occupying surface traps in sec .[ secstability ] , provide a quantitative explanation of the measured increase in with no adjustable parameters up to , where the assumption of monomer failure begins to break down .this good agreement over a wide range of reflects the subtle interplay between the distribution of orientational and size fluctuations of the surface traps , some of which are stabilizing and some destabilizing .thus , accurate characterization of grain - scale " roughness ( as distinct from the microscopic surface roughness of the grains ) is essential to achieve quantitative agreement between theory and experiment .the important role played by grain - scale roughness is also evident in the rheology of gravity - driven chute flows , where the precise nature of the bottom surface has significant influence on the resulting flow .nevertheless , the results convincingly demonstrate that the stability of cohesionless grains with large inter - grain friction is indeed controlled by surface failure , and should otherwise be insensitive to the type of grain material .in addition to clarifying the role of surface failure in the stability of piles by connecting macroscopic measurements of stability angle to grain - scale composition , these results for a well - characterized sandpile also provide a critical link between the attractive forces within the sandpile and the nonspherical geometry of a well - known fraction of constituent grains . in this respect , these measurements provide quantitative insight into similar measurements of the angle of repose after dynamic failure of less well - controlled piles of spheres and cylinders ( e.g. peas and rice ) . from this perspective, dimers may be imagined as short elongated grains that have surface irregularities of the same order as the grain size .these irregularities promote the strong interlocking of adjacent grains , which inhibits the failure of the pile more than the typical contacts between smooth cylinders and ellipsoids .the rest of the manuscript is arranged in the following way . in sec .[ secexperiment ] , we present the experimental method for preparing the dimer and monomer grains , measuring the critical angle of stability and grain packing fraction of the pile as a function of the dimer content , and characterizing the surface configuration of monomer and dimer piles. the results of these measurements are also reported in this section .the development of the theoretical understanding of these results starts in the next section , section [ secproblem ] , in which the stability of a pile [ of spherical grains ] is posed as a geometrical problem .this section summarizes an earlier attempt to treat pile stability geometrically and presents a different and more general strategy for the solution , which can be extended to include dimer grains . using this approach , in sec .[ seccps ] , we fully solve the stability problem on a triangular close - packed surface layer for monomers and dimers . section [ secfluct ] presents a statistical analysis of the measured shape and orientation of traps on real surfaces of piles comprised solely of either monomers or dimers , and uses the method outlined in sec . [ secproblem ] to determine the corresponding solution to the stability angle . 
in sec .[ secconc ] , we summarize the main findings and insight gained from this study , as well as possible future directions .we prepare the dimer grains by bonding glass spheres of radius mm and density g/ together using methyl acrylate glue .the glue in its carrier solvent completely coats the surfaces of the glass spheres and accumulates in a contact meniscus between the spheres . after less than one day of drying, a strong shear - rigid bond between the spheres is formed .the volume of the dried glue is much smaller than the volume of the spheres , so the two bonded grains have the appearance of an ideal dimer or doublet . because the glue coats the entire surfaces of the grains and may thus alter the friction coefficient of the grains , we have likewise coated all the monomer grains with methyl acrylate so that the friction coefficient at the contact points between grains , whether monomers or dimers , is identical .the sandpiles are prepared by mixing together varying weight fractions of dimer grains into monomers , as determined by a balance .the mixtures are placed in a clear plastic box having a square bottom that is 6.5 cm wide and a height of 5.5 cm , yielding an average number of spheres per pile of about three hundred . to measure the critical angle, we employ a procedure that is identical to one that was used to study the critical angles of wet sandpiles .we tilt the box at an angle , as shown in fig .[ figpile](a ) , and shake it back and forth about five times along the direction of the lower edge ( normal to the page in the figure ) .this distributes the grains so that the surface of the pile is normal to the direction of gravity , as shown in fig .[ figpile](b ) .there is no noticeable size segregation of the grains introduced by the shaking .we have purposefully avoided tapping the pile in order to prevent a densification that could affect the pile s surface characteristics and stability .we then place the lower edge of the box on a table and slowly tilt it so that the bottom rests flat on the table and normal to the direction of gravity .if the pile fails catastrophically , then no measurement is recorded , but if the pile remains stable after only isolated movement and resettling of a few grains , then a measurement of the static angle of the pile , , is recorded , as shown in fig .[ figpile](c ) .after several trials , a rough determination of the stable angle is obtained , and thereafter the initial angles are not chosen randomly , but instead are kept close to this value . using the results of ten trials , we average the values of the three largest angles to obtain the critical angle , .this value of the critical angle is reproducible ; the variation in the three angles used to obtain the average is about ten percent for all values of .this procedure yields slightly larger angles than those found in typical angle of repose measurements in which the sandpile is induced to fail .figure [ figthetac ] depicts as a function of .we find that it increases approximately linearly , from , a well established value for a wide variety of dry spherical grains , to for a sandpile comprised completely of dimers .we have also measured the average packing volume fraction of grains , , in the sandpile as a function of by measuring the mass of water required to fill the voids in the pile as it stands in the tilted configuration shown in fig .[ figpile](a ) . 
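two small helpers mirroring the protocol just described: the critical angle is taken as the mean of the three largest stable tilt angles recorded over repeated trials, and the packing fraction follows from the mass of water needed to fill the interstitial voids. every number below is a placeholder rather than measured data, and the pile volume is assumed to be known from the box geometry.

```python
import numpy as np

def critical_angle(stable_angles_deg, n_largest=3):
    """mean of the few largest stable tilt angles recorded over repeated trials."""
    a = np.sort(np.asarray(stable_angles_deg, dtype=float))
    return a[-n_largest:].mean()

def packing_fraction(water_mass_g, pile_volume_cm3, rho_water_g_cm3=1.0):
    """grain packing fraction from the water mass that fills the interstitial voids."""
    void_volume = water_mass_g / rho_water_g_cm3
    return 1.0 - void_volume / pile_volume_cm3

trials = [21.5, 23.0, 22.1, 24.2, 23.8, 22.7, 23.5, 24.0, 22.9, 23.3]   # placeholder degrees
print(critical_angle(trials))                 # average of the three largest of ten trials
print(packing_fraction(35.0, 80.0))           # placeholder water mass and pile volume
```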
based on the relative number of grains at the surfaces compared to those within the pile, we estimate that the lowering of the local grain packing fraction due to the open or wall surfaces makes the measured .,width=288 ] , as a function of dimer weight fraction , .,width=288 ] , as a function of dimer weight fraction , .,width=288 ] appear to be about two percent smaller than the corresponding bulk value .these measurements of are plotted in fig .[ figdens ] . the overall reduction in the packing fraction of about ten percent indicates that dimer grains pack less efficiently than monomer grains . in order to experimentally characterize the surfaces of random piles of either monomer or dimer grains , we have taken stereo digital images of stable piles using top and front views in order to reconstruct the three - dimensional coordinates of the centers of all of the spheres visible on the surfaces of the pileswe do not include spheres that touch walls or the bottom surface of the container .due to the significant surface roughness of the piles , especially in the case of the dimer pile , we occasionally detect the position of a sphere that lies more than one diameter below the average surface defined by all the grains .these spheres are not true surface spheres and are eliminated from consideration in the subsequent surface trap analysis ( see sec .[ secanalysis ] . )the measured positions of the surface spheres for a stable monomer pile and a dimer pile are shown in fig .[ figpiles](a ) and [ figpiles](b ) , respectively .the monomer pile shown is very close to its critical angle of stability , whereas the dimer pile shown here , while at a much higher angle ( ) , is still somewhat below its .finally , we have qualitatively observed that the roughness of the sandpile s free surface increases somewhat as more dimers are included in the pile .this increase has been quantified in terms of larger fluctuations in the shapes and orientations of the surface traps , as presented in sec .[ secanalysis ] ..,title="fig:",width=288 ] .,title="fig:",width=288 ] 2various measurements of the angle of repose for cohesionless piles of smooth spherical particles come up with the same value of about 22 , largely independent of the makeup of the spheres or of their surface properties . on the other hand, the shape of particles in a pile has a large influence on and the slightly larger critical angle .furthermore , our granular dynamics simulations of piles , made of spheres with hertzian contacts and static friction , show that initially increases rapidly with increasing friction coefficient , although it saturates at a value of about for ( see fig . [ figthetavsmu ] ) .these observations suggest a primarily geometrical origin for the robustness of for perfectly rough spheres ( ) , which can be further studied in an idealized system in which sliding is disallowed , due either to a very large friction coefficient or to interlocking surface irregularities .the spheres in such a static pile can be classified into two groups as follows : a surface layer that consists of spheres held in place by exactly three spheres and their own weight , and interior spheres that have more than three contacts .( we do not consider the small population of rattler " spheres with three contacts that might exist in the interior of the pile , which will presumably not influence the stability of the pile . 
) the centers of the three spheres that support each sphere in the surface layer form the vertices of what we henceforth call the base triangle " associated with that sphere .as the tilt angle of the pile is increased , spheres on the surface layer can move by rolling out of the surface trap formed by the three supporting neighbors .however , interior particles ( excluding rattlers ) are held in place by a cage formed by their contacting neighbors , and if the friction coefficient is sufficiently large to preclude any sliding , they can not move until this cage is destroyed by the motion of at least one of their neighbors .this suggests that initiation of failure occurs at the surface layer , provided that sliding is disallowed .if the coefficient for rolling friction is small enough to be neglected , the stability of a sphere on the surface layer , and consequently the determination of , becomes a purely geometrical problem . for finite values of , other failure mechanisms can be expected to reduce from this surface controlled value , as observed in fig .[ figthetavsmu ] .a recent attempt at a theoretical determination of from the perspective of surface stability was made by albert and co - workers .they have considered the stability of spheres at the surface , supported by three close - packed spheres that form a base triangle , and calculated the tilt angle at which the sphere would roll out of the trap formed by the base triangle as a function of yaw " , i.e. , the relative angle of orientation of the triangle with respect to the downslope direction : this stability criterion is periodic with period due to symmetry .yaw corresponds to an orientation in which one of the edges of the base triangle is perpendicular to the downslope direction . in order to account for the randomness in the orientations of the base triangles on a disordered surface, they have suggested that the appropriate value for can be obtained by averaging over yaw ; they assumed a uniform distribution for this quantity .this yielded a value for that closely matched experimental observations . betweeen the spheres , obtained from granular dynamics simulations by extrapolating flow rates as a function of tilt angle to zero flow .details of the simulation technique can be found in refs. ., width=288 ] as already pointed out by its authors , the calculation in ref . represents a mean - field approximation , since it ignores variations in the shape and orientation of individual surface traps , which can be parametrized by their local tilt , yaw and roll angles , and the actual edge lengths of the base triangles .( for definitions of the these parameters , see the appendix . )nevertheless , their result is in good agreement with experiments .we will address the dependence of the stability of a surface trap on some of these additional parameters in more detail in section[secstability ] .there is , however , a more serious complication with the averaging approach than the neglect of fluctuation effects : the stability of the pile requires _ all _ particles on the surface to be stable . 
thus , the stability of the pile should be dictated by the particle in the _ least stable _ surface trap , and not an average stability criterionone might thus wonder why the averaging approach appears to work so well .in fact , for a pile of monomers , the number of base triangles forming potential surface traps is essentially twice the number of surface particles that actually reside in them .this is easy to see in the case of close - packed layers , where each successive layer to be placed on top has a choice among two sublattice positions ; this is what gives rise to random stacking .the relationship is actually more general .if the surface of a pile is sufficiently smooth such that an average surface normal vector to the pile can be determined ( note that this is a prerequisite to actually being able to define and measure ) , the base triangles associated with surface traps can be identified by a delaunay triangulation of the sphere centers at the surface layer , projected onto the plane of the mean pile surface . in such a triangulation ,the number of triangles per surface layer sphere is exactly two , since the sum of all the interior angles of the triangles is (no .surface particles ) .an intrinsic assumption here is that the surface layer is similar to the sublayer " , consisting of those spheres that would become part of the new surface layer if all the original surface particles were removed simultaneously .the triangulation procedure to identify the surface normal vector and all of the potential surface traps is discussed in greater detail in sec .[ secanalysis ] .this ratio of surface trap to surface sphere density indicates that in a stable pile , only half of the traps are actually filled .the pile will then find a stable configuration as long as at least half of the surface traps are stable at the given tilt angle of the pile , since surface spheres that are in unfavorable traps can roll down the slope until they find a vacant trap of sufficient stability , assuming that they do not gain enough kinetic energy to knock other particles off their traps and cause an avalanche .continuous failure of surface spheres will occur if there are never enough traps to stabilize the entire layer .this leads to the conclusion that the stability of the pile is actually determined by the _median _ stability angle of the traps , not the mean . nevertheless , as shown in sec .[ seccps ] , the quantitative difference between this criterion and that studied in ref . is small ; about 1.6 degrees .before launching a full - scale analysis of the pile stability problem for random piles having random surface grain configurations , it is instructive to consider the implications and power of this new approach to stability on a simplified system consisting of a mixture of spheres ( monomers ) and dimers sitting on a triangular close - packed lattice . 
the stability analysis leading to eq.([eqsphere ] ) can be extended to dimers .dimers sit in surface traps such that the vector connecting the centers of the two spheres forming the dimer are always parallel to one of the edges of the base triangles , thus their orientation with respect to the downhill direction can be described by the angle as well .the resulting stability angle as a function of , defined in the interval , is : the two functions defined by eqs.([eqsphere ] ) and ( [ eqdimer ] ) are plotted in fig .[ figstab](a ) .it is striking how much more stable dimers are compared to spheres for certain orientations of the traps .this is due to the more favorable position of the center of mass of the dimer , located where the two spheres meet , which makes it more difficult to roll out of the surface traps .an alternate , and perhaps more vivid , way of seeing the relative stability of dimers with respect to monomers is to plot the fraction of stable traps at a given tilt angle : in the above , is defined in terms of the density of surface traps at a given stability angle , . for this particular case ,the distribution of yaw is assumed to be uniform in the interval , corresponding to an isotropic surface geometry . the resulting plot of for monomers and dimers is shown in fig .[ figstab](b ) , and clearly demonstrates the difference in their stability .consequently , the critical angles of stability inferred from the median stability angle for monomers and dimers are : values obtained through the averaging procedure of albert _ et al._ are only slightly larger : and . .monomer and dimer curves are identical for .( b ) the fraction of stable traps at a given tilt angle corresponding to a population of monomers ( dashed line ) and dimers ( solid line ) in surface traps with a uniform yaw distribution.,width=288 ] .monomer and dimer curves are identical for .( b ) the fraction of stable traps at a given tilt angle corresponding to a population of monomers ( dashed line ) and dimers ( solid line ) in surface traps with a uniform yaw distribution.,width=288 ] for a surface layer consisting of a mixture of spheres and dimers , with a dimer volume fraction of , the stability of the surface layer will be primarily controlled by the monomers , since dimers will be stable at most locations on the surface at and do not need to be considered as surface particles for the purposes of the stability analysis .thus , for a given tilt angle , the surface layer can find a stable configuration as long as the fraction of stable traps at that angle , , exceeds , the density needed to accomodate all the monomers .the sequential filling of surface traps starting from the most stable one is somewhat analogous to the filling of an energy band in a fermionic system , with for a trap corresponding to the energy of a fermionic state . can then be interpreted as a density of states " .the monomer pile is analogous to a half - filled energy band and the addition of dimers lowers the filling fraction from 1/2 .thus , the critical stability angle is determined by the fermi energy " of the system at the given filling fraction , defined through the implicit relation this relation has been plotted as a dashed line along with experimental data for in fig .[ fignodistcomp ] .although it captures the essential features of the dependence on dimer mass fraction and provides a compelling mechanism for this effect , the results are not quantitatively comparable . 
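the band-filling rule just described reduces to a quantile computation once per-trap stability angles are available from any source (a closed-form criterion, a computed stability diagram, or measured trap geometries): the pile remains stable at a tilt as long as the fraction of traps still stable at that tilt is at least the fraction needed to seat all surface monomers. the sketch below implements only this bookkeeping; the gaussian trap ensemble and the linear filling rule used in the demo lines are placeholders of ours, since the exact expressions are not reproduced in this excerpt.

```python
import numpy as np

def theta_c_from_traps(trap_stability_deg, filling_fraction):
    """largest tilt at which the fraction of still-stable traps >= filling_fraction.

    trap_stability_deg : per-trap maximum stable tilt angles (any ensemble).
    filling_fraction   : fraction of surface traps that must stay stable
                         (1/2 for a pure monomer pile, lower once dimers are added).
    """
    a = np.asarray(trap_stability_deg, dtype=float)
    # fraction stable at tilt t is 1 - cdf(t); solve 1 - cdf(theta_c) = filling_fraction
    return np.quantile(a, 1.0 - filling_fraction)

rng = np.random.default_rng(0)
traps = rng.normal(25.0, 8.0, size=20000)        # placeholder stability-angle ensemble (degrees)

for x_d in (0.0, 0.25, 0.5, 0.75):
    filling = 0.5 * (1.0 - x_d)                  # assumed placeholder for the monomer filling fraction
    print(x_d, round(float(theta_c_from_traps(traps, filling)), 2))
```

for a pure monomer pile this reproduces the median criterion derived above; lowering the filling fraction moves the critical angle up the distribution, which is the mechanism the text identifies for the increase of the critical angle with dimer content.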
the origin of the discrepancy lies primarily in the simplifications made in characterizing the surface : fluctuations in the shape and orientation of surface traps will broaden the dos spectrum and consequently change the values obtained for . in section [ secfluct ] , we include the most relevant of such fluctuations in the analysis and compare results to available experimental data on the properties of such surfaces .agreement between theory and experiment improves significantly when such fluctuation effects are taken into account . as a function of dimer weight fraction .circles : experiment .dashed line : computation based on a randomly oriented , flat , _ close - packed _in generalizing the approach of the previous section to random surfaces , we will assume that the stability of the pile is still controlled entirely by the monomers .this assumption is likely to break down at very large dimer concentrations , so that comparison to experiments may not be appropriate in that case .however , it enables the determination of stability angles based entirely on the behavior of monomers , and avoids the extremely tedious analysis of dimer surface traps .monomer traps , on the other hand , are completely characterized by their base triangle , formed by the centers of the three supporting spheres .there are two steps that are needed to obtain the monomer dos required to compute through eq .( [ eqtheta1 ] ) .the first step is to determine the stability criterion for individual surface traps as a function of their shape and orientation .the second step is to develop an adequate statistical description of the distributions of these surface traps as functions of the shape and orientation parameters identified in the first step . for a surface trap of specified geometry , represented by its base triangle ,what is the angle to which the pile can be tilted until the trap can no longer stably support a sphere and the sphere would roll out ? in order to answer this question , we first need to quantitatively describe the geometry of the surface trap with respect to the surface of the pile .this is done in the appendix , where the yaw , roll and tilt of a base triangle are defined ( see fig .[ figgeom ] . )the determination of stability criteria as a function of shape , yaw and roll is a straightforward but tedious job .we have used a mathematica notebook to compute the stability diagram for equilateral traps as a function of normalized average edge length , yaw , and roll , where is the diameter of the spheres .the dependence of the maximum stability angle on for traps with is plotted in fig .[ figthetamax]a .it is clear that this parameter greatly influences the stability of the pile . 
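the per-trap computation can be sketched without the original mathematica notebook, under the assumptions stated above (perfectly rough grains, failure by rolling only, no rolling resistance): a unit-diameter sphere resting in a trap is taken to be stable as long as the line of gravity through its centre pierces the triangle of contact points, which for equal spheres is equivalent to piercing the base triangle of centres. the coordinate conventions, the yaw/roll/tilt axis assignments, the bisection tolerance and the demo numbers below are our own choices for illustration, not the authors' implementation; the second routine anticipates the orientational distributions introduced just below (uniform yaw, gaussian roll and tilt) and simply feeds sampled traps through the same criterion to obtain a median-type estimate of the critical angle.

```python
import numpy as np

def trap_theta_max(base_triangle, tol=1e-4):
    """maximum tilt (degrees, about the y axis, downhill = +x) at which a unit-diameter
    sphere resting on the three unit-diameter spheres whose centres form `base_triangle`
    (3x3 array, lengths in sphere diameters) stays stable under the rolling criterion:
    the gravity line through the resting sphere's centre must pierce the base triangle."""
    b = np.asarray(base_triangle, dtype=float)
    n = np.cross(b[1] - b[0], b[2] - b[0])
    n = n / np.linalg.norm(n)
    if n[2] < 0.0:
        n = -n                                            # outward (upper-side) normal
    # circumcentre of the base triangle, constrained to its plane
    mat = 2.0 * np.array([b[1] - b[0], b[2] - b[0], n])
    rhs = np.array([b[1] @ b[1] - b[0] @ b[0],
                    b[2] @ b[2] - b[0] @ b[0],
                    2.0 * n @ b[0]])
    c = np.linalg.solve(mat, rhs)
    h2 = 1.0 - np.sum((c - b[0]) ** 2)
    if h2 <= 0.0:
        return np.nan                                     # trap too wide to hold a sphere
    t_centre = c + np.sqrt(h2) * n                        # resting position of the sphere centre

    def stable(theta_deg):
        th = np.radians(theta_deg)
        g = np.array([np.sin(th), 0.0, -np.cos(th)])      # gravity after tilting the pile
        denom = g @ n
        if denom >= 0.0:
            return False                                  # gravity no longer presses into the trap
        lam = ((b[0] - t_centre) @ n) / denom
        p = t_centre + lam * g                            # where the gravity line meets the base plane
        v0, v1, v2 = b[1] - b[0], b[2] - b[0], p - b[0]   # barycentric inside-triangle test
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        den = d00 * d11 - d01 * d01
        u = (d11 * d20 - d01 * d21) / den
        v = (d00 * d21 - d01 * d20) / den
        return u >= 0.0 and v >= 0.0 and u + v <= 1.0

    lo, hi = 0.0, 90.0
    if not stable(lo):
        return 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stable(mid) else (lo, mid)
    return lo

def equilateral_trap(s, yaw_deg):
    """equilateral base triangle of edge s in the z = 0 plane; yaw = 0 puts one edge
    perpendicular to the downhill (+x) direction on the downhill side (our convention)."""
    ang = np.radians(yaw_deg + np.array([60.0, 180.0, 300.0]))
    r = s / np.sqrt(3.0)
    return np.column_stack((r * np.cos(ang), r * np.sin(ang), np.zeros(3)))

def sampled_theta_c(n_traps, s_mean, sigma_roll_deg, sigma_tilt_deg, filling=0.5, seed=1):
    """median-type critical angle for equilateral traps with uniform yaw and gaussian
    roll/tilt (our axis convention: roll about x, tilt about y)."""
    rng = np.random.default_rng(seed)
    angles = []
    for _ in range(n_traps):
        tri = equilateral_trap(s_mean, rng.uniform(0.0, 120.0))
        roll = np.radians(rng.normal(0.0, sigma_roll_deg))
        tilt = np.radians(rng.normal(0.0, sigma_tilt_deg))
        rx = np.array([[1, 0, 0], [0, np.cos(roll), -np.sin(roll)], [0, np.sin(roll), np.cos(roll)]])
        ry = np.array([[np.cos(tilt), 0, np.sin(tilt)], [0, 1, 0], [-np.sin(tilt), 0, np.cos(tilt)]])
        angles.append(trap_theta_max(tri @ rx.T @ ry.T))
    angles = np.array(angles)
    return np.quantile(angles[np.isfinite(angles)], 1.0 - filling)

for yaw in (0.0, 60.0):                                    # close-packed extremes, about 19.5 and 35.3 deg
    print(yaw, round(trap_theta_max(equilateral_trap(1.0, yaw)), 2))
s_bar = (0.7405 / 0.58) ** (1.0 / 3.0)                     # edge-length estimate from the packing fraction
print(round(float(sampled_theta_c(2000, s_bar, 6.0, 6.0)), 1))   # placeholder widths for roll and tilt
```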
in order to estimate the value of fora random packing of spheres with a given packing fraction , let us consider the tetrahedra in a delaunay tessellation of the packing .provided that the number of tetrahedra per sphere do not change for the packings of interest , the average volume of the tetrahedra will vary as .spheres on the surface layer will settle into the minima of their traps , thus the tetrahedra they from together with their three supporting spheres always have three edges whose lengths are equal to the diameter .thus , the average edge length of the faces that form the base triangle is expected to vary as .since all edge lengths are equal to for the densest packing with , the estimate for the average normalized edge length of base triangles is for the monomer pile with , this gives , in agreement with direct measurements done on the pile ( see sec .[ secanalysis ] . )[ figthetamax]b shows against and for surface traps with .( for certain values of yaw and roll , there is no tilt angle for which the traps are stable , and therefore is undefined . )the analysis in ref. is more limited in the types of surface traps it considers , as it only looks at traps with and . as seen in fig .[ figstability ] , roll and edge length have great potential impact on the stability of a surface trap .although the mathematica notebook can determine the stability diagram for the most general case , we will restrict our analysis to equilateral traps in order to keep the subsequent analysis tractable . of equilateral traps as a function of ( a ) normalized edge length for roll , andyaw ( solid line ) and ( dashed line ) ; ( b ) as a function of yaw and roll for ., width=297 ] of equilateral traps as a function of ( a ) normalized edge length for roll , and yaw ( solid line ) and ( dashed line ) ; ( b ) as a function of yaw and roll for . ,width=297 ] having characterized and obtained stability criteria for a given surface trap , the next task is to obtain a statistical description of their population , through probability density functions ( pdfs ) . in this study, we will neglect short - range correlations between adjacent traps , e.g. , associated with the sharing of edges , and assume that they are drawn independently from an ensemble described by pdfs for the values of edge lengths , yaw , roll , and tilt . for the present, we will assume that all traps are equilateral triangles with edge length , with uniformly distributed yaw angles and gaussian roll and tilt angle distributions : in the above , to the edge lengths ; we will neglect the variability in the shapes of the traps and focus on equilateral traps of uniform size in order to study the effect of orientational disorder .if desired , the subsequent analysis can be generalized to study the impact of disorder in the shapes of the traps as well .the orientational pdfs are motivated by assuming that the pile surface was created with no initial tilt , and rotationally isotropic in the plane of the surface , and that little or no rearrangement took place in the surface traps during the subsequent tilting of the pile .this would result in a uniform pdf of yaw , and nearly identical pdfs for roll and tilt ( ) . fig .[ figstability ] depicts how for a monomer pile changes as a function of change in ( a ) the trap size parameter , and ( b ) the standard deviations of roll and tilt distributions , both individually and jointly . 
from these plots ,it becomes clear that we need additional information about the grain - scale roughness of the surface in order to quantitatively predict for the monomer - dimer piles . for a monomer pile as surface propertiesare changed from a randomly oriented , flat , close - packed surface with no roll .( a ) an increase in the normalized edge length for equilateral traps stabilizes the traps and increases .( b ) individual effects of including a gaussian roll distribution ( dotted line , destabilizing ) , tilt distribution ( dashed line , stabilizing ) and the combined effect of a simultaneous roll and tilt distribution with the same standard deviation ( solid line , either stabilizing or destabilizing ) ., width=297 ] for a monomer pile as surface properties are changed from a randomly oriented , flat , close - packed surface with no roll .( a ) an increase in the normalized edge length for equilateral traps stabilizes the traps and increases .( b ) individual effects of including a gaussian roll distribution ( dotted line , destabilizing ) , tilt distribution ( dashed line , stabilizing ) and the combined effect of a simultaneous roll and tilt distribution with the same standard deviation ( solid line , either stabilizing or destabilizing ) . , width=297 ] . _left : _ monomer pile , _ right : _dimer pile.,title="fig:",width=288 ] ._ left : _ monomer pile , _ right : _ dimer pile.,title="fig:",width=288 ] ._ left : _ monomer pile , _ right : _ dimer pile.,title="fig:",width=288 ] ._ left : _ monomer pile , _ right : _ dimer pile.,title="fig:",width=288 ] ._ left : _ monomer pile , _ right : _ dimer pile.,title="fig:",width=288 ] . _left : _ monomer pile , _ right : _ dimer pile.,title="fig:",width=288 ] 2 in order to test whether real surfaces of piles exhibit the assumed behavior , and to obtain representative values for the average trap size and the width of yaw , roll and tilt distributions , we imaged a portion of the surface of a monomer and a dimer pile ( see sec . [ secexperiment ] ) .the shape and orientation of surface traps were identified as follows : after locating the centers of the particles on the surface layer by stereographic imaging , we computed the average surface of the plane by a least square fitting of the centers of mass to a plane .we then performed a delaunay triangulation of the particles projected on to this plane in order to identify all base triangles associated with potential surface traps .we then measured the yaw , roll and tilt of all the base triangles and created histograms .we observed a uniform yaw distribution within a characteristic sampling error , justifying the use of eq.([eqpyaw ] ) .we also determined the standard deviations for the roll and tilt histograms .the histograms are shown in figs .[ fighist ] for the monomer and dimer pile .the comparison between the monomer and dimer piles revealed a moderate increase of and from about to , indicating a roughening of the surface along with the originally observed reduction in packing fraction .the average edge length increased from to , in agreement with eq .( [ eqa ] ) . due to the modest changes in these parameters, we have used the trap characteristics obtained from the monomer pile in the computation of for all the mixture piles . 
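a sketch of the trap-identification pipeline described above: fit the mean surface plane to the surface-sphere centres by least squares (here through an svd), project the centres onto that plane, delaunay-triangulate the projected points so that each simplex is a candidate base triangle, and collect normalized edge lengths. extraction of yaw, roll and tilt for each triangle is omitted for brevity (it follows the appendix conventions), the point set is a jittered triangular layer rather than real data, and the sphere diameter is a placeholder of ours, since the actual grain size is elided in this excerpt.

```python
import numpy as np
from scipy.spatial import Delaunay

def surface_traps(centres, sphere_diameter):
    """candidate base triangles of surface traps from 3-d sphere-centre coordinates."""
    c = np.asarray(centres, dtype=float)
    c0 = c - c.mean(axis=0)
    _, _, vt = np.linalg.svd(c0, full_matrices=False)     # best-fit plane
    e1, e2, normal = vt[0], vt[1], vt[2]                  # normal = least-variance direction
    uv = np.column_stack((c0 @ e1, c0 @ e2))              # in-plane coordinates
    tri = Delaunay(uv)                                    # base triangles = simplices
    edges = []
    for simplex in tri.simplices:
        p = c[simplex]
        edges += [np.linalg.norm(p[0] - p[1]),
                  np.linalg.norm(p[1] - p[2]),
                  np.linalg.norm(p[2] - p[0])]
    return tri.simplices, np.array(edges) / sphere_diameter, normal

# placeholder data: a jittered triangular layer of spheres with an assumed 4 mm diameter
rng = np.random.default_rng(2)
xs, ys = np.meshgrid(np.arange(10) * 4.0, np.arange(10) * 4.0 * np.sqrt(3.0) / 2.0)
xs[1::2] += 2.0
pts = np.column_stack((xs.ravel(), ys.ravel(), rng.normal(0.0, 0.3, xs.size)))

triangles, s, n_hat = surface_traps(pts, sphere_diameter=4.0)
print(len(pts), len(triangles))                  # roughly two traps per surface sphere
print(round(float(s.mean()), 3))                 # mean normalized edge length
print(round((0.7405 / 0.58) ** (1.0 / 3.0), 3))  # eq. ([eqa]) estimate at the measured packing fraction
```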
integrating the stability diagram shown in fig .[ figthetamax]b with the pdfs given in eqs.([eqps]- [ eqptilt ] ) to obtain the appropriate through a generalized form of eq.([eqdos ] ) , we finally compute through eq.([eqfstab ] ) .the result is shown in fig .[ figthetac2 ] as a solid line , and agrees well with experiment for , particularly considering that all the parameters have been provided by independent measurement .no adjustable parameters remain in the model , suggesting that the assumption of monomer failure at the surface is valid in this range .the disagreement at larger is not surprising , given that the pile is comprised almost entirely of dimers and the assumption of monomer failure in the theory is expected to break down in this limit , resulting in an over - estimate of the stability of the pile . as a function of dimer weight fraction .circles : experiment .solid line : computation based on the statistical description of surface traps given by eqs.([eqps])-([eqptilt ] ) . dashed line : comparison to the earlier result based on a randomly oriented , flat , close - packed surface , reproduced from fig . [ fignodistcomp].,width=288 ]by introducing dimer grains in a sandpile comprised of rough spherical monomer grains , we have shown that the critical angle of the sandpile can be nearly doubled in the limit of high dimer content .qualitatively , this result is not so surprising , given the significant number of previous measurements of the angle of repose of mixtures of cylindrical or spheri - cylindrical grains with spherical grains that have also shown an increase .however , the use of dimers , rather than other elongated objects , permits the random surface of the pile to be described as a collection of spheres that form triangular surface traps .these triangles have a distribution of edge lengths and yaw , roll , and tilt angles that can be directly obtained through stereo imaging and can be included in a theory . by comparing the macroscopic average property of the pile , the critical angle , with the grain - scale structure on the pile s surface obtained through imaging , we are able to show which aspects of the surface structure are important for determining the value of the critical angle . for instance , the treatment of the critical angle of a random pile that considers only the mean angle of stability for a grain on a close packed surface , averaged over the yaw angle , may give a value close to the measured , but is this just a fortuitous agreement ?our results show that , by averaging over realistic distributions of yaw and tilt , the more realistic median critical angle drops below the observed .however , because the pile is random , the intergrain separation has a distribution itself , and the average edge length of a triangular surface trap is slightly greater than the grain diameter .this increase in the edge length of the surface trap , as compared to a perfectly close packed surface , increases the stability of a sphere in the trap .indeed , we believe that it is the combination of the destabilizing influence of the roll distribution , along with the stabilizing influence of the larger edge length that gives the random pile of rough spherical grains its rather well - established value of . these results for mixed monomer - dimer sandpiles shed some light on the observed initial increase in the critical angle of wet sandpiles , independent of container size and liquid surface tension , when liquid content is below a threshold value.. 
the initial linear increase in with may provide a plausible mechanism , in which the fraction of strongly wetted intergrain contacts increases gradually until all intergrain contacts are nearly uniformly wetted .provided that the formed bonds are strong enough and relatively dilute , the wet pile can be expected to respond similarly to a pile with a small fraction of dimers .one would expect that clusters with grains having more than one cohesive contact with neighboring grains would form in the wet sandpile as the threshold volume fraction is approached , and , above the threshold volume fraction , the picture of an average cohesive force holding grains together everywhere in the pile would become tenable . the data in refs . and suggest an equivalent of about 0.12 at the threshold liquid volume fraction. it may be possible to extend the presented work to systems involving trimers and higher order clusters of grains .however , such clusters can have many different shapes , and to simplify the theoretical treatment , it may be necessary to restrict allowed shapes to close - packed or linear structures . along a different direction , reducing the grain - grain friction coefficient will allow sliding failure modes and thereby lower the critical angle of the sandpile .densification of the pile through tapping might also change the angles of stability . finally , developing a theoretical understanding of the reduction of the grain packing fraction with increasing dimer content would help shed light on how strong intergrain attractive forcescan alter the bulk structure of a random pile .we thank p. chaikin , z. cheng , g. grest and p. schiffer for stimulating discussions and suggestions .ajl was supported in part by the national science foundation under award dmr-9870785 .in this appendix , we define the parameters that describe the shape and orientation of a base triangle that connects the centers of mass of the three supporting spheres that form a surface trap .the geometry is shown in fig .[ figgeom ] .the coordinate system is fixed such that gravity is in the direction , and the pile , whose mean surface is initially in the , is tilted " by rotating it around the . since overall translation of the triangle has no effect on trap stability , the vertex across the shortest edge ( with length ) has been arbitrarily placed on the for ease of illustration .the base triangle can be fully specified by the positions of the two remaining vertices relative to the first one ; this leaves six parameters to be determined .a more useful parameterization than the relative positions of the vertices can be given as follows ( see fig.[figgeom ] ) : the shape of the base triangle is characterized by the lengths of its edges .the two remaining edge lengths can be unambiguously labeled as and anticlockwise around the triangle when observed from a viewpoint above ( at large ) .this leaves three angles that determine the orientation .the plane in which the base triangle resides is described by _ and _ tilt _ .similarly , the orientation of the triangle in the plane with respect to the downhill direction is described by the _ yaw _ , as depicted in fig .[ figgeom] . given these parameters ,the original base triangle can be reconstructed ( modulo translations ) as follows : place a triangle with given edge lengths in the such that the shortest edge is parallel to the and downhill " from the vertex across it , i.e. , the of the vertex is larger . 
then , rotate the triangle around the axis by , axis by and finally , by . with the proper labeling of vertices as described above, every triangle is uniquely identified except for degenerate cases ( isosceles and equilateral triangles ) , in which case the stability criteria are identical and the particular choice of angles is immaterial .\(i ) the shapes and orientations of surface traps are very likely to be statistically independent of each other , and therefore they will have independent probability distributions . splitting the parameters that describe these two attributes avoids dealing with joint probability distributions across these two classes of parameters .\(ii ) tilting the pile does not change the yaw and roll of a surface trap .thus , a `` stability interval '' $ ] can be defined for a surface trap of given yaw and roll , corresponding to all the values of tilt for which the trap can stably support a surface particle . r. l. brown and j. c. richards , _ principles of powder mechanics _ , pergamon press ( oxford , 1970 ) a. schofield and p. wroth , _ critical state soil mechanics _ , mcgraw - hill ( maidenhead , 1968 ) .d. j. hornbaker , r. albert , i. albert , a .-barabsi , and p. schiffer , nature ( london ) * 387 * , 765 ( 1997 ) . t. g. mason , a. j. levine , d. erta , and t. c. halsey , phys .e * 60 * , r5044 ( 1999 ) .t. c. halsey and a. j. levine , phys . rev .lett . * 80 * , 3141 ( 1998 ) .p. tegzes , r. albert , m. paskvan , a .-barabsi , t. vicsek and p. schiffer , phys .e * 60 * , 5823 ( 1999 ) .d. erta , g. s. grest , t. c. halsey , d. levine , and l. e. silbert , cond - mat/0005051 , europhys . lett .( in press ) .l. e. silbert , d. erta , g. s. grest , t. c. halsey , and d. levine , cond - mat/0105071 , phys .e. ( in press ) .r. albert , i. albert , d. hornbaker , p. schiffer and a .-barabsi , phys .e * 56 * , r6271 ( 1997 ) .j. b. knight , c. g. fandrich , n. l. chun , h. m. jaeger , s. r. nagel , phys .e * 51 * , 3957 ( 1995 ) .v. frette , k. christensen , a. malthe - srenssen , j. feder , t. jssang and p. meakin , nature * 379 * , 49 ( 1996 ) .this value was erroneously reported in ref . as .the distributions of roll and tilt would not be identical since the two finite rotations do not commute .however , the difference is only third order in the angle and quite small for reasonably narrow distributions . c. j. olson , c. reichhardt , m. mccloskey and r. j. zieve , cond - mat/0011508 ( unpublished ) .we have borrowed aviation / navigation terminology to name the three angles , roll , yaw and tilt , that describe the orientation of the base triangle .
We measure how strong, localized contact adhesion between grains affects the maximum static critical angle of a dry sandpile. By mixing dimer grains, each consisting of two spheres that have been rigidly bonded together, with simple spherical monomer grains, we create sandpiles that contain strong localized adhesion between a given particle and at most one of its neighbors. We find that it increases from 0.45 to 1.1 and the grain packing fraction decreases from 0.58 to 0.52 as we increase the relative number fraction of dimer particles in the pile from 0 to 1. We attribute the increase to the enhanced stability of dimers on the surface, which reduces the density of monomers that need to be accommodated in the most stable surface traps. A full characterization and geometrical stability analysis of surface traps provides good quantitative agreement between experiment and theory over a wide range of dimer fractions, without any fitting parameters.
The scale of resources and computations required for executing large-scale biological jobs is increasing significantly. With this increase, the resultant number of failures while running these jobs will also increase and the time between failures will decrease. It is not desirable to have to restart a job from the beginning if it has been executing for hours, days or months. A key challenge in maintaining the seamless (or near seamless) execution of such jobs in the event of failures is addressed under research in fault tolerance. Many jobs rely on fault tolerant approaches that are implemented in the middleware supporting the job (for example). The conventional fault tolerant mechanism supported by the middleware is checkpointing, which involves the periodic recording of intermediate states of execution of a job to which execution can be returned if a fault occurs. Such traditional fault tolerant mechanisms, however, are challenged by drawbacks such as single point failures, lack of scalability and communication overheads, which pose constraints in achieving efficient fault tolerance when applied to high-performance computing systems. Moreover, many of the traditional fault tolerant mechanisms are manual methods and require human administrator intervention for isolating recurring faults. This places a cost on the time required for maintenance. Self-managing or automated fault tolerant approaches are therefore desirable, and the objective of the research reported in this paper is the development of such approaches. If a failure is likely to occur on a computing core on which a job is being executed, then it is necessary to be able to move (migrate) the job onto a reliable core. Such mechanisms are not readily available. At the heart of this concept is mobility, and a technique that can be employed to achieve it is the use of multi-agent technologies. Two approaches are proposed and implemented as the means of achieving both the computation in the job and self-managing fault tolerance: firstly, an approach incorporating agent intelligence, and secondly, an approach incorporating core intelligence. In the first approach, automated fault tolerance is achieved by a collection of agents which can freely traverse a network of computing cores. Each agent carries a portion of the job (or sub-job) to be executed on a computing core in the form of a payload. Fault tolerance in this context can be achieved since an agent can move on the network of cores, effectively moving a sub-job from one computing core which may fail onto another, reliable core. In the second approach, automated fault tolerance is achieved by considering the computing cores to be an intelligent network of cores. Sub-jobs are scheduled onto the cores, and the cores can move processes executing on them across the network of cores. Fault tolerance in this context can be achieved since a core can migrate a process executing on it onto another core. A third approach is proposed which combines both agent and core intelligence under a single umbrella.
in this approach ,a collection of agents freely traverse on a network of virtual cores which are an abstraction of the actual hardware cores .the agents carry the sub - jobs as a payload and situate themselves on the virtual cores .fault tolerance is achieved either by an agent moving off one core onto another core or the core moving an agent onto another core when a fault is predicted .rules are considered to decide whether an agent or a core should initiate the move .automated fault tolerance can be beneficial in areas such as molecular dynamics .typical molecular dynamics simulations explore the properties of molecules in gaseous , liquid and solid states .for example , the motion of molecules over a time period can be computed by employing newton s equations if the molecules are treated as point masses .these simulations require large numbers of computing cores that run sub - jobs of the simulation which communicate with each other for hours , days and even months .it is not desirable to restart an entire simulation or to loose any data from previous numerical computations when a failure occurs .conventional methods like periodic checkpointing keep track of the state of the sub - jobs executed on the cores , and helps in restarting a job from the last checkpoint . however , overzealous periodic checkpointing over a prolonged period of time has large overheads and contributes to the slowdown of the entire simulation .additionally , mechanisms will be required to store and handle large data produced by the checkpointing strategy .further , how wide the failure can impact the simulation is not considered in checkpointing .for example , the entire simulation is taken back to a previous state irrespective of whether the sub - jobs running on a core depend or do not depend on other sub - jobs .one potential solution to mitigate the drawbacks of checkpointing is to proactively probe the core for failures .if a core is likely to fail , then the sub - job executing on the core is migrated automatically onto another core that is less likely to fail .this paper proposes and experimentally evaluates multi - agent approaches to realising this automation .genome searching is considered as an example for implementing the multi - agent approaches .the results indicate the feasibility of the multi - agent approaches ; they require only one - fifth the time compared to that required by manual approaches .the remainder of this paper is organised as follows .the methods section presents the three approaches proposed for automated fault tolerance .the results section highlights the experimental study and the results obtained from it .the discussion section presents a discussion on the three approaches for automating fault tolerance .the conclusions section summarises the key results from this study .three approaches to automate fault tolerance are presented in this section .the first approach incorporates agent intelligence , the second approach incorporates core intelligence , and in the third a hybrid of both agent and core intelligence is incorporated . a job , , which needs to be executed on a large - scale system is decomposed into a set of sub - jobs .each sub - job is mapped onto agents that carry the sub - jobs as payloads onto the cores , as shown in figure 1 . 
the agents and the sub - job are independent of each other ; in other words , an agent acts as a wrapper around a sub - job to situate the sub - job on a core .there are three computational requirements of the agent to achieve successful execution of the job : ( a ) the agent needs to know the overall job , , that needs to be achieved , ( b ) the agent needs to access data required by the sub - job it is carrying and ( c ) the agent needs to know the operation that the sub - job needs to perform on the data .the agents then displace across the cores to compute the sub - jobs .intelligence of an agent can be useful in at least four important ways for achieving fault tolerance while a sub - job is executed .firstly , an agent knows the landscape in which it is located .knowledge of the landscape is threefold which includes ( a ) the knowledge of the computing core on which the agent is located , ( b ) knowledge of other computing cores in the vicinity of the agent and ( c ) knowledge of agents located in the vicinity .secondly , an agent identifies a location to situate within the landscape .this is possible by gathering information from the vicinity using probing processes and is required when the computing core on which the agent is located is anticipated to fail .thirdly , an agent predicts failures that are likely to impair its functioning .the prediction of failures ( for example , due to the failure of the computing core ) is along similar lines to proactive fault tolerance .fourthly , an agent is mobile within the landscape . if the agent predicts a failure then the agent can relocate onto another computing core thereby moving off the job from the core anticipated to fail ( refer figure 2 ) . and are situated on cores and respectively .a failure is predicted on core .the agent moves onto core .,scaledwidth=40.0% ] the intelligence of agents is incorporated within the following sequence of steps that describes an approach for fault tolerance : ' '' '' _ agent intelligence based fault tolerance _ + ' '' '' 1 . decompose a job , , to be executed on the landscape into sub - jobs , 2 . each sub - job provided as a payload to agents , 3 .agents carry jobs onto computing cores , 4 . for each agent , located on computing core , where to 1 .periodically probe the computing core 2 .if predicted to fail , then 1. agent , moves onto an adjacent computing core , 2 .notify dependent agents 3 .agent establishes dependencies 5 .collate execution results from sub - jobs ' '' '' a failure scenario is considered for the agent intelligence based fault tolerance concept . in this scenario , while a job is executed on a computing core that is anticipated to fail any adjacent core onto which the job needs to be reallocated can also fail .the communication sequence shown in figure 3 is as follows .the hardware probing process on the core anticipating failure , notifies the failure prediction to the agent process , , situated on it .since the failure of a core adjacent to the core predicted to fail is possible it is necessary that the predictions of the hardware probing processes on the adjacent cores be requested .once the predictions are gathered , the agent process , , creates a new process on an adjacent core and transfers data it was using onto the newly created process .then the input dependent ( ) and output dependent ( ) processes are notified .the agent process on is terminated thereafter . 
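The agent-side control flow described above can be summarised in a short sketch. The following C++ fragment is only an illustration under assumed interfaces: the probing, neighbour-selection and migration steps are hypothetical placeholders (failure prediction is simulated with a random draw), and it does not reproduce the Open MPI based implementation used in the paper.

// Minimal sketch of the agent's monitor-predict-migrate loop. All names are
// illustrative; a real agent would transfer its payload state, notify its
// input/output dependent agents and re-establish dependencies when it moves.
#include <cstdio>
#include <random>
#include <vector>

struct Core { int id; };

static std::mt19937 rng(42);

// Stand-in for the hardware probing process on a core.
bool probe_predicts_failure(const Core& core) {
    (void)core;
    return std::bernoulli_distribution(0.05)(rng);   // 5% per probe, purely illustrative
}

// Ask the probes of neighbouring cores and pick one that is not expected to fail.
Core pick_reliable_neighbour(const std::vector<Core>& neighbours) {
    for (const Core& n : neighbours)
        if (!probe_predicts_failure(n)) return n;
    return neighbours.front();                       // fall back to the first neighbour
}

int main() {
    Core current{0};
    std::vector<Core> neighbours{{1}, {2}, {3}};
    int remaining_slices = 20;                       // the sub-job, split into work slices

    while (remaining_slices > 0) {
        --remaining_slices;                          // execute one slice of the payload
        if (probe_predicts_failure(current)) {
            Core target = pick_reliable_neighbour(neighbours);
            std::printf("failure predicted on core %d -> agent moves to core %d\n",
                        current.id, target.id);
            current = target;                        // migrate: move payload and dependencies
        }
    }
    std::printf("sub-job completed on core %d\n", current.id);
    return 0;
}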
the new agent process on the adjacent core establishes dependencies with the input and output dependent processes . a job , , which needs to be executed on a large - scale system is decomposed into a set of sub - jobs .each sub - job is mapped onto the virtual cores , , an abstraction over respectively as shown in figure 4 .the cores referred to in this approach are virtual cores which are an abstraction over the hardware computing cores .the virtual cores are a logical representation and may incorporate rules to achieve intelligent behaviour . and are situated on virtual cores and respectively .a failure is predicted on core and moves the job onto virtual core .,scaledwidth=40.0% ] intelligence of a core is useful in a number of ways for achieving fault tolerance .firstly , a core updates knowledge of its surrounding by monitoring adjacent neighbours .independent of what the cores are executing , the cores can monitor each other .each core can ask the question ` are you alive ? ' to its neighbours and gain information .secondly , a core periodically updates information of its surrounding .this is useful for the core to know which neighbouring cores can execute a job if it fails .thirdly , a core periodically monitors itself using a hardware probing process and predicts if a failure is likely to occur on it .fourthly , a core can move a job executing on it onto an adjacent core if a failure is expected and adjust to failure as shown in figure 4 .once a job has relocated all data dependencies will need to be re - established .the following sequence of steps describe an approach for fault tolerance incorporating core intelligence : ' '' '' _ core intelligence based fault tolerance _+ ' '' '' 1 .decompose a job , , to be executed on the landscape into sub - jobs , 2 .each sub - job allocated to cores , 3 . for each core , , where to until sub - job completes execution 1 .periodically probe the computing core 2 .if predicted to fail , then 1 .migrate sub - job on onto an adjacent computing core , 4 .collate execution results from sub - jobs ' '' '' figure 5 shows the communication sequence of the core failure scenario considered for the core intelligence based fault tolerance concept . the hardware probing process on the core predicted to fail , notifies a predicted failure to the core .the job executed on is then migrated onto an adjacent core once a decision based on failure predictions are received from the hardware probing processes of adjacent cores .the hybrid approach acts as an umbrella bringing together the concepts of agent intelligence and core intelligence .the key concept of the hybrid approach lies in the mobility of the agents on the cores and the cores collectively executing a job .decision - making is required in this approach for choosing between the agent intelligence and core intelligence approaches when a failure is expected . a job , , which needs to be executed on a large - scale system is decomposed into a set of sub - jobs .each sub - job is mapped onto agents that carry the sub - jobs as payloads onto the virtual cores , which are an abstraction over respectively as shown in figure 1 . the following sequence of steps describe the hybrid approach for fault tolerance incorporating both agent and core intelligence : ' '' '' _ hybrid intelligence based fault tolerance _ + ' '' '' 1 .decompose a job , , to be executed on the landscape into sub - jobs , 2 . each sub - job provided as a payload to agents , 3 .agents carry jobs onto virtual cores , 4 . 
for each agent , located on virtual core , where to 1 .periodically probe the computing core 2 .if predicted to fail , then 1 .if ` agent intelligence ' is a suitable mechanism , then 1 .agent , , moves onto an adjacent computing core , 2 .notify dependent agents 3 .agent establishes dependencies 1 .else if ` core intelligence ' is a suitable mechanism , then 1 .core migrates agent , onto an adjacent computing core , 5 .collate execution results from sub - jobs ' '' ''when a core failure is anticipated both an agent and a core can make decisions which can lead to a conflict .for example , an agent can attempt to move onto an adjacent core while the core on which it is executing would like to migrate it to an alternative adjacent core .therefore , an agent and the core on which it is located need to negotiate before either of them initiate a response to move ( see figure 6 ) .the rules for the negotiation between the agent and the core in this case are proposed from the experimental results presented in this paper ( presented in the decision making rules sub - section ) . and are situated on virtual cores and which are mapped onto computing cores and respectively .a failure is predicted on core .the agent and negotiate to decide who moves the sub - job onto core .,scaledwidth=40.0% ]in this section , the experimental platform is considered followed by the experimental studies and the results obtained from experiments .four computer clusters were used for the experiments reported in this paper .the first was a cluster available at the centre for advanced computing and emerging technologies ( acet ) , university of reading , uk .thirty three compute nodes connected through gigabit ethernet were available , each with pentium iv processors and 512 mb-2 gb ram .the remaining three clusters are compute resources , namely brasdor , glooscap and placentia , all provided by the atlantic computational excellence network ( acenet ) , canada .brasdor comprises 306 compute nodes connected through gigabit ethernet , with 932 cores and 1 - 2 gb ram .glosscap comprises 97 nodes connected through infiniband , with 852 cores and 1 - 8 gb ram .placentia comprises 338 compute nodes connected through infiniband , with 3740 cores and 2 - 16 gb ram .the cluster implementations in this paper are based on the message passing interface ( mpi ) .the first approach , incorporating agent intelligence , is implemented using open mpi , an open source implementation of mpi 2.0 . the dynamic process model which supports dynamic process creation and management facilitates control over an executing process .this feature is useful for implementing the first approach .the mpi functions useful in the implementation are ( i ) mpi_comm_spawn which creates a new mpi process and establishes communication with an existing mpi application and ( ii ) mpi_comm_accept and mpi_comm_connect which establishes communication between two independent processes . 
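The MPI functions listed above can be combined along the following lines to hand a sub-job over to a freshly spawned process on an adjacent node. This is a schematic sketch rather than the paper's implementation: the executable name "subjob_worker", the host name "node07" and the message layout are assumptions, while the MPI calls themselves are standard MPI-2 dynamic process management.

/* Sketch: an agent anticipating failure spawns a replacement process on an
 * adjacent node and ships its state to it. Compile with mpicc/mpicxx and run
 * under mpirun; the spawned executable is hypothetical. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double state[1024] = {0};              /* partial results carried by the agent */

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "node07");  /* request placement on the adjacent node */

    MPI_Comm child;
    MPI_Comm_spawn("subjob_worker", MPI_ARGV_NULL, 1, info, 0,
                   MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* Hand the sub-job state over to rank 0 of the newly spawned process. */
    MPI_Send(state, 1024, MPI_DOUBLE, 0, 0, child);

    /* The worker would then re-establish its dependencies, for example via
     * MPI_Open_port/MPI_Comm_accept on its side and MPI_Comm_connect from the
     * dependent processes, before this process terminates. */
    MPI_Info_free(&info);
    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}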
the second approach , incorporating core intelligence ,is implemented using adaptive mpi ( ampi ) , developed over charm++ , a c++ based parallel programming language .the aim of ampi is to achieve dynamic load balancing by migrating objects over virtual cores and thereby facilitating control over cores .core intelligence harnesses this potential of ampi to migrate a job from a core onto another core .a strategy to migrate a job using the concepts of processor virtualisation and dynamic job migration in ampi and charm++ is reported in .parallel reduction algorithms which implement the bottom - up approach ( i.e. , data flows from the leaves to the root ) are employed for the experiments .these algorithms are of interest for three reasons .firstly , the algorithm is used in a large number of scientific applications including computational biological applications in which optimizations are performed ( for example , bootstrapping ) . incorporating self - managing fault tolerant approachescan make these algorithms more robust and reliable .secondly , the algorithm lends itself to be easily decomposed into a set of sub - jobs .each sub - job can then be mapped onto a computing core either by providing the sub - job as a payload to an agent in the first approach or by providing the job onto a virtual core incorporating intelligent rules .thirdly , the execution of a parallel reduction algorithm stalls and produces incorrect solutions if a core fails .therefore , parallel reduction algorithms can benefit from local fault - tolerant techniques .figure 7 is an exemplar of a parallel reduction algorithm . in the experiments reported in this paper , a generic parallel summation algorithm with three sets of inputis employed .firstly , , , secondly , , , and thirdly , . the first level nodes which receive the three sets of input comprise three set of nodes .firstly , , , secondly , , , and thirdly , , . the next level of nodes , , and receive inputs from the first level nodes . the resultant from the second level nodes is fed in to the third level node .the nodes reduce the input through the output using the parallel summation operator ( ) . andthe three levels of nodes are denoted by .the inputs are passed to the nodes which are then reduced and passed to nodes and finally onto for the output.,scaledwidth=45.0% ] the parallel summation algorithm can benefit from the inclusion of fault tolerant strategies .the job , , in this case is summation , and the sub - jobs , are also summations . in the first fault tolerant approach , incorporating mobile agent intelligence , the data to be summed along with the summation operator is provided to the agent .the agents locate on the computing cores and continuously probe the core for anticipating failures . if an agent is notified of a failure , then it moves off onto another computing core in the vicinity , thereby not stalling the execution towards achieving the summation job . in the second faulttolerant approach , incorporating core intelligence , the sub - job comprising the data to be summed along with the summation operator is located on the virtual core .when the core anticipates a failure , it migrates the sub - job onto another core .a failure scenario is considered for experimentally evaluating the fault tolerance strategies . 
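For reference, the reduction structure used as the test job can be made concrete with the generic fragment below, which performs a summation up a binary tree of MPI ranks so that each interior node has two input dependencies and one output dependency. It is a minimal sketch of the same pattern, not the actual test program used in the experiments.

/* Generic binary-tree parallel summation over MPI ranks. Each receiving rank
 * accumulates the partial sum of its child and passes the result up the tree. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)(rank + 1);   /* stand-in for a locally computed partial sum */

    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == 0) {
            if (rank + step < size) {    /* receive from the child and accumulate */
                double incoming;
                MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                local += incoming;
            }
        } else {                         /* send the partial result to the parent */
            MPI_Send(&local, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;
        }
    }

    if (rank == 0)
        printf("tree-reduced sum = %f\n", local);

    MPI_Finalize();
    return 0;
}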
in the scenario ,when a core failure is anticipated the sub - job executing on it is relocated onto an adjacent core .of course this adjacent core may also fail .therefore , information is also gathered from adjacent cores as to whether they are likely to fail or not .this information is gathered by the agent in the agent - based approach and the virtual core in the core - based approach and used to determine which adjacent core the sub - job needs to be moved to .this failure scenario is adapted to the two strategies giving respectively the agent intelligence failure scenario and the core intelligence failure scenario ( described in the methods section ) .figures 8 through 13 are a collection of graphs plotted using the parallel reduction algorithm as a case study for both the first ( agent intelligence - figure 8 , figure 10 and figure 12 ) and second ( core intelligence - figure 9 , figure 11 and figure 13 ) fault tolerant approaches .each graph comprises four plots , the first representing the acet cluster and the other three representing the three acenet clusters .the graphs are also distinguished based on the following three factors that can affect the performance of the two approaches : * the number of dependencies of the sub - job being executed denoted as .if the total number of input dependencies is and the total number of output dependencies is , then .for example , in a parallel summation algorithm incorporating binary trees , each node has two input dependencies and one output dependency , and therefore . in the experiments, the number of dependencies is varied between 3 and 63 , by changing the number of input dependencies of an agent or a core .the results are presented in figure 8 and figure 9 . *the size of the data communicated across the cores denoted as . in the experiments ,the input data is a matrix for parallel summation and its size is varied between to kb .the results are presented in figure 10 and figure 11 . * the process size of the distributed components of the job denoted as . in the experiments, the process size is varied between to kb which is proportional to the input data .the results are presented in figure 12 and figure 13 .figure 8 is a graph of the time taken in seconds for reinstating execution versus the number of dependencies in the agent intelligence failure scenario . the mean time taken to reinstate execution for 30 trials , ,is computed for varying numbers of dependencies , ranging from 3 to 63 .the size of the data on the agent is kilo bytes .the approach is slowest on the acet cluster and fastest on the placentia cluster . in all cases the communication overheads result in a steep rise in the time taken for execution until .the time taken on the acet cluster rises once again after .figure 9 is a graph of the time taken in seconds for reinstating execution versus the number of dependencies in the core intelligence failure scenario . the mean time taken to reinstate execution for 30 trials , ,is computed for varying number of dependencies , ranging from 3 to 63 .the size of the data on the core is kilo bytes .the approach requires almost the same time on the four clusters for reinstating execution until , after which there is divergence in the plots .the approach lends itself well on placentia and glooscap . figure 10is a graph showing the time taken in seconds for reinstating execution versus the size of data in kilobytes ( kb ) , , where , carried by an agent in the agent intelligence failure scenario . 
the mean time taken to reinstate execution for 30 trials , ,is computed for varying sizes of data ranging from to kb .the number of dependencies is 10 for the graph plotted .placentia and glooscap outperforms acet and brasdor in the agent approach for varying size of data .figure 11 is a graph showing the time taken in seconds for reinstating execution versus the size of data in kilobytes ( kb ) , , where , on a core in the core intelligence failure scenario . the mean time taken to reinstate execution for 30 trials , ,is computed for varying sizes of data ranging from to kb .the number of dependencies is 10 for the graph plotted . in this graph ,nearly similar time is taken by the approach on the four clusters with the acet cluster requiring more time than the other clusters for .figure 12 is a graph showing the time taken in seconds for reinstating execution versus process size in kilobytes ( kb ) , , where , in the agent intelligence failure scenario . the mean time taken to reinstate execution for 30 trials , ,is computed for varying process sizes ranging from to kb .the number of dependencies is 10 for the graph plotted .the second scenario performs similar to the first scenario .the approach takes almost similar times to reinstate execution after a failure on the four clusters , but there is a diverging behaviour after .figure 13 is a graph showing the time taken in seconds for reinstating execution versus process size in kilobytes ( kb ) , , where , in the core intelligence failure scenario . the mean time taken to reinstate execution for 30 trials , ,is computed for varying process sizes ranging from to kb .the number of dependencies is 10 for the graph plotted .the approach has similar performance on the four clusters , though placentia performs better than the other three clusters for a process size of more than kb .parallel simulations in molecular dynamics model atoms or molecules in gaseous , liquid or solid states as point masses which are in motion .such simulations are useful for studying the physical and chemical properties of the atoms or molecules .typically the simulations are compute intensive and can be performed in at least three different ways .firstly , by assigning a group of atoms to each processor , referred to as atom decomposition .the processor computes the forces related to the group of atoms to update their position and velocities .the communication between atoms is high and effects the performance on large number of processors .secondly , by assigning a block of forces from the force matrix to be computed to each processor , referred to as force decomposition .this technique scales better than atom decomposition but is not a best solution for large simulations .thirdly , by assigning a three dimensional space of the simulation to each processor , referred to as spatial decomposition .the processor needs to know the positions of atoms in the adjacent space to compute the forces of atoms in the space assigned to it .the interactions between the atoms are therefore local to the adjacent spaces . in the first and second decomposition techniques interactionsare global and thereby dependencies are higher .agent and core based approaches to fault tolerance can be incorporated within parallel simulations in the area of molecular dynamics . 
however , which of the two approaches , agent or core intelligence , is most appropriate ?the decomposition techniques considered above establish dependencies between blocks of atoms and between atoms .therefore the degree of dependency affects the relocation of a sub - job in the event of a core failure and reinstating it .the dependencies of an atom in the simulation can be based on the input received from neighbouring atoms and the output propagated to neighbouring atoms .based on the number of atoms allocated to a core and the time step of the simulation the intensity of numerical computations and the data managed by a core vary .large simulations that extend over long periods of time generate and need to manage large amounts of data ; consequently the process size on a core will also be large . therefore , ( i ) the dependency of the job , ( ii ) the data size and ( iii ) the process size are factors that need to be taken into consideration for deciding whether an agent - based approach or a core - based approach needs to come into play .along with the observations from parallel simulations in molecular dynamics , the experimental results provide an insight into the rules for decision - making for large - scale applications . from the experimental results graphed in figure 8 and figure 9 , where dependencies are varied , core intelligence is superior to agent intelligence if the total dependencies is less than or equal to 10therefore , 1 . if the algorithm needs to incorporate fault tolerance based on the number of dependencies , then if use core intelligence , else use agent or core intelligence . from the experimental results graphed in figure 10 and figure 11 , where the size of data is varied , agent intelligence is more beneficial than core intelligence if the size of data is less than or equal to kb .therefore , 1 . if the algorithm needs to incorporate fault tolerance based on the size of data , then if kb , then use agent intelligence , else use agent or core intelligence . from the experimental results graphed in figure 12 and figure 13 , where the size of the process is varied , agent intelligence is more beneficial than core intelligence if the size of the process is less than or equal to kb .therefore , 1 . if the algorithm needs to incorporate fault tolerance based on process size , then if kb , then use agent intelligence , else use agent or core intelligence .the number of dependencies , size of data , and process size are the three factors taken into account in the experimental results .the results indicate that the approach incorporating core intelligence takes lesser time than the approach incorporating agent intelligence .there are two reasons for this .firstly , in the agent approach , the agent needs to establish the dependency with each agent individually , where as in the core approach as a job is migrated from a core onto another its dependencies are automatically established .secondly , agent intelligence is a software abstraction of the sub - job , thereby adding a virtualised layer in the communication stack .this increases the time for communication .the virtual core is also an abstraction of the computing core but is closer to the computing core in the communication stack . 
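If all three factors are known when a failure is predicted, the rules can be collected into a single selection routine, as in the sketch below. Only the dependency cut-off of 10 comes from the text; the data-size and process-size cut-offs are hypothetical placeholders for the empirical values read off figures 10-13, and chaining the rules in this order is one possible interpretation rather than the paper's prescription.

#include <cstdio>

enum class Mechanism { AgentIntelligence, CoreIntelligence, Either };

constexpr int    kDependencyThreshold = 10;      // rule 1 threshold, stated in the text
constexpr double kDataThresholdKB     = 1024.0;  // rule 2 placeholder value
constexpr double kProcessThresholdKB  = 1024.0;  // rule 3 placeholder value

Mechanism choose_mechanism(int dependencies, double data_kb, double process_kb) {
    if (dependencies <= kDependencyThreshold) return Mechanism::CoreIntelligence;   // rule 1
    if (data_kb <= kDataThresholdKB)          return Mechanism::AgentIntelligence;  // rule 2
    if (process_kb <= kProcessThresholdKB)    return Mechanism::AgentIntelligence;  // rule 3
    return Mechanism::Either;   // outside all three regimes either mechanism may be used
}

int main() {
    Mechanism m = choose_mechanism(12, 2048.0, 512.0);
    std::printf("selected mechanism: %d\n", static_cast<int>(m));
    return 0;
}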
the above rules can be incorporated to exploit both agent - based and core - based intelligence in a third , hybrid approach .the key concept of the hybrid approach combines the mobility of the agents on the cores and the cores collectively executing a job .the approach can select whether the agent - based approach or the core - based approach needs to come to play based on the rules for decision - making .the key observation from the experimental results is that the cost of incorporating intelligence at the job and core levels for automating fault tolerance is less than a second , which is smaller than the time taken by manual methods which would be in the order of minutes .for example , in the first approach , the time for reinstating execution with over 50 dependencies is less than 0.55 seconds and in the second approach , is less than 0.5 seconds .similar results are obtained when the size of data and the process are large . the proposed multi - agent approaches and the decision making rules considered in the above sectionsare validated using a computational biology job .a job that fits the criteria of reduction algorithms is considered . in reduction algorithms ,a job is decomposed to sub - jobs and executed on multiple nodes and the results are further passed onto other node for completing the job . one popular computational biology job that fits this criteria is searching for a genome pattern .this has been widely studied and fast and efficient algorithms have been developed for searching genome patterns ( for example , , and ) . in the genome searching experiment performed in this research multiple nodes of a cluster execute the search operation and the output produced by the search nodes are then combined by an additional node .the focus of this experimental study is not parallel efficiency or scalability of the job but to validate the multi - agent approaches and the decision making rules in the context of computational biology .hence , a number of assumptions are made for the genome searching job .first , redundant copies of the genome data are made on the same node to obtain a sizeable input .secondly , the search operation is run multiple times to span long periods of time .thirdly , the jobs are executed such that they can be stopped intentionally by the user at any time and gather the results of the preceding computations until the execution was stopped .the placentia cluster is chosen for this validation study since it was the best performing cluster in the empirical study presented in the previous sections .the job is implemented using r programming which uses mpi for exploiting computation on multiple nodes of the placentia cluster .bioconductor packages are required for supporting the job .the job makes use of bsgenome.celegans.ucsc.ce2 , bsgenome.celegans.ucsc.ce6 and bsgenome.celegans.ucsc.ce10 as input data which are the ce2 , ce6 and ce10 genome for chromosome i of caenorhabditis elegans . 
a list of 5000 genome patterns each of which is a short nucleotide sequence of 15 to 25 bases is provided to be searched against the input data .the forward and reverse strands of seven caenorhabditis elegans chromosomes named as chri , chrii , chriii , chriv , chrv , chrx , chrm are the targets of the search operation .when there is a target hit the search nodes provide to the node that gathers the results the name of the chromosome where the hit occurs , two integers giving the starting and ending positions of the hit , an indication of the hit either in the forward or reverse strand , and unique identification for every pattern in the dictionary .the results are tabulated in an output file in the combining node .a sample of the output is shown in figure 14 .redundant copies of the input data are made to obtain 512 mb ( which is kb ) and the job is executed for one hour . in a typical experiment the number of dependencies , was set to 4 ; three nodes of the cluster performed the search operation while the fourth node combined the results passed on to it from the three search nodes . in the agent intelligence based approach the time for predicting the fault is 38 seconds , the time for reinstating execution is 0.47 seconds , the overhead time is over 5 minutes and the total time when one failure occurs per hour is 1 hour , 6 minutes and 17 seconds . in the core intelligence basedapproach the time for predicting the single node failure is similar to the agent intelligence approach ; the time for reinstating execution is 0.38 seconds , the overhead time is over 4 minutes and the total time when one failure occurs per hour is 1 hour , 5 minutes and 8 seconds . in another experiment for 512 mb size of input data the number of dependencies was set to 12 ; eleven nodes for searching and one node for combining the results provided by the eleven search nodes .in the agent intelligence based approach the time for reinstating execution is 0.54 seconds , the overhead time is over 6 minutes and the total time when one failure occurs per hour is 1 hour , 7 minutes and 34 seconds . in the core intelligence basedapproach the time for reinstating execution is close to 0.54 seconds , the overhead time is over 6 minutes and the total time when one failure occurs per hour is 1 hour , 7 minutes and 48 seconds .the core intelligence approach requires less time than the agent intelligence approach when , but the times are comparable when .so , the above two experiments validate rule 1 for decision making considered in the previous section .experiments were performed for different input data sizes ; in one case kb and in the other kb . the agent intelligence approach required less time in the former case than the core intelligence approach .the time was comparable for the latter case .hence , the genome searching job in the context of the experiments validated rule 2 for decision making . similarly , when process size was varied rule 3 was found to be validated .the genome searching job is used as an example to validate the use of the multi - agent approaches for computational biology jobs .the decision making rules empirically obtained were satisfied in the case of this job .the results obtained from the experiments for the genome searching job along with comparisons against traditional fault tolerance approaches , namely centralised and decentralised checkpointing are considered in the next section .all fault tolerance approaches initiate a response to address a failure . 
based onwhen a response is initiated with respect to the occurrence of the failure , approaches can be classified as proactive and reactive .proactive approaches predict failures of computing resources before they occur and then relocate a job executing on resources anticipated to fail onto resource that are not predicted to fail ( for example ) .reactive approaches on the other hand minimise the impact of failures after they have occurred ( for example checkpointing , rollback recovery and message logging ) . a hybrid of proactive and reactive , referred to as adaptive approaches , is implemented so that failures that can not be predicted by proactive approaches are handled by the reactive approaches . the control of a fault tolerant approach can be either centralised or distributed . in approacheswhere the control is centralised , one or more servers are used for backup and a single process responsible for monitoring jobs that are executed on a network of nodes . the traditional message logging and checkpointing approach involves the periodic recording of intermediate states of execution of a job to which execution can be returned if faults occur .such approaches are susceptible to single point failure , lack scalability over a large network of nodes , have large overheads , and require large disk storage .these drawbacks can be minimised or avoided when the control of the approaches is distributed ( for example , distributed diagnosis , distributed checkpointing and diskless checkpointing ) . in this paper twodistributed proactive approaches towards achieving fault tolerance are proposed and implemented . in both approaches a job to be computed is decomposed into sub - jobs which are then mapped onto the computing cores .the two approaches operate at the middle levels ( between the sub - jobs and the computing cores ) incorporating agent intelligence . in the first approach ,the sub - jobs are mapped onto agents which are released onto the cores .if an agent is notified of a potential core failure during execution of the sub - job mapped onto it , then the agent moves onto another core thereby automating fault tolerance .in the second approach the sub - jobs are scheduled on virtual cores , which are an abstraction of the computing cores .if a virtual core anticipates a core failure then it moves the sub - job on it to another virtual core , in effect onto another computing core .the two approaches achieve automation in fault tolerance using intelligence in agents and using intelligence in cores respectively .a third approach is proposed which brings together the concepts of both agent intelligence and core intelligence from the first two approaches . the conventional approaches to fault tolerance such as checkpointing have large communication overheads based on the periodicity of checkpointing .high frequencies of checkpointing can lead to heavy network traffic since the available communication bandwidth will be saturated with data transferred from all computing nodes to the a stable storage system that maintains the checkpoint .this traffic is on top of the actual data flow of the job being executed on the network of cores .while global approaches are useful for jobs which are less memory and data intensive and can be executed over short periods of time , they may constrain the efficiency for jobs using big data in limited bandwidth platforms . hence , local approaches can prove useful . 
in the case of the agent based approachesthere is high periodicity for probing the cores in the background but very little data is transferred while probing unlike in checkpointing .hence , communication overhead times will be significantly lesser .lack of scalability is another issue that affects efficient fault tolerance .many checkpointing strategies are centralised ( with few exceptions , such as ) thereby limiting the scale of adopting the strategy .this can be mitigated by using multiple centralised checkpointing servers but the distance between the nodes and the server discounts the scalability issue . in the agentbased approaches , all communications are short distance since the cores only need to communicate with the adjacent cores .local communication therefore increases the scale to which the agent based approaches can be applied .checkpointing is susceptible to single point failures due to the failure of the checkpoint servers .the job executed will have to be restarted .the agent - based approaches are also susceptible to single point failures .while they incorporate intelligence to anticipate hardware failure the processor core may fail before the sub - job it supports can be relocated to an adjacent processor core , before the transfer is complete , or indeed the core onto which it is being transferred may also fail .however , the incorporation of intelligence on the processor core , specifically the ability to anticipate hardware failure locally , means that the numbers of these hardware failures that lead to job failure can be reduced when compared to traditional checkpointing .but since there is the possibility of agent failure the retention of some level of human intervention is still required .therefore , we propose combining checkpointing with the agent - based approaches , the latter acting as a first line of anticipatory response to hardware failure backed up by traditional checkpointing as a second line of reactive response .figure 15 shows the execution of a job between two checkpoints , and , where is the predicted failure and is the actual failure of the node on which a sub - job is executing .figure 15(a ) shows when there are no predicted failures or actual failures that occur on the node .figure 15(b ) shows when a failure occurs but could not be predicted . in this case, the system fails if the multi - agent approaches are solely employed .one way to mitigate this problem is by employing the multi - agent approaches in conjunction with checkpointing as shown in the next section .figure 15(c ) shows when the approaches predict a failure which does not happen .if a large number of such predictions occur then the sub - job needs to be shifted often from one node to the other which adds to the overhead time for executing the job .this is not an ideal case and makes the job unstable .figure 15(d ) shows the ideal case in which a fault is predicted before it occurs . 
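The combination of a proactive first line of response with checkpointing as a reactive fallback can be pictured as the following toy control loop, which illustrates how the two mechanisms interleave between checkpoints. The probabilities and the printed actions are simulated placeholders; no particular checkpoint library or probing interface is implied.

// Toy loop: proactive migration when a failure is predicted, rollback to the
// last checkpoint when an unpredicted failure occurs. Purely illustrative.
#include <cstdio>
#include <random>

static std::mt19937 rng(7);
static bool chance(double p) { return std::bernoulli_distribution(p)(rng); }

int main() {
    const int total_steps = 50, checkpoint_interval = 10;
    int last_checkpoint = 0;
    for (int step = 0; step < total_steps; ++step) {
        if (step % checkpoint_interval == 0) {  // reactive safety net
            last_checkpoint = step;
            std::printf("checkpoint written at step %d\n", step);
        }
        if (chance(0.05)) {                     // probe predicts a failure: first line of response
            std::printf("step %d: failure predicted, migrating sub-job proactively\n", step);
        } else if (chance(0.01)) {              // a fault the predictor missed
            std::printf("step %d: unpredicted failure, rolling back to step %d\n",
                        step, last_checkpoint);
            step = last_checkpoint;             // resume from the last checkpoint
        }
        // ... one step of the sub-job would execute here ...
    }
    return 0;
}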
[Figure 15 caption: states of a job between two checkpoints. (a) Ideal state of the job when no faults occur. (b) Failure state of the job when a fault occurs but is not predicted. (c) Unstable state of the job when a fault is predicted but does not occur. (d) Ideal prediction state of the job when a fault is predicted and occurs thereafter.]

Failure prediction is based on a machine learning approach that is incorporated within the multi-agent framework. This prediction is based on a log that is maintained on the health of the node and its adjacent nodes. Each agent sends out 'are you alive?' signals to adjacent nodes to gather the state of each adjacent node. The machine learning approach constantly evaluates the state of the system against the log it maintains, which is different across the nodes. The log can contain the state of the node from past failures, the workload of the node when it failed previously and even data related to patterns of periodic failures. However, this prediction method cannot predict a range of faults due to deadlocks, hardware and power failures and instantaneously occurring faults. Hence, the multi-agent approaches are most useful when used along with checkpointing. It was observed that nearly 29% of all faults occurring in the cluster could be predicted. Although this number is seemingly small, it is helpful not to have to roll back to a previous checkpoint when a large job under time constraints is executed. The accuracy of the predictions was found to be 64%; the system was found to be stable in 64 out of the 100 times a prediction was made. To increase the impact of the multi-agent approaches, more faults will need to be captured. For this, extensive logging and faster methods for prediction will need to be considered. These approaches will have to be used in conjunction with checkpointing for maximum effectiveness. The instability due to the approaches shifting jobs between nodes when there is a false prediction will need to be reduced to improve the overall efficiency of the approaches. For this, the state of the node can be compared with other nodes so that a more informed choice is made by the approaches. Table 1 shows a comparison between a number of fault tolerant strategies, namely centralised and decentralised checkpointing and the multi-agent approaches. An experiment was run for a genome searching job that was executed multiple times on the Placentia cluster. Data in the table was obtained to study the execution of the genome searching job between two checkpoints which are one hour apart. The execution is interrupted by failure as shown in figure 16. Two types of single node failure are simulated in the execution. The first is a periodic node failure which occurs 15 minutes after the first checkpoint and 45 minutes before the second (refer figure 16(a)), and the second is a random node failure which occurs at a random point between the two checkpoints (refer figure 16(b)). The average time at which a random failure occurs is found to be 31 minutes and 14 seconds over 5000 trials. The size of data (in kB) and the number of dependencies are held fixed for this comparison. In table 1, the average time taken for reinstating execution, for the overheads and for executing the job between the checkpoints is considered.
the time taken for reinstating execution is for bringing execution back to normal after a failure has occurred .the reinstating time is obtained for one periodic single node failure and one random single node failure .the overhead time is for creating the checkpoints and transferring data for the checkpoint to the server .the overhead time is obtained for one periodic single node failure and one random single node failure .the execution time without failures , when one periodic failure occurs per hour and when five random failures occur per hour is obtained .centralised checkpointing using single and multiple servers is considered when the frequency of checkpointing is once every hour . in the case of both single and multiple server checkpointing the time taken for reinstating executionregardless of whether it was a periodic or random failure is 14 minutes and 8 seconds . on a single serverthe overhead is 8 minutes and 5 seconds where as the overhead to create the checkpoint is 9 minutes and 14 seconds which is higher than overheads on a single server and is expected .the average time taken for executing the job when one failure occurs includes the elapsed execution time ( 15 minutes for periodic failure and 31 minutes and 14 seconds for random failure ) until the failure occurred and the combination of the time for reinstating execution after the failures and the overhead time .for one periodic failure that occurs in one hour the penalty of execution when single server checkpointing is 62% more than executing without a failure ; in the case of a random failure that occurs in one hour the penalty is 89% more than executing without a failure .if five random failure occur then the penalty is 445% , requiring more than five times the time for executing the job without failures .centralised checkpointing with multiple servers requires more time than with a single server .this is due to the increase in the overhead time for creating checkpoints on multiple servers .hence , checkpointing with multiple servers requires 64% and 91% more time than the time for executing the job without any failures for one periodic and one random failure per hour respectively . on the other hand executing jobs when decentralised checkpointing on multiple servers is employed requires similar time to that taken by centralised checkpointing on a single server .the time for reinstating execution is higher than centralised checkpointing methods due to the time required for determining the server closest to the node that failed .however , the overhead times are lower than other checkpoint approaches since the server closest to the node that failed is chosen for creating the checkpoint which reduces data transfer times .the multi - agent approaches are proactive and therefore the average time taken for predicting single node failures are taken into account which is nearly 38 seconds .the time taken for reinstating execution after one periodic single node failure for the agent intelligence approach is 0.47 seconds and for the core intelligence approach is 0.38 seconds . since the core intelligence approach is selected . 
in this case, the core intelligence approach is faster than the agent intelligence approach in the total time it takes for executing the job when there is one periodic or random fault and when there are five faults that occur in the job .the multi - agent approaches only require one - fifth the time taken by the checkpointing methods for completing execution .this is because the time for reinstating and the overhead times are significantly lower than the checkpointing approaches .table 2 shows a comparison between centralised and decentralised checkpointing and the multi - agent approaches for a genome searching job that is executed on the placentia cluster for five hours .the checkpoint periodicity is once every one , two and four hours as shown in figure 17 . similar to table 1 periodic and random failures are simulated .figure 17(a ) shows the start and completion of the job without failures or checkpoints .when the checkpoint periodicity is one hour there are four checkpoints , , , and ( refer figure 17(b ) ) ; a periodic node failure is simulated after 14 minutes from a checkpoint and the average time at which a random node failure occurs is found to be 31 minutes and 14 seconds from a checkpoint for 5000 trials .when checkpoint periodicity is two hours there are two checkpoints , and ( refer figure 17(c ) ) ; a periodic node failure is simulated after 28 minutes from a checkpoint and the average time a random node failure occurs is found to be after 1 hour , 3 minutes and 22 seconds from a checkpoint for 5000 trials . when checkpoint periodicity is four hours there is only one checkpoint ( refer figure 17(d ) ) ; a periodic node failure is simulated after 56 minutes from a checkpoint and the average time at which a random failure occurs is found to be after 2 hours , 8 minutes and 47 seconds from each checkpoint for 5000 trials .similar to table 1 , in table 2 , the average time taken for reinstating execution , for the overheads and for executing the job from the start to finish with and without checkpoints is considered . the time to bring execution back to normal after a failure has occurredis referred to as reinstating time . the time to create checkpoints and transfer checkpoint data to the serveris referred to as the overhead time .the execution of the job when one periodic and one random failure occurs per hour and when five random failures occur per hour is considered . without checkpointing the genome searching jobis run such that a human administrator monitors the job from its start until completion . in this case , if a node fails then the only option is to restart the execution of the job . each time the job fails and given that the administrator detected it using cluster monitoring tools as soon as the node failed approximately , then at least ten minutes are required for reinstating the execution . if a periodic failure occurred once every hour from the 14th minute from execution then there are five periodic faults .once a failure occurs the execution will always have to come back to its previous state by restarting the job .hence , the five hour job , with just one periodic failure occurring every hour will take over 21 hours . 
similarly ,if a random failure occurred once every hour ( average time of occurrence is 31 minutes and 14 seconds after execution starts ) , then there are five failure points , and over 23 hours are required for completing the job .when five random failures occur each hour of the execution then more than 80 hours are required ; this is nearly 16 times the time for executing the job without a failure .centralised checkpointing on a single server and on multiple servers and decentralised checkpointing on multiple servers for one , two and four hour periodicity in the network are then considered in table 2 . for checkpointing methods when one hour frequency is chosen more than five times the time taken for executing the job without failures is required . when the frequency of checkpointing is everytwo hours then just under four times the time taken for executing the job without failures is required . in the casewhen the checkpoint is created every four hours just over 3 times the time taken for executing the job without failures is required .the multi - agent approaches on the other hand take only one - fourth the time taken by traditional approaches for the job with five single node faults that occur each hour .this is significant time saving for running jobs that require many hours for completing execution .the agent and core intelligence approaches are similar in at least four ways .firstly , the objective of both the approaches is to automate fault tolerance .secondly , the job to be executed is broken down into sub - jobs which are executed .thirdly , fault tolerance is achieved in both approaches by predicting faults likely to occur in the computing core .fourthly , technology enabling mobility is required by both the approaches to carry the sub - job or to push the sub - job from one core onto another .these important similarities enable both the agent and core approaches to be brought together to offer the advantages as a hybrid approach . while there are similarities between the agent and core intelligence approaches there are differences that reflect in their implementation .these differences are based on : ( i ) where the job is situated - in the agent intelligence approach , the sub - job becomes the payload of an agent situated on a computing core . in the core intelligence approach ,the sub - job is situated on a virtual core , which is an abstraction of the computing core .( ii ) who predicts the failures - the agent constantly probes the compute core it is situated on and predicts failure in the agent approach , whereas in the core approach the virtual core anticipates the failure .( iii ) who reacts to the prediction - the agent moves onto another core and re - establishes its dependencies in the agent approach , whereas the virtual core is responsible for moving a sub - job onto another core in the core approach .( iv ) how dependencies are updated - an agent requires to carry information of its dependencies when it moves off onto another core and establishes its dependencies manually in the agent approach , whereas the dependencies of the sub - job on the core do not require to be manually updated in the core approach . 
( v ) what view is obtained - in the agent approach , agents have a global view as they can traverse across the network of virtual cores , which is in contrast to the local view of the virtual cores in the core approach .the agent based approaches described in this paper offer a candidate solution for automated fault tolerance or in combination with checkpointing as proposed above offer a means of reducing current levels of human intervention .the foundational concepts of the agent and core based approaches were validated on four computer clusters using parallel reduction algorithms as a test case in this paper .failure scenarios were considered in the experimental studies for the two approaches .the effect of the number of dependencies of a sub - job being executed , the volume of data communicated across cores , and the process size are three factors considered in the experimental studies for determining the performance of the approaches .the approaches were studied in the context of parallel genome searching , a popular computational biology job , that fits the criteria of a parallel reduction algorithm .the experiments were performed for both periodic and random failures .the approaches were compared against centralised and decentralised checkpointing approaches . in a typical experiment in which the fault tolerant approaches are studied in between two checkpoints one hour apart when one random failure occurs , centralised and decentralised checkpointing on an average add 90% to the actual time for executing the job without any failures . on the other hand , in the same experimentthe multi - agent approaches add only 10% to the overall execution time .the multi - agent approaches can not predict all failures that occur in the computing nodes .hence , the most efficient way of incorporating these approaches is to use them on top of checkpointing .the experiments demonstrate the feasibility of such approaches for computational biology jobs .the key result is that a job continues execution after a core has failed and the time required for reinstating execution is lesser than checkpointing methods .future work will explore methods to improve the accuracy of prediction as well as increase the number of faults that can be predicted using the multi - agent approaches . the challenge to achieve this will be to mine log files for predicting a wide range of faults and predict them as quickly as possible before the fault occurs .although the approaches can reduce human administrator intervention they can be used independently only if a wider range of faults can be predicted with greater accuracy .until then the multi - agent approaches can be used in conjunction with checkpointing for improving fault tolerance .the authors would like to thank the administrators of the compute resources at the centre for advanced computing and emerging technologies ( acet ) , university of reading , uk and the atlantic computational excellence network ( acenet ) .cappello f ( 2009 ) fault tolerance in petascale / exascale systems : current knowledge , challenges and research opportunities . 
international journal of high performance computing supplications , 23(3 ) : 212 - 226 .engelmann c , vallee gr , naughton t and scott sl ( 2009 ) proactive fault tolerance using preemptive migration .proceedings of the 17th euromicro international conference on parallel , distributed and network - based processing .252 - 257 .vallee g , engelmann c , tikotekar a , naughton t , charoenpornwattana k , leangsuksun c and scott sl ( 2008 ) a framework for proactive fault tolerance .proceedings of the 3rd international conference on availability , reliability and security .659 - 664 .fagg ge , gabriel e , chen z , angskun t , bosilca g , grbovic jp , dongarra j ( 2005 ) process fault - tolerance : semantics , design and applications for high performance computing .international journal for high performance applications and supercomputing .19(4 ) : 465 - 477 .yeh ch ( 2003 ) the robust middleware approach for transparent and systematic fault tolerance in parallel and distributed systems .proceedings of the international conference on parallel processing .61 - 68 .mourino jc , martin mj , gonzalez p and doallo r ( 2007 ) fault - tolerant solutions for a mpi compute intensive application .proceedings of the 15th euromicro international conference on parallel , distributed and network - based processing .246 - 253 .tsai j , kuo sy and wang ym ( 1998 ) theoretical analysis for communication - induced checkpointing protocols with rollback - dependency trackability .ieee transactions on parallel and distributed systems .9(10 ) : 963 - 971 .chtepen m , claeys fha , dhoedt b , de turuck f , demeester p and vanrolleghem pa ( 2009 ) adaptive task checkpointing and replication : toward efficient fault - tolerant grids .ieee transactions on parallel and distributed systems .20(2 ) : 180 - 190 .sankaran s , squyres jm , barrett b , sahay v , lumsdaine a , duell j , hargrove p and roman e ( 2005 ) the lam / mpi checkpoint / restart framework : system - initiated checkpointing. international journal of high performance computing applications .19(4 ) : 479 - 493 .hursey j , squyres jm , mattox ti , and lumsdaine a ( 2007 ) the design and implementation of checkpoint / restart process fault tolerance for open mpi .proceedings of the 12th ieee workshop on dependable parallel , distributed and network - centric systems .bowers kj , chow e , xu h , dror ro , eastwood mp , gregersen ba , klepeis jl , kolossvary i , moraes ma , sacerdoti fd , salmon jk , shan y and shaw de ( 2006 ) scalable algorithms for molecular dynamics simulations on commodity clusters .proceedings of the acm / ieee conference on supercomputing .je , tobias dj , brooks iii cl and singh uc ( 1991 ) vector and parallel algorithms for the molecular dynamics simulation of macromolecules on shared - memory computers .journal of computational chemistry .12(10 ) : 1270 - 1277 .oliner aj , sahoo rk , moreira je , gupta m ( 2005 ) perfomance implications of periodic checkpointing on large - scale cluster systems . 
proceedings of the 19th ieee international parallel and distributed processing symposium , 2005 .gabriel e , fagg ge , bosilca g , angskun t , dongarra j , squyres jm , sahay v , kambadur p , barrett b , lumsdaine a , castain rh , daniel dj , graham rl , woodall ts ( 2004 ) open mpi : goals , concept , and design of a next generation mpi implementation .proceedings of the 11th european pvm / mpi users group meeting .97 - 104 .chakravorty s , mendes cl and kale lv ( 2006 ) proactive fault tolerance in mpi applications via task migration .proceedings of ieee international conference on high performance computing , springer .lncs 4297 : 485 - 496 .janakiraman g , santos jr and subhraveti d ( 2005 ) cruz : application - transparent distributed checkpoint - restart on standard operating systems .proceedings of the international conference on dependable systems and networks .260 - 269 .valle g , charoenpornwattana k , engelmann c , tikotekar a , leangsuksun c , naughton t and scott sl ( 2008 ) a framework for proactive fault tolerance .proceedings of the 3rd ieee international conference on availability , reliability and security .659 - 664 .& predicting one single node failure & reinstating execution after one periodic single node failure & reinstating execution after one random single node failure & overheads related to one periodic single node failure & overheads related to one random single node failure & + & & & & & & without failures and checkpoints & with one periodic failure per hour & with one random failure per hour & with five random failures per hour + + 1 hour periodicity & - & 00:14:08 & 00:14:08 & 00:08:05 & 00:08:05 & 01:00:00 & 01:37:13 & 01:53:27 & 05:27:15 + + 1 hour periodicity & - & 00:14:08 & 00:14:08 & 00:09:14 & 00:09:14 & 01:00:00 & 01:38:22 & 01:54:36 & 05:33:00 + + 1 hour periodicity & - & 00:15:27 & 00:15:27 & 00:06:44 & 00:06:44 & 01:00:00 & 01:37:11 & 01:53:25 & 05:27:05 + + agent intelligence & 00:00:38 & 00:00:0.47 & 00:00:0.47 & 00:05:14 & 00:05:14 & & 01:06:17 & 01:06:17 & 01:32:27 + core intelligence & 00:00:38 & 00:00:0.38 & 00:00:0.38 & 00:04:27 & 00:04:27 & & 01:05:08 & 01:05:08 & 01:25:42 + hybrid intelligence & 00:00:38 & 00:00:0.38 & 00:00:0.38 & 00:04:27 & 00:04:27 & & 01:05:08 & 01:05:08 & 01:25:42 + & predicting one single node failure & reinstating execution after one periodic single node failure & reinstating execution after one random single node failure & all overheads related to one periodic single node failure & all overheads related to one random single node failure & + & & & & & & without failures & with one periodic failure per hour & with one random failure per hour & with five random failures per hour + cold restart with no failure tolerance & - & 00:10:00 & 00:10:00 & - & - & 05:00:00 & 21:15:17 & 23:01:00 & 80:31:04 + + 1 hour periodicity & - & 00:14:08 & 00:14:08 & 00:08:05 & 00:08:05 & & 08:01:05 & 09:27:15 & 27:16:15 + 2 hour periodicity & - & 00:15:40 & 00:15:40 & 00:10:17 & 00:10:17 & & 07:41:51 & 07:58:38 & 19:53:10 + 4 hour periodicity & - & 00:16:27 & 00:16:27 & 00:11:53 & 00:11:53 & & 06:24:20 & 07:37:07 & 18:05:35 + + 1 hour periodicity & - & 00:14:08 & 00:14:08 & 00:09:14 & 00:09:14 & & 08:07:14 & 09:33:23 & 27:45:00 + 2 hour periodicity & - & 00:15:40 & 00:15:40 & 00:12:22 & 00:12:22 & & 07:47:52 & 08:07:18 & 20:01:16 + 4 hour periodicity & - & 00:16:27 & 00:16:27 & 00:13:57 & 00:13:57 & & 07:04:28 & 07:52:27 & 18:45:22 + + 1 hour periodicity & - & 00:15:27 & 00:15:27 & 00:06:44 & 00:06:44 & & 08:00:55 & 09:27:05 & 27:15:25 + 2 
hour periodicity & - & 00:17:23 & 00:17:23 & 00:09:46 & 00:09:46 & & 07:40:18 & 07:57:36 & 19:48:00 + 4 hour periodicity & - & 00:18:33 & 00:18:33 & 00:13:03 & 00:13:03 & & 06:27:36 & 07:40:23 & 18:21:55 + + 1 hour periodicity & & & & 00:05:14 & 00:05:14 & & 05:31:14 & 05:31:14 & 07:37:44 + 2 hour periodicity & & & & 00:06:38 & 00:06:38 & & 05:20:34 & 05:20:34 & 06:42:41 + 4 hour periodicity & & & & 00:07:41 & 00:07:41 & & 05:16:27 & 05:16:27 & 05:39:16 + + 1 hour periodicity & & & & 00:04:27 & 00:04:27 & & 05:26:13 & 05:26:13 & 07:11:37 + 2 hour periodicity & & & & 00:05:37 & 00:05:37 & & 05:16:22 & 05:16:22 & 06:22:34 + 4 hour periodicity & & & & 00:06:29 & 00:06:29 & & 05:13:32 & 05:13:32 & 05:31:21 +
Background: large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This places a cost not only on maintaining the job, but also on the time taken to reinstate the job, and it risks losing the data and execution the job had accomplished before it failed. Approaches that proactively detect computing core failures and relocate the failing core's job onto reliable cores are a significant step towards automating fault tolerance.

Method: this paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single-core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core levels. Experiments are pursued in the context of genome searching, a popular computational biology application.

Result: the key conclusion is that the proposed approaches are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches on average add 90% to the actual time for executing the job, whereas in the same experiment the multi-agent approaches add only 10% to the overall execution time.
graph drawing addresses the issue of constructing geometric representations of graphs in a way to gain better understanding and insights into the graph structures .surveys on graph drawing can be found in .if the given data is hierarchical ( such as a file system ) , then it can often be expressed as a rooted tree . among existing algorithms in the literature for drawing rooted trees ,the work of developed a popular method for drawing binary trees .the idea behind is to recursively draw the left and right subtrees independently in a bottom - up manner , then shift the two drawings along the -direction as close to each other as possible while centering the parent of the two subtrees one level up between their roots .different from the conventional ` triangular ' tree drawing of , -drawings , radial drawings and balloon drawings are also popular for visualizing hierarchical graphs .since the majority of algorithms for drawing rooted trees take linear time , rooted tree structures are suited to be used in an environment in which real - time interactions with users are frequent .consider figure [ fgillustration ] for an example .a _ balloon drawing _ of a rooted tree is a drawing having the following properties : * all the children under the same parent are placed on the circumference of the circle centered at their parent ; * there exist no edge crossings in the drawing ; * the radius of the circle centered at each node along any path from the root node reflects the number of descendants associated with the node ( i.e. , for any two edges on a path from the root node , the farther from the root an edge is , the shorter its drawing length becomes ) . in the balloon drawing of a tree ,each subtree resides in a _wedge _ whose end - point is the parent node of the root of the subtree .the ray from the parent node to the root of the subtree divides the wedge into two _ sub - wedges_. depending on whether the two sub - wedge angles are required to be identical or not , a balloon drawing can further be divided into two types : drawings with _ even sub - wedges _ ( see figure [ fgillustration](a ) ) and drawings with _ uneven sub - wedges _ ( see figure [ fgillustration](b ) ) .one can see from the transformation from figure [ fgillustration](a ) to figure [ fgillustration](b ) that a balloon drawing with uneven sub - wedges is derived from that with even sub - wedges by shrinking the drawing circles in a bottom - up fashion so that the drawing area is as small as possible . another way to differentiatethe two is that for the even sub - wedge case , it is required that the position of the root of a subtree coincides with the center of the enclosing circle of the subtree . _ aesthetic criteria _ specify graphic structures and properties of drawing , such as minimizing number of edge crossings or bends , minimizing area , and so on , but the problem of simultaneously optimizing those criteria is , in many cases , np - hard .the main aesthetic criteria on the angle sizes in balloon drawings are _ angular resolution _ , _ aspect ratio _ , and _ standard deviation of angles_. note that this paper mainly concerns the angle sizes , while it is interesting to investigate other aesthetic criteria , such as the drawing area , total edge length , etc . 
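As a small illustration of the third property above, the following sketch (the function name and the example tree are ours) checks that the assigned edge lengths of a rooted tree strictly decrease along every root-to-leaf path, which is what a balloon drawing requires of edges farther from the root.

```python
# Sketch only: verify the "farther from the root, shorter the edge" property
# for a rooted tree given as child lists plus an edge-length map.

def respects_balloon_lengths(children, edge_len, root):
    """children: node -> list of child nodes; edge_len: (parent, child) -> length."""
    stack = [(c, edge_len[(root, c)]) for c in children.get(root, [])]
    while stack:
        node, parent_len = stack.pop()
        for c in children.get(node, []):
            l = edge_len[(node, c)]
            if l >= parent_len:          # a deeper edge must be strictly shorter
                return False
            stack.append((c, l))
    return True

if __name__ == "__main__":
    children = {"r": ["a", "b"], "a": ["c", "d"]}
    edge_len = {("r", "a"): 4.0, ("r", "b"): 3.0,
                ("a", "c"): 2.0, ("a", "d"): 1.5}
    print(respects_balloon_lengths(children, edge_len, "r"))   # True
```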
given a drawing of tree , an angle formed by the two adjacent edges incident to a common node is called an angle incident to node .note that an angle in a balloon drawing consists of two sub - wedges which belong to two different subtrees , respectively ( see figure [ fgillustration ] ) .with respect to a node , the _ angular resolution _ is the smallest angle incident to node , the _ aspect ratio _ is the ratio of the largest angle to the smallest angle incident to node , and the _ standard deviation of angles _ is a statistic used as a measure of the dispersion or variation in the distribution of angles , equal to the square root of the arithmetic mean of the squares of the deviations from the arithmetic mean . the _ angular resolution _ ( resp ., _ aspect ratio _ ; _ standard deviation of angles _ ) of a drawing of is defined as the minimum angular resolution ( resp ., the maximum aspect ratio ; the maximum standard deviation of angles ) among all nodes in . the angular resolution ( resp . ,aspect ratio ; standard deviation of angles ) of a tree drawing is in the range of ( resp . , and ) .a tree layout with a large angular resolution can easily be identified by eyes , while a tree layout with a small aspect ratio or standard deviation of angles often enjoys a very balanced view of tree drawing .it is worthy of pointing out the fundamental difference between aspect ratio and standard deviation .the aspect ratio only concerns the deviation between the largest and the smallest angles in the drawing , while the standard deviation deals with the deviation of all the angles . with respect to a balloon drawing of a rooted tree , changing the order in which the children of a node are listed or flipping the two sub - wedges of a subtree affects the quality of the drawing .for example , in comparison between the two balloon drawings of a tree under different tree orderings respectively shown in figures [ fgexperiments](a ) and [ fgexperiments](b ) , we observe that the drawing in figure [ fgexperiments](b ) displays little variations of angles , which give a very balanced drawing .hence some interesting questions arise : _ how to change the tree ordering or flip the two sub - wedge angles of each subtree such that the balloon drawing of the tree has the maximum angular resolution , the minimum aspect ratio , and the minimum standard deviation of angles ? _ throughout the rest of this paper , we let _ re _ , _ ra _ , and _ de _ denote the problems of optimizing angular resolution , aspect ratio , and standard deviation of angles , respectively . in this paper, we investigate the tractability of the re , ra , and de problems in a variety of cases , and our main results are listed in table [ tb - results ] , in which trees with ` flexible ' ( resp . , ` fixed ' ) uneven sub - wedges refer to the case when sub - wedges of subtrees are ( resp . , are not ) allowed to flip ; a ` semi - ordered ' tree is an unordered tree where only the circular ordering of the children of each node is fixed , without specifying if this ordering is clockwise or counterclockwise in the drawing .note that a semi - ordered tree allows to flip uneven sub - wedges in the drawing , because flipping sub - wedges of a node in the bottom - up fashion of the tree does not modify the circular ordering of its children .see figure [ fgexperiments ] for an experimental example with the drawings which achieve the optimality of ra1ra4 . 
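The three measures for a single node can be written down directly from these definitions. The sketch below assumes the angles incident to the node are given in radians and sum to 2*pi; the function names are ours. The drawing-level values are then the minimum angular resolution, the maximum aspect ratio and the maximum standard deviation taken over all nodes.

```python
# Per-node angle measures for a tree drawing (names are ours, not the paper's).
import math

def angular_resolution(angles):
    return min(angles)

def aspect_ratio(angles):
    return max(angles) / min(angles)

def angle_std_dev(angles):
    mean = sum(angles) / len(angles)          # equals 2*pi / number of angles
    return math.sqrt(sum((a - mean) ** 2 for a in angles) / len(angles))

if __name__ == "__main__":
    # three children around a root: a balanced drawing vs. a skewed one
    balanced = [2 * math.pi / 3] * 3
    skewed = [math.pi / 6, math.pi / 2, 4 * math.pi / 3]
    for name, ang in [("balanced", balanced), ("skewed", skewed)]:
        print(name, round(angular_resolution(ang), 3),
              round(aspect_ratio(ang), 3), round(angle_std_dev(ang), 3))
```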
in table [ tb - results ] , with the exception of re1 and ra1 ( which were previously obtained by lin and yen in ) , all the remaining results are new .we also give 2-approximation algorithms for ra3 and ra4 , and -approximation algorithms for de3 and de4 .finding improved approximation bounds for those intractable problems remains an interesting open question .clcccc & & denotation & complexity & reference + & ( 0,0)(0,-8 ) unordered trees with ( 0,0)(0,-8 ) & angular resolution & re1 & & + & even sub - wedges & aspect ratio & ra1 & & + & & standard deviation & de1 & & [ thm [ thm - de1 ] ] + & ( 0,0)(0,-8 ) semi - ordered trees with ( 0,0)(0,-8 ) & angular resolution & re2 & & [ thm [ thm - re2 ] ] + & flexible uneven sub - wedges & aspect ratio & ra2 & & [ thm [ thm - ra2 ] ] + & & standard deviation & de2 & & [ thm [ thm - de2 ] ] + & ( 0,0)(0,-8 ) unordered trees with ( 0,0)(0,-8 ) & angular resolution & re3 & & [ thm [ thm - re3 ] ] + & fixed uneven sub - wedges & aspect ratio & ra3 & npc & [ thm [ thm - ra3 ] , [ thm - ra3-approx ] ] + & & standard deviation & de3 & npc & [ thm [ thm - de3 ] , [ thm - de3-approx2 ] ] + & ( 0,0)(0,-8 ) unordered trees with ( 0,0)(0,-8 ) & angular resolution & re4 & & [ thm [ thm - re4 ] ] + & flexible uneven sub - wedges & aspect ratio & ra4 & npc & [ thm [ thm - ra4 ] , [ thm - ra4-approx ] ] + & & standard deviation & de4 & npc & [ thm [ thm - de4 ] , [ thm - de4-approx2 ] ] + the rest of the paper is organized as follows .some preliminaries are given in section [ sec : preliminary ] .the problems for cases c1 and c2 are investigated in section [ sec : c1c2 ] .the problems for cases c3 and c4 are investigated in section [ sec : c3c4 ] .the approximation algorithms for those intractable problems are given in section [ sec : approx ] .finally , a conclusion is given in section [ sec : conclusion ] .in this section , we first introduce two conventional models of balloon drawing , then define our concerned problems , and finally introduce some related problems .there exist two models in the literature for generating _ balloon drawings _ of trees .given a node , let be the radius of the drawing circle centered at . if we require that = for arbitrary two nodes and that are of the same depth from the root of the tree , then such a drawing is called a balloon drawing under the _ fractal model _ .the fractal drawing of a tree structure means that if and are the lengths of edges at depths and , respectively , then where is the predefined ratio ( ) associated with the drawing under the fractal model .clearly , edges at the same depth have the same length in a fractal drawing . unlike the fractal model, the _ subtrees with nonuniform sizes _ ( abbreviated as _ sns _ ) model allows subtrees associated with the same parent to reside in circles of different sizes ( see also figure [ fgillustration](a ) ) , and hence the drawing based on this model often results in a clearer display on large subtrees than that under the fractal model . 
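Under the fractal model, once the ratio is fixed, the edge length at every depth follows immediately; a minimal sketch (with illustrative parameter values of our choosing) is:

```python
# Fractal-model edge lengths: every edge at depth d has the same length, and
# each level shrinks the previous one by a fixed ratio r in (0, 1).

def fractal_edge_lengths(max_depth, l1=1.0, r=0.5):
    """Return the edge length used at each depth 1..max_depth."""
    lengths = [l1]
    for _ in range(1, max_depth):
        lengths.append(lengths[-1] * r)      # l_{d+1} = r * l_d
    return lengths

print(fractal_edge_lengths(4))   # [1.0, 0.5, 0.25, 0.125]
```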
givena rooted ordered tree with nodes , a balloon drawing under the sns model can be obtained in time ( see ) in a bottom - up fashion by computing the edge length and the angle between two adjacent edges respectively according to and ( see figure [ fgillustration](a ) ) where is the radius of the inner circle centered at node ; is the circumference of the inner circle ; is the radius of the outer circle enclosing all subtrees of the -th child of , and is the radius of the outer circle enclosing all subtrees of ; since there exists a gap between and the sum of all diameters , we can distribute to every the gap between them evenly , which is called a free arc , denoted by .note that the balloon drawing under the sns model is our so - called balloon drawing with even sub - wedges .a careful examination reveals that the area of a balloon drawing with even sub - wedges ( generated by the sns model ) may be reduced by shrinking the free arc between each pair of subtrees and shortening the radius of each inner circle in a bottom - up fashion , by which we can obtain a smaller - area balloon drawing with uneven sub - wedges ( e.g. , see the transformation from figure [ fgillustration](a ) to figure [ fgillustration](c ) ) . in what follows, we introduce some notation , used in the rest of this paper .circular permutation _ is expressed as : where for , is placed along a circle in a counterclockwise direction .note that is adjacent to ; denotes ; denotes .due to the hierarchical nature of trees and the ways the aesthetic criteria ( measures ) for balloon drawings are defined , an algorithm optimizing a _ star graph _ can be applied repeatedly to a general tree in a bottom - up fashion , yielding an optimum solution with respect to a given aesthetic criterion .thus , it suffices to consider the balloon drawing of a star graph when we discuss these problems .a star graph is characterized by a root node together with its children , each of which is the root of a subtree located entirely in a _wedge _ , as shown in figure [ fgillustration](a ) ( for the even sub - wedge type ) and figure [ fgunevennotation ] ( for the uneven sub - wedge type ) . inwhat follows , we can only see figure [ fgunevennotation ] because the even sub - wedge type can be viewed as a special case of the uneven sub - wedge type .the ray from to further divides the associated wedge into two sub - wedges and with sizes of angles and , respectively .note that and need not be equal in general .an _ ordering _ of s children is simply a circular permutation , in which for each .there are two dimensions of freedom affecting the quality of a balloon drawing for a star graph .the first is concerned with the ordering in which the children of the root node are drawn . witha given ordering , it is also possible to alter the order of occurrences of the two sub - wedges associated with each child of the root . with respect to child and its two sub - wedges and , we use to denote the index of the first sub - wedge encountered in a counterclockwise traversal of the drawing . for convenience ,we let .we also write ( ) , which is called the _ sub - wedge assignment _( or simply _ assignment _ ) . as shown in figure [ fgunevennotation ] , the sequence of sub - wedges encountered along the cycle centered at in a counterclockwise direction can be expressed as : if for each , then the drawing is said to be of _ even sub - wedge type _ ; otherwise , it is of _ uneven sub - wedge type_. 
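The exact SNS formulas are not reproduced here, but the bottom-up idea described above can be sketched as follows for a single star: choose the inner radius just large enough that the children's outer circles fit side by side around it, distribute the leftover circumference as equal free arcs, and read off each child's wedge angle from the arc it occupies. All names and the enclosing-radius estimate are our simplifications, not the paper's model.

```python
# Hedged sketch of an SNS-style star layout; the real model's formulas differ.
import math

def sns_star_layout(child_outer_radii, min_radius=1.0):
    m = len(child_outer_radii)
    needed = 2.0 * sum(child_outer_radii)            # room for all diameters
    r = max(min_radius, needed / (2.0 * math.pi))    # inner-circle radius
    free_arc = (2.0 * math.pi * r - needed) / m      # evenly distributed gap
    angles = [(2.0 * rho + free_arc) / r for rho in child_outer_radii]
    outer_radius = r + 2.0 * max(child_outer_radii)  # crude enclosing radius
    return r, angles, outer_radius

if __name__ == "__main__":
    r, angles, outer = sns_star_layout([1.0, 0.5, 0.25, 0.25])
    print(round(r, 3), [round(a, 3) for a in angles], round(outer, 3))
    print("angles sum to 2*pi:", round(sum(angles), 6) == round(2 * math.pi, 6))
```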
as mentioned earlier , the order of the two sub - wedges associated with a child ( along the counterclockwise direction ) affects the quality of a drawing in the uneven sub - wedge case . for the case of uneven sub - wedge type ,if the assignment is given _ a priori _ , then the drawing is said to be of _ fixed _ uneven sub - wedge type ; otherwise , of _ flexible _ uneven sub - wedge type ( i.e. , is a design parameter ) .as shown in figure [ fgunevennotation ] , with respect to an ordering and an assignment in circular permutation ( [ e - subwedge - uneven ] ) , and , , are neighboring nodes , and the size of the angle formed by the two adjacent edges and is . hence , the _ angular resolution _ ( denoted by ) , the _ aspect ratio _ ( denoted by ) , and the _ standard deviation of angles _ ( denoted by ) can be formulated as we observe that the first and third terms inside the square root of the above equation are constants for any circular permutation and assignment , and hence , the second term inside the square root is the dominant factor as far as is concerned .we denote by the sum of products of sub - wedges , which can be expressed as : we are now in a position to define the re , ra and de problems in table [ tb - results ] for four cases ( c1 , c2 , c3 , and c4 ) in a precise manner .the four cases depend on whether the circular permutation and the assignment in a balloon drawing are fixed ( i.e. , given a priori ) or flexible ( i.e. , design parameters ) .for example , case c3 allows an arbitrary ordering of the children ( i.e. , the tree is unordered ) , but the relative positions of the two sub - wedges associated with a child node are fixed ( i.e. , flipping is not allowed ) .the remaining three cases are easy to understand .we consider the most flexible case , namely , c4 , for which both and are design parameters , which can be chosen from the set of all circular permutations of and the set of all -bit binary strings , respectively . the re and ra problems , respectively , are concerned with finding and to achieve the following : the de problem is concerned with finding and to achieve the following : as stated earlier , is closely related to the sop problem , which is concerned with finding and to achieve the following : before deriving our main results , we first recall two problems , namely , the _ two - station assembly line problem _ ( 2sal ) and the _ cyclic two - station workforce leveling problem _ ( 2slw ) that are closely related to our problems of optimizing balloon drawing under a variety of aesthetic criteria .consider a serial assembly line with two stations , say and , and a set of jobs .each job consists of two tasks processed by the two stations , respectively , where ( resp . , )is the workforce requirement at ( resp . , ) .assume the processing time of each job at each station is the same , say .consider a circular permutation of where is a circular permutation of .at any time point , a single station can only process one job .we also assume that the two stations are always busy . during the first time range ] , and are processed at and stations respectively , and the workforce requirement is .for example , consider where , , , and .for a certain circular permutation of , the workforce requirements for each period of time as well as the jobs served at the two stations are given in figure [ fgworkforceplanning ] , where the largest workforce requirement is 11 ; the range of the workforce requirements among all the time periods is [ 3,11 ] . 
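The workforce bookkeeping in this example is easy to mimic in code. The sketch below (with made-up job requirements, not the instance of figure [fgworkforceplanning]) computes, for a circular permutation of jobs, the requirement of each time period as the B-station task of the current job plus the A-station task of the next one, and brute-forces the circular order that minimises the largest requirement, i.e. the 2SAL objective.

```python
# Illustrative brute force for the 2SAL objective on a handful of jobs.
from itertools import permutations

def workforce_profile(jobs):
    """jobs: list of (a, b) requirements in the circular order they are launched."""
    n = len(jobs)
    return [jobs[i][1] + jobs[(i + 1) % n][0] for i in range(n)]

def best_circular_order(jobs):
    first, rest = jobs[0], jobs[1:]
    best = None
    for perm in permutations(rest):              # fix one job to remove rotations
        order = [first] + list(perm)
        peak = max(workforce_profile(order))
        if best is None or peak < best[0]:
            best = (peak, order)
    return best

if __name__ == "__main__":
    jobs = [(3, 5), (6, 2), (4, 4), (1, 7)]      # made-up (a, b) requirements
    peak, order = best_circular_order(jobs)
    print("minimal largest requirement:", peak)
    print("profile of the best order:", workforce_profile(order))
```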
[cols="^,^,^,^",options="header " , ] the and problems are defined as follows : * * 2sal * : given a set of jobs , find a circular permutation of the jobs such that the largest workforce requirement is minimized . * * 2slw * ( decision version ) : given a set of jobs and a range ] , the 2slw problem decides wether a circular permutation exists such that the workforce requirement ( i.e. , the sum of the workforce requirements for two jobs respectively executed at two stations at the same time ) for each time period is between and .given a balloon drawing of an unordered tree with fixed uneven sub - wedges , the ra3 decision problem decides whether a circular permutation so that the size of each angle ( i.e. , the sum of two adjacent subwedges respectively from two various children ) is between and .it is obvious that the decision version of the ra3 problem can be captured by the 2slw problem ( and vice versa ) in a straightforward way , hence np - completeness follows . as for the ra4 problem ,since the upper bound ( i.e. , in np ) for the ra4 problem is easy to show , we show the ra4 problem to be np - hard by the reduction from the 2slw problem as follows .the idea of our proof is to design an ra4 instance so that one can not obtain any better solution by flipping sub - wedges . to this end , from a 2slw instance a set of jobs and two numbers where for each , we construct a ra4 instance a set of sub - wedges and two numbers and in which we let and ; and for each ; and .now we show that there exists a circular permutation of so that the workforce requirement for each time period is between and if and only if there exist a circular permutation of and a sub - wedge assignment so that the size of each angle in the ra4 instance is between and .we are given a 2slw instance with a circular permutation of so that the workforce requirement for each time period is between and .it turns out that for each .it implies that for each .consider and in the ra4 instance constructed above . since and for each in the construction , thus .that is , for each .conversely , we are given a ra4 instance with a circular permutation of and a sub - wedge assignment so that the size of each angle in the ra4 instance is between and .for any , since , hence . in the ra4 instance, the size of each angle can be , , or for some . for convenience , the angle with size ( resp ., ; ) for some is called a type-00 ( resp . ,01 ; 11 ) angle ( note that the order of and is not crucial here ) .if there exists a type-00 angle in the ra4 instance , then there must exist at least one type-11 angle in this instance ; otherwise , all the angles are type-01 angles . in the casewhen there exists a type-00 angle with size so that there exists a type-11 angle with size for some , then w.l.o.g . , the sub - wedge sequence of the instance is expressed as a circular permutation where are sub - wedge subsequences ; the number of sub - wedges in each of and ( resp . , )is odd ( resp . , even ) .let be the reverse of .consider a new circular permutation , in which the size of each angle is between and , because the size of each angle in and is originally between and ; ( since and ) ; similarly , .if there still exists a type-00 angle in the new circular permutation , then we repeat the above procedure until we obtain a circular permutation where all the angles are type-01 angles . by doing this , the size of each angle in is between and , and the sub - wedge assignment in the drawing achieved by is or . 
in the case of ,we let , then becomes .consider the 2slw instance ( constructed above ) corresponding to the circular permutation . in the 2slw instance , for each , workforce requirement .hence , , which implies . we can utilize a technique similar to the reduction from _ hamiltonian - circle problem on cubic graphs _( hc - cg ) to 2slw ( ) to establish np - hardness for de3 and de4 .hence , we have the following theorem , whose proof is given in appendix because it is too cumbersome and our main result for the de3 and de4 problems is to design their approximation algorithms .[ thm - de3][thm - de4]both the de3 and de4 problems are np - complete .we have shown ra3 and ra4 to be np - complete .the results on approximation algorithms for those problems are given as follows .[ thm - ra3-approx][thm - ra4-approx]algorithm is a -approximation algorithm for ra3 and ra4 .let ( resp ., and ) be the minimal angle ( resp ., the maximal angle and the aspect ratio ) among the circular permutation generated by algorithm [ alg : re3-re4 ] . denote ( resp . , and ) as the maximum of the minimal angle ( resp . , the minimum of the maximal angle and the optimal aspect ratio ) among any circular permutation . since where is the sub - wedge adjacent to in the circular permutation with the minimum of the maximal angle, we have . by theorem [ thm - re3 ] , we have . therefore , . next , we design approximation algorithms for the np - complete de problems . herewe only consider the approximation algorithms for the sop4 and de4 problems because the approximation algorithms for the sop3 and de3 problems are similar and simpler .recall that the sop4 problem is equivalent to finding a matching for bipartite graph , such that is the minimal , where .consider a matching for bipartite graph in which is matched with for each , i.e. , assume that consists of subcycles for , in which we recall that denotes the set of the edges corresponding to each pair for . according to matching , we have that each subcycle in contains at least one matched edge between and for some .let the _ exchange graph _ for bipartite graph be a complete graph in which * each node in corresponds to a subcycle of , i.e. , ; * each edge in corresponding to two subcycles and in has cost , .( in fact , the cost represents the least cost of exchanging edges and in . ) when for some , we denote and .let be a minimum spanning tree over . with exchange graph and its minimum spanning tree as the input of algorithm [ alg : sop4 ], we can show that algorithm [ alg : sop4 ] is a 2-approximation algorithm for the sop4 problem .construct the exchange graph for find the minimum spanning tree of exchange graph where let for each edge ( noticing that if for some , then and ) , where each is said to correspond to ( i.e. , there are ) let find a set that includes element but is not considered before append the elements in set to the end of set ( i.e. , the duplicate elements are not deleted ) let both edges and correspond to , where edges and in correspond to and , respectively for each set in , remove the duplicate elements in each set order the elements in each set , and then denote the new set as where ( resp . , ) is the -th minimum ( resp . , maximum ) in ; the cardinality of is + is matched with for is matched with output such a matching for figure [ fgalgsop4 ] gives an example to illustrate how the algorithm works .figure [ fgalgsop4](a ) is where the solid lines ( resp ., dash lines ) are the edges in ( resp . 
, in ) .figure [ fgalgsop4](b ) is its exchange graph , and we assume that figure [ fgalgsop4](c ) is the minimum spanning tree for where each edge in has weight .we illustrate each after each modification in line 11 of algorithm [ alg : sop4 ] as follows : * initial : , , , , . * the elements in appended to the end of : + , , , . * the elements in appended to the end of : + .based on the above , algorithm [ alg : sop4 ] returns , and is shown in figure [ fgalgsop4](d ) .in fact , algorithm [ alg : sop4 ] provides a 2-aproximation algorithm for sop4 .a slight modification also yields a 2-approximation algorithm for sop3 . before showing our result , we need the following notation and lemma . a _ permutation _ is a 1-to-1 mapping of onto itself , which can be expressed as : or in compact form in terms of _ factors_. ( note that it is different from the circular permutation used previously . ) if for , and , then is called a _ factor _ of the permutation . a factor with called a nontrivial factor .note that a matching for the bipartite graph constructed above can be viewed as a permutation .[ lemma - perm ] for , let ( resp ., ) where ( resp . , ) is the -th maximum ( resp . , minimum ) among all .let be a -to- mapping , i.e. , a permutation of . if is a permutation consisting of only a nontrivial factor with size , then where for any .moreover , if for each and , then note that the difference between equation and inequality is that inequality can be applied only when the factor size is known .we proceed by induction on the size of . if , holds .suppose that the required two inequalities hold when . when , where is a size- permutation consisting of a nontrivial factor with size . then , for proving equation ( [ e - lemma - perm-1 ] ) , we replace the first term in equation ( [ e - lemma - perm-3 ] ) by the inductive hypothesis of equation ( [ e - lemma - perm-1 ] ) , and then obtain : since and . for proving equation ( [ e - lemma - perm-2 ] ) ,we replace the first term in equation ( [ e - lemma - perm-3 ] ) by the inductive hypothesis of equation ( [ e - lemma - perm-2 ] ) , and then obtain : since by the premise of equation ( [ e - lemma - perm-2 ] ) ( note that the permutation consists of a nontrivial factor of size , and hence the case does not occur except for ) .now , we are ready to show our result : [ thm - de4-approx1]there exist -approximation algorithms for sop3 and sop4 , which run in time .recall that given an unordered tree with fixed ( resp . , flexible )subwedges , the sop3 ( resp . ,sop4 ) problem is to find a circular permutation of ( resp ., a circular permutation of and a sub - wedge assignment ) so that the sum of products of adjacent subwedge sizes ( ) is as small as possible .we only consider sop4 ; the proof of sop3 is similar and simpler . in what follows ,we show that algorithm [ alg : sop4 ] correctly produces the 2-approximation solution for sop4 in time . from , we have , which is explained briefly as follows . from , we have that can be transformed from by a sequence of exchanges which can be constructed as follows .let denote the matching transformed by the sequence of exchanges for .we say a node in is _ satisfied _ in if its adjacent node in is the same as its adjacent node in . for , if the sub - wedge is satisfied , then is a null exchange .otherwise , if the node adjacent to in is adjacent to the sub - wedge in for ( i.e. , is not adjacent to in ) , then let be the exchange between the edges respectively incident to and in . 
here , by observing each non - null exchange , .hence , .let we claim that . since is a hamiltonian cycle transformed from consisting of subcycles , there exist at least times of merging subcycles during the transformation ( the sequence of exchanges ) .we can view as a permutation with several factors .there must exist a set of edges in forming a spanning tree for exchange graph such that each edge in must correspond to an edge in which can not be in a trivial factor of permutation , i.e. , it can not be for some .therefore , by inequality ( [ e - lemma - perm-1 ] ) of lemma [ lemma - perm ] , since is the edge set of minimum spanning tree of .in what follows , we show the approximation ratio to be 2 .note that denotes the matching generated by algorithm [ alg : sop4 ] .let and in algorithm [ alg : sop4 ] . the last inequality above holds since for any never presents in the first summation term more than twice ; otherwise we can find another spanning tree with cost strictly less than that of .for example , we consider figure [ fgalgsop4](c ) .suppose that the cost of edge in is , rather than , i.e. , is used three times by , , and ( with costs , , and , respectively ) .we can obtain a contradiction by considering a spanning tree replacing edge by edge with cost , which is less than in general .( the cost of is less than that of . )recall that and .hence , combining the first and third terms of the above inequality , we obtain : the above inequality holds due to for any .since in every , we obtain : in what follows , we explain how the algorithm runs in time . in line 1, the exchange graph can be constructed in time as follows .it takes time to construct a complete graph with nodes in which the nodes corresponds subcycles in , and the cost of each edge is assumed to be infinity .then , it takes time to compute all possible for any .consider each .if and belong to two different subcycles in , say and , respectively , and for their corresponding edge in graph , then . obviously , after considering all possible in time , graph is the required exchange graph . in line 2 ,it is well - known that the minimum spanning tree for graph can be found in time .line 3 runs in time since each element is denoted only once .line 4 is done in time .we explain how lines 513 can be done in time as follows. note that in line 3 , in addition that each set includes four elements , we record that each element knows which set includes it .hence , in line 7 , any set including element can be found in time .line 8 is done in time , since each set is a linked list .note that in line 7 all the sets that includes element will be considered at the end of line 12 , because in line 8 a duplicate element of is appended to and will be considered again in later iteration .lines 9 and 10 are done in time .therefore , lines 710 are done in time .we observe from lines 5 , 6 , 8 , 10 that each element in , , is considered once at the end of line 12 . since the number of elements in is ,there are iterations , each of which is done in time .hence , lines 512 are done in time . 
In line 13, by scanning each set in , all duplicate elements are deleted in time . Line 14 can be done in time , because the ordering of is known. Lines 15-18 are done in time , because each element is matched only once.

Note that algorithm [alg:sop4] is a 2-approximation algorithm for the sop4 problem rather than the de4 problem, because the approximation ratio argument breaks down when the difference of the first and third terms inside the square root of equation ([e-stddev2]) is negative. Therefore, we rewrite equation ([e-stddev2]) as: note that the combination of the first and fourth terms inside the square root of the above equation is the variance of , and hence must be positive. Therefore, the de4 problem is equivalent to minimizing the sum of the second and third terms, i.e., to minimize . Algorithm [alg:de4] provides an -approximation algorithm for de4; a slight modification also yields an -approximation algorithm for de3. Figure [fgalgde4](a) is an example for algorithm [alg:de4]. The algorithm is almost the same as algorithm [alg:sop4], except that lines 15-18 in algorithm [alg:sop4] are replaced as follows:

13: *for each* with (otherwise trivially) *do*
14: let and
15: an element is said to be _available_ if it is not matched yet
16: *for each* *do*
17: *if* *do*
18: the available maximum is matched with the available second minimum
19: *else*
20: the available minimum is matched with the available second maximum
21: *end if*
22: *end for*
23: is matched with ; is matched with , where and are the remaining elements excluded in the above condition for some
24: *end for*

[thm-de3-approx2] [thm-de4-approx2] There exist -approximation algorithms for de3 and de4, which run in time. Recall that given an unordered tree with fixed (resp., flexible) sub-wedges, the de3 (de4) problem is to find a circular permutation of (resp., a circular permutation of and a sub-wedge assignment ) so that the standard deviation of angles ( ) is as small as possible. We only consider de4; the proof of de3 is similar and simpler.
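The branch condition tested on line 17 is not given explicitly here, so we do not attempt to re-implement algorithm [alg:de4] itself; instead, the following brute-force baseline (ours) can be used to sanity-check any such pairing heuristic on very small DE4 instances: it enumerates every circular ordering of the children and every sub-wedge flip, forms the angles as sums of adjacent sub-wedges from different children, and returns the drawing with the smallest standard deviation of angles.

```python
# Exhaustive DE4 baseline for tiny stars; only useful for a handful of children.
import math
from itertools import permutations, product

def angles_of(order, flips):
    # order: list of (w0, w1) sub-wedge pairs; flips: tuple of 0/1 per child
    seq = []
    for (w0, w1), f in zip(order, flips):
        seq.extend((w1, w0) if f else (w0, w1))
    n = len(order)
    # angle i is the second sub-wedge of child i plus the first of child i+1
    return [seq[2 * i + 1] + seq[(2 * i + 2) % (2 * n)] for i in range(n)]

def std_dev(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def brute_force_de4(subwedges):
    first, rest = subwedges[0], subwedges[1:]
    best = None
    for perm in permutations(rest):              # fix one child against rotations
        order = [first] + list(perm)
        for flips in product((0, 1), repeat=len(order)):
            s = std_dev(angles_of(order, flips))
            if best is None or s < best[0]:
                best = (s, order, flips)
    return best

if __name__ == "__main__":
    subwedges = [(0.9, 0.3), (0.5, 0.5), (0.2, 0.8), (0.6, 0.4)]  # made-up sizes
    s, order, flips = brute_force_de4(subwedges)
    print(round(s, 4), order, flips)
```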
in what follows ,we show that algorithm [ alg : de4 ] correctly produces -approximation solution in time .let be the matching for witnessing the optimal solution of the de4 problem , and be the matching generated by algorithm [ alg : de4 ] . from theorem [ thm - de4-approx1 ] , , and hence since for every edge .observing the matching for each generated by algorithm [ alg : de4 ] ( e.g. , see also figure [ fgalgde4](a ) ) , without lose of generality , we assume that and is odd such that in , inequality ( [ e - jrelation ] ) is explained as follows .since line 17 in algorithm [ alg : de4 ] considers the relationship between and for , thus , without loss of generality , we use numbers ( i.e. , ) to classify all data .then the data is alternately expressed as inequality ( [ e - jrelation ] ) , in which , , are local minimal ; , , , are local maximal .then , therefore , consider figure [ fgalgde4](b ) for an example .the above multiplication relationship of those and for figure [ fgalgde4](a ) is given in figure [ fgalgde4](b ) .figure [ fgalgde4pattern ] shows how to transform from figure [ fgalgde4](a ) to figure [ fgalgde4](b ) . by inequality ( [ e - jrelation ] ) , since for or or or or or or , we obtain : considering figure [ fgalgde4](b ) for an example , . by inequalities ( [ e - de4 - 1 ] ) and ( [ e - de4 - 2 ] ) , we have in what follows, we explain how the algorithm runs in time .it suffices to explain lines 1324. lines 14 and 15 are just notations for the proof of correctness , not being executed . in line 17, and can be calculated in time .hence , lines 1324 in algorithm [ alg : de4 ] runs in time , because the concerned availability ( available maximum , minimum , second maximum , second minimum ) is recorded and updated at each iteration in time ( noticing that and have been sorted , so has ) ; each element is recorded as the concerned availability at most and matched only once .this paper has investigated the tractability of the problems for optimizing the angular resolution , the aspect ratio , as well as the standard deviation of angles for balloon drawings of ordered or unordered rooted trees with even sub - wedges or uneven sub - wedges .it turns out that some of those problems are np - complete while the others can be solved in polynomial time .we also give some approximation algorithms for those intractable problems .a line of future work is to investigate the problems of optimizing other aesthetic criteria of balloon drawings . 10 url # 1`#1`urlprefix g. d. battista , p. eades , r. tammassia , i. g. tollis , graph drawing : algorithms for the visualization of graphs , prentice hall , 1999 .j. carri , r. kazman , research report : interacting with huge hierarchies : beyond cone trees , in : iv 95 , ieee cs press , 1995 .p. eades , drawing free trees , bulletin of institute for combinatorics and its applications ( 1992 ) 1036 . c. jeong , a. pang , reconfigurable disc trees for visualizing large hierarchical information space , in : infovis 98 , ieee cs press , 1998 .kao , m. sanghi , an approximation algorithm for a bottleneck traveling salesman problem , journal of discrete algorithms 7 ( 3 ) ( 2009 ) 315326 .k. kaufmann , d. wagner ( eds . ) , drawing graphs : methods and models , vol .2025 of lncs , springer , 2001 .h. koike , h. yoshihara , fractal approaches for visualizing huge hierarchies , in : vl 93 , ieee cs press , 1993 .lee , g. l. 
vairaktarakis , workforce planning in mixed model assembly systems , operations research 45 ( 4 ) ( 1997 ) 553567 .lin , h .- c .yen , on balloon drawings of rooted trees , journal of graph algorithms and applications 11 ( 2 ) ( 2007 ) 431452 .g. melanon , i. herman , circular drawing of rooted trees , reports of the centre for mathematics and computer sciences , report number ins-9817 .e. reingold , j. tilford , tidier drawing of trees , ieee trans .software eng .se-7 ( 2 ) ( 1981 ) 223228 .y. shiloach , arrangements of planar graphs on the planar lattice , ph.d .thesis , weizmann institute of science ( 1976 ) .g. l. vairaktarakis , on gilmore - gomory s open question for the bottleneck tsp , operations research letters 31 ( 6 ) ( 2003 ) 483391 . * on proof of theorem [ thm - de4 ] * + recall that the de problem is concerned with minimizing the standard deviation , which involves keeping all the angles as close to each other as possible .such an observation allows us to take advantage of what is known for the 2slw problem ( which also involves finding a circular permutation to bound a measure within given lower and upper bounds ) to solve our problems .it turns out that , like 2slw , de3 and and de4 are np - complete .even though de3 , de4 and 2slw bear a certain degree of similarity , a direct reduction from 2slw to de3 or de4 does not seem obvious .instead , we are able to tailor the technique used for proving np - hardness of 2slw to showing de3 and de4 to be np - hard . to this end , we first briefly explain the intuitive idea behind the np - hardness proof of 2slw shown in to set the stage for our lower bound proofs .the technique utilized in for the np - hardness proof of 2slw relies on reducing from the _ hamiltonian - circle problem on cubic graphs _( hc - cg ) ( a known np - complete problem ) .the reduction is as follows . for a given cubic graph with nodes ,we construct a complete bipartite graph consisting of _ blocks _ in the following way .( for convenience , ( resp . , ) is called the upper ( resp ., lower ) side . ) for each node adjacent to , , in cubic graph , a _ block_ of 14 nodes ( 7 on each side ) is associated to , where the upper side ( resp ., lower side ) contains three -nodes ( resp ., nodes ) corresponding to , , , and each side has a pair of -nodes , as well as a pair of -nodes ( as shown in figure [ fig : fg2slw ] ) .for the three blocks , , and associated with nodes , , and , respectively , each has a -node corresponding to ( because is adjacent to , , and ) .these three -nodes are labelled as , , and . in the construction ,nodes in and correspond to those tasks to be performed in stations and , respectively , in 2slw .as shown in figure [ fig : fg2slw ] , the nodes on the upper and lower sides in from the left to the right are associated with the following values , where and is any integer ; and . each edge in weight equal to the sum of the values of its end points .the instance of 2slw consists of jobs , in which jobs associated with pairs of -nodes are , , jobs associated with -nodes are , , , and jobs associated with pairs of -nodes are , .note that is a perfect matching for , and such a matching is called a _city matching_. the crux of the remaining construction is based on the idea of relating a permutation of the jobs } , j_{[2 ] } , ... , j_{[7n]}) ] , , are matches .note that } , w_{[i+1]1} ] ) to indicate an edge of a city matching ( resp . , transition matching ) . consider a special transition matching : i = 1 , ... , n , j = 1 , ... 
, 7\} ] , -subcycles for ( e.g. , (u_{11},v_{12})[v_{12},u_{12}](u_{12},v_{13})[v_{13},u_{13}](u_{13},v_{11}) ] for .hence , a chc for is formed by combining the subcycles . from , in order to yield a chc for , there are exactly three possible transaction matchings for as shown in figure [ fig : fg4matching ] .the design is such that edge ( resp . , and ) is in a hc of g if figure [ fig : fg4matching](i ) ( resp ., ( ii ) and ( iii ) ) is the chosen permutation for the constructed 2slw instance . following a somewhat complicated argument , proved that there exists a hamiltonian cycle ( hc ) for the cubic graph if and only if there exists a chc for , and such a chc for in turn suggest a sufficient and necessary condition for a solution for 2slw . .( sketch ) now we are ready to show the theorem .we only consider the de4 problem ; the de3 problem is similar and in fact simpler . recall that the de4 problem is equivalent to finding a balloon drawing optimizing .consider the following decision problem : : : given a star graph with flexible uneven angles specified by equation ( [ e - subwedge - uneven ] ) and an integer , determine whether a drawing ( i.e. , specified by the permutation and the assignments ( 0 or 1 ) for ( ) ) exists so that .it is obvious that the problem is in np ; it remains to show np - hardness , which is established by a reduction from hc - cg . in spite ofthe similarity between our reduction and the reduction from hc - cg to 2slw ( ) explained earlier , the correctness proof of our reduction is a lot more complicated than the latter , as we shall explain in detail shortly . in the new setting ,equations ( [ e-2slw - valuea ] ) and ( [ e-2slw - valueb ] ) become : where , , and where ( resp . , )is the -th maximum ( resp . , minimum ) among the values .( note that such a setting satisfies the premise of inequality ( [ e - lemma - perm-2 ] ) in lemma [ lemma - perm ] , and hence can utilize the inequality . )hence , we have that : note that the above implies that the -th upper ( resp ., lower ) node in is ( resp ., ) for and .define and in . hence , which are often utilized throughout the remaining proof .if is a set of transition edges , the sum of the transition edge weights is denoted by . if is a chc for where ( resp . , )is the city matching ( resp . , transition matching ) of the chc and for ( i.e., flipping sub - wedges is not allowed ) , then where is the weight of the transition edge .now based on the above setting , we show that there exists a hc for the cubic graph if and only if there exists a chc for the instance of the de4 problem such that .suppose that has a hamiltonian cycle .let },v_{[2 ] } , ... , v_{[n]} ] , there exists a pair ( } , v_{[2]} ] because } ] . from , we have that merged with , , and respectively in figure [ fig : fg4matching ] ( i ) , ( ii ) , and ( iii ) .hence , considering the order of },b_{[2]}, ...,b_{[n]} ] , of from the three possible matchings in figure [ fig : fg4matching ] , } ] with the master subcycle .besides , since the two -subcycles in each also are merged with in any matching of figure [ fig : fg4matching ] , we can obtain a complementary cycle traversing all nodes in .we need to check .in fact , we show that as follows . it suffices to show that for any where is the transition matching for . 
denote .we can prove that for every matching in figure [ fig : fg4matching ] .case ( i ) is shown as follows , and the others are similar : from the above computation , one should notice that if is matched with a sub - wedge larger than and is matched with a sub - wedge less than for , then includes .the converse , i.e. , showing the existence of a chc for the instance of de4 with implies the presence of a hc in , is rather complicated .the key relies on the following three claims . 1 .( _ bipartite _ ) there are no transition edges in between any pairs of upper ( resp . , lower ) nodes in .( _ block _ ) there are no transition edges in between two blocks in .( _ matching _ ) there is only one of , , and merged with the master subcycle in each .( recall that each node is adjacent to , , in , and hence the statement implies the presence of a hc in . ) claim 1 : : ( see ) given two transition matchings and between and , there exists a sequence of exchanges which transforms to .claim 2 : : if is a transition matching between and and involves two edges and crossing each other , then for .( claim 2 can be proved by easily checking . )it is very important to notice that claim 2 can be adapted even when may not be a chc .the transition matching where is matched with for every ( every transition edge is visually vertical ) is denoted by , i.e. , .note that if each edge in is between and , we can obtain by repeatedly using claim 2 in the order from the leftmost node to the rightmost node of , similar to the technique in the proof of claim 1 . . supposing that there exits transition edges between pairs of upper nodes in , then there must exist transition edges between pairs of lower nodes in , by pigeonhole principle .select one of the upper ( resp ., lower ) transition edges , say , ( resp . , say ) .consider .then . hence , .by the same technique , we can find where each edge in is between and such that , which is impossible . .by statement ( s-1 ) , each edge in the transition matching of is between and .suppose there exists at least one transition edge between two blocks .assume there are blocks , , with transition edges across two blocks .let .consider is the transition edge between and for , and .then there must exist a transition edge connecting to one of the lower nodes of , say , by pigeonhole principle , and we say the edge where and are respectively from and for and .note that must cross because and are in , i.e. , and . besides , we have and because two end points of edge belong to different blocks . consider .then for .that is , , which is a contradiction . .recall that in every involves subcycles , , , , , , from the leftmost to the rightmost .if there exists a chc for the instance , each -subcycle in has to be merged with some subcycle in the same by statements ( s-1 ) and ( s-2 ) . is at least 5 due to the merging of -subcycles from the following four cases ( here it suffice to discuss the merging of -subcycles with their adjacent subcycles because in others cases are larger ) : 1 . merged with and merged with : 2 . merged with and merged with : 3 . merged with and merged with : 4 . merged with and merged with : recall that there are subcycles in . hence we require at least times of merging subcycles to ensure these subcycles to be merged as a chc . since we have discussed that two -subcycles have to be merged in each ( i.e. 
, the total times of merging -subcycles are ) , we require at least more times of merging subcycles to obtain a chc .in fact , the times of merging subcycles is because each contributes once of merging subcycles . as a result , statement ( s-3 )is proved if we can show that after merging two -subcycles in each , the third merging subcycles in is to merge one of , , and with . * if , then . *if and the transition matching of is one of the matchings in figure [ fig : fg4matching ] , then . * if and the transition matching of is not any of the matchings in figure [ fig : fg4matching ] , then . * if , then . * if , then . *if , then .if the above statements on hold , then statement ( s-3 ) hold .the reason is as follows .remind that we need times of merging subcycles to be a chc .therefore , if there exists a transition matching of with for some ( i.e. , there are exactly two times of merging subcycles in ) , then there must exists a for some with .then , which is impossible because this results in the total larger than . .note that the transition matching of every can be viewed as a permutation of ( a mapping from to ) , and hence different ordering or different times of merging subcycles lead to a permutation with different factors , e.g , the permutation for figure [ fig : fg4matching](i ) is .if we let be a nontrivial factor of the permutation for , then by equation ( [ e - lemma - perm-1 ] ) in lemma [ lemma - perm ] . herewe concern the value induced by ( which is denoted by ) because it can be viewed as a lower bound of .if a factor includes but excludes , then we say that has a _ lack _ at . we observe that if the permutation for has a lack , then we can find a permutation for consisting of the factors without any lacks such that in which the number of factors of is the same as that of and the size of each factor is also the same .the reason is as follows .assume that has a factor with a lack at ( i.e. , ) and the minimum number appearing in the factor is .let be almost the same as except the factor in is modified as a factor without any lacks involving but excluding in . then by equation ( [ e - lemma - perm-1 ] ) in lemma [ lemma - perm ] , . in the similar way , we can find a permutation with factors without any lacks . in light of the above, it suffices to consider the permutation for consisting of the factors without any lacks when discussing the lower bound of .thus , in the following , when we say that the permutation for has a factor , this implies that has no lacks , so lemma [ lemma - perm ] can be applied to .now we are ready to prove the statements on .the statement of holds because is increased by at least 5 when two -subcycles have to be merged in each . as for the statement of , note that merging six subcycles implies a permutation with a factor of size seven .thus , by equation ( [ e - lemma - perm-1 ] ) in lemma [ lemma - perm ] , , as required .let for the convenience of the following discussion . as for the statement of , the permutation involves two nontrivial factors after five times of merging subcycles .note that one of the two factors has size at least four , and hence the factor contributes by equation ( [ e - lemma - perm-2 ] ) in lemma [ lemma - perm ] .therefore , by equation ( [ e - lemma - perm-1 ] ) in lemma [ lemma - perm ] , for some .( note that suggests that and are in different factors . 
) since , hence , as required .as for the statement of , by equation ( [ e - lemma - perm-1 ] ) , for some and .discuss all possible cases of pair as follows .consider one of is 2 or 3 .we assume that , and the other case is similar .hence , .since and there exists a factor with size at least three in this case , by equation ( [ e - lemma - perm-2 ] ) , as required .the remaining cases are , and .consider one of is 1 or 6 .we assume that , and the other case is similar .hence . since and there exists a factor with size at least four or two factors with size at least three in this case , by equation ( [ e - lemma - perm-2 ] ) , as required .last , consider , namely , and ( resp. , and ) are in different factors .hence , can not be matched with nor , i.e. , subcycle can not be merged with adjacent subcycles , . since merging with the smallest cost in this case , and the other two times of merging subcycles must induce cost more than 2 , hence is at least 9 . as for the two statements of , by equation ( [ e - de4proof - if ] ) , in the case when is one of the matchings in figure [ fig : fg4matching ] is exactly seven , as required .then we consider the case when is not in figure [ fig : fg4matching ] in the following . by equation ( [ e - lemma - perm-1 ] ) , for some and since it is necessary to merge -subcycles , which contributes at least 5 .it suffices to consider the cases when , which may violate our required .that is , may be 1 , 2 , 3 or 6 . by considering four possible cases of merging -subcycles , one may easily check that whatever is , must be either larger than 7 or in figure [ fig : fg4matching ] .
in a _ balloon drawing _ of a tree , all the children under the same parent are placed on the circumference of the circle centered at their parent , and the radius of the circle centered at each node along any path from the root reflects the number of descendants associated with the node . among various styles of tree drawings reported in the literature , the balloon drawing enjoys a desirable feature of displaying tree structures in a rather balanced fashion . for each internal node in a balloon drawing , the ray from the node to each of its children divides the wedge accommodating the subtree rooted at the child into two sub - wedges . depending on whether the two sub - wedge angles are required to be identical or not , a balloon drawing can further be divided into two types : _ even sub - wedge _ and _ uneven sub - wedge _ types . in the most general case , for any internal node in the tree there are two dimensions of freedom that affect the quality of a balloon drawing : ( 1 ) altering the order in which the children of the node appear in the drawing , and ( 2 ) for the subtree rooted at each child of the node , flipping the two sub - wedges of the subtree . in this paper , we give a comprehensive complexity analysis for optimizing balloon drawings of rooted trees with respect to _ angular resolution _ , _ aspect ratio _ and _ standard deviation of angles _ under various drawing cases depending on whether the tree is of even or uneven sub - wedge type and whether ( 1 ) and ( 2 ) above are allowed . it turns out that some are np - complete while others can be solved in polynomial time . we also derive approximation algorithms for those that are intractable in general . tree drawing , graph drawing , graph algorithms
there are numerous new programming languages, libraries, and techniques that have been developed over the past few years to either simplify the process of developing parallel programs or provide additional functionality that traditional parallel programming techniques (such as mpi or openmp) do not provide. these include programming extensions such as co-array fortran (caf) and upc, and new languages such as chapel, openmpd or xmp. however, these approaches have often not had a focus on optimising the parallelisation overheads (the cost of the communications) associated with distributed memory parallelisation, and have limited appeal for the many scientific applications which are already parallelised with an existing approach (primarily mpi) and have very large code bases which would be prohibitively expensive to re-implement in a new parallel programming language. the challenge of optimising parallel communications is becoming increasingly important as we approach exascale-type high performance computers (hpc), where it looks increasingly likely that the ratio between the computational power of a node and the relative performance of the network is going to make communications increasingly expensive when compared to the cost of calculations. furthermore, the rise of multi-core and many-core computing on the desktop, and the related drop in single-core performance, means that many more developers are going to need to exploit parallel programming to utilise the computational resources they have access to, where in the past they could have relied on increases in the serial performance of their hardware to maintain program performance. therefore, we have devised a new parallel programming approach, called managed data message passing (mdmp), which is based on the mpi library but provides a new method for parallelising programs. mdmp follows the directives-based approach favoured by openmp and other parallel programming techniques: directives are translated into mdmp library function calls or code snippets, which in turn use communication library calls (such as mpi) to provide the actual parallel communication functionality. using a directives-based approach enables us to reduce the complexity, and therefore the development cost, of writing parallel programs, especially for the novice hpc programmer. however, the novel aspect of mdmp is that it allows users to specify the communication patterns required in the program but devolves the responsibility for scheduling and carrying out the communications to the mdmp runtime. mdmp instruments data accesses for the data being communicated in order to optimise when communications happen, and therefore to better overlap communication and computation than is easily possible with traditional mpi programming. furthermore, by taking the directive approach, mdmp can be incrementally added to a program that is already parallelised with mpi, replacing or extending parts of the existing mpi parallelisation without requiring any changes to the rest of the code. users can start by replacing one part of the current communication in the code, evaluate the performance impact, and replace further communications as required.
in this paper we outline work others have undertaken in creating new parallel programming techniques and in optimising communications. we describe the basic issues we are looking to tackle with mdmp, and go on to describe the basic features and functionality of mdmp, outlining the performance benefits and costs of such an approach, and highlighting the scenarios where mdmp can provide reduced communication costs for the types of communication patterns seen in some scientific simulation codes, using our prototype implementation of mdmp (which implements mdmp as library calls rather than directives). recent evaluation of the common programming languages used in large scale parallel simulation codes has found that the majority are still implemented using mpi, with a minority also including a hybrid parallelisation through the addition of openmp (or, in a small number of cases, shmem) alongside the mpi functionality. this highlights to us the key requirement for any new parallel programming language or technique of being easily integrated with existing mpi-based parallel codes. there is a wide range of studies evaluating the performance of a range of different parallel languages, including partitioned global address space (pgas) languages, on different application domains and hardware. these show that there are many approaches that can provide performance improvements for parallel programs, compared to standard parallelisation techniques, on a given architecture or set of architectures. existing parallel programming languages or models for distributed memory systems provide various features to describe parallel programs and to execute them efficiently. for instance, xmp provides features similar to both caf and hpf, allowing users to use either global or local view programming models, and providing easy-to-program functionality through the use of compiler directives for parallel functionality. likewise, openmpd provided easy-to-program directives-based parallelisation for message passing functionality, extending an openmp-like approach to a distributed memory supercomputer. however, both of these approaches generally require the re-writing of existing codes, or parts of existing codes, into a new language, which we argue is prohibitively expensive for most existing computational simulation applications and has therefore limited the take-up of these different parallel programming languages or techniques by end user applications. furthermore, both only target parts of the problem we are aiming to tackle, namely improving programmability and optimising performance. xmp and openmpd both aim to make parallel programming simpler, but have no direct features for optimising communications in the program (although they can enable users to implement different communication methods and therefore choose the most efficient method for themselves). pgas languages, and other new languages, may provide lower cost communications or new models of communication to enable different algorithms to be used for a given problem, or may provide a simpler programming model, but none seems to offer both as a solution for parallel programming. also, crucially, they are not designed to work with existing mpi programs, negating the proposed benefits for the largest part of current hpc usage.
there has also been significant work undertaken looking at optimising communications in mpi programs. a number of authors have looked at compiler-based optimisations to provide automatic overlapping of communications and computation in existing parallel programs. these approaches have shown that performance improvements can be obtained, generally evaluated against kernel benchmarks such as the nas parallel benchmarks, by transforming user-specified blocking communication code to non-blocking communication functionality, and using static compiler analysis to determine where the communications can be started and finished. furthermore, other authors have looked at communication patterns or models in mpi-based parallel programs and suggested code transformations that could be undertaken to improve communication and computation overlap. however, these approaches are what we would class as _coarse-grained_ communication optimisation. they use only static compiler analysis to identify the communication patterns, and identify the outer bounds of where communications can occur to try and start and finish bulk non-blocking operations in the optimal places. they do not address the fundamental separation of communication and computation into different phases that such codes generally employ. our work, outlined in this paper, is looking at _fine-grained_ communication optimisations, where individual communication calls are _intermingled_ with computation to truly mix communication and computation. there has also been work on both offline and runtime identification and optimisation of mpi communications, primarily for collective communication, and other auto-tuning techniques such as optimising mpi library variables or individual library routines. all these approaches have informed the way we have constructed mdmp. we believe that the work we have undertaken is unique, as it brings together attempts to provide simple message passing programming with fine-grained communication optimisation, along with the potential for runtime auto-tuning of communication patterns, in a single parallel programming tool. whilst there is a very wide range of communication and computational patterns in parallel programs, a large proportion of common parallel applications use regular domain decomposition techniques coupled with _halo_ communications to exploit parallel resources. as shown in figure [fig:commpattern], which is a representation of a jacobi-style stencil based simulation method, many simulations undertake a set of calculations that iterate over an n-dimensional array, with a set of communications to neighbouring processes every iteration of the simulation. the core computational kernel of a simple jacobi-style simulation, as illustrated in the previous paragraph, can be implemented as shown in figure [code:original] (undertaking a 2d simulation). ....
for ( iter=1;iter<=maxiter ; iter++ ) { mpi_irecv(&old[0][1 ] , np , mpi_float , prev , 1 , mpi_comm_world , & requests[0 ] ) ; mpi_irecv(&old[mp+1][1 ] , np , mpi_float , next , 2 , mpi_comm_world , & requests[1 ] ) ; mpi_isend(&old[mp][1 ] , np , mpi_float , next , 1 , mpi_comm_world , & requests[2 ] ) ; mpi_isend(&old[1][1 ] , np , mpi_float , prev , 2 , mpi_comm_world , & requests[3 ] ) ; mpi_waitall(4 , requests , statuses ) ; for ( i=1;i < mp+1;i++ ) { for ( j=1;j < np+1;j++ ) { new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1 ] - edge[i][j ] ) ; } } for ( i=1;i < mp+1;i++ ) { for ( j=1;j < np+1;j++ ) { old[i][j]=new[i][j ] ; } } } .... [ code : original ] it is evident from the above code that , whilst it has been optimised to use non - blocking communications , the communication and computation parts of the simulation are performed separately , with no opportunity to overlap communications and computations . in practicethis means that the application will only be using the communication network to send and receive data in short bursts , leaving it idle whilst computation is being performed .many large scale hpc resources are used by a large number of running applications at any one time , which may help to ensure that the overall usage of the interconnect is high , even though individual applications often utilise it in a _ burstyhowever , that still will not be true of the part of the network dedicated to the individual application , only to the load of the network overall .furthermore , when considering the very largest hpc resources in the world , and including the proposed exascale resources , there are often only a handful of applications utilising the resource at any one time .therefore , enabling applications to effectively utilise the network , especially the _ spare _ resources that the current separated communication and computation patterns engender , is likely to be beneficial to overall application performance and resource utilisation ( provided that the cost of doing this is not significant ) .it is possible to split the sends and receives in the previous example and place them around the computation rather than just before the computation , using the non - blocking functionality , to further ensure that more optimal communications are occurring .however , this is still does not allow for overlapping communication and computation because the computational is still occurring in a single block , with communications outside this block .for developers to ensure that communications and computations are truly mixed would require further code modifications , as shown in the code example in figure [ code : optmpi ] ( which implements a strategy of sending data as soon as it has been computed ) . .... 
for ( iter=1;iter<=maxiter ; iter++ ) { requestnum = 0 ; for ( j=0;j < np;j++ ) { mpi_irecv(&tempprev[j ] , 1 , mpi_float , prev , 1 , mpi_comm_world , & requests[requestnum ] ) ; requestnum++ ; mpi_irecv(&tempnext[j ] , 1 , mpi_float , next , 2 , mpi_comm_world , & requests[requestnum ] ) ; requestnum++ ; } for ( i=1;i <mp+1;i++ ) { for ( j=1;j <np+1;j++ ) { new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1 ] - edge[i][j ] ) ; if(i = = mp ) { mpi_isend(&new[i][j ] , 1 , mpi_float , next , 1 , mpi_comm_world , & requests[requestnum ] ) ; requestnum++ ; } else if(i = = 1 ) { mpi_isend(&new[i][j ] , 1 , mpi_float , prev , 2 , mpi_comm_world , & requests[requestnum ] ) ; requestnum++ ; } } } for ( i=1;i <mp+1;i++ ) { for ( j=1;j < np+1;j++ ) { old[i][j]=new[i][j ] ; } } mpi_waitall(requestnum , requests , statuses ) ; if(prev ! = mpi_proc_null ) { for ( j=1;j < np+1;j++ ) { old[0][j ] = tempprev[j-1 ] ; old[mp+1][j ] = tempnext[j-1 ] ; } } if(next != mpi_proc_null ) { for ( j=1;j < np+1;j++ ) { old[mp+1][j ] = tempnext[j-1 ] ; } } } .... [ code : optmpi ] whilst the code implemented in figure [ code : optmpi ] will enable the mixing of communication and computation , ensuring that data is sent as soon as it is ready to be communicated and potentially ensuring better utilisation of the communication network , it has come at the cost of considerable code _ mutilation _, requiring developers to undertake significant code optimisations . as well as the damage to the readability and maintainability of the code that this causes , it also means that a code has been significantly changed for a potentially architecturally dependent optimisation , i.e. an optimisation that may be beneficial on one or more current hpc systems but may not be beneficial on other or future hpc systems .we are proposing mdmp as a mechanism for implementing such optimisations without the requirement to significantly change users codes , or the need to tailor codes to a specific platform , as the mdmp functionality can implement communications in the most optimal form for the application and hardware currently being used .to re - iterate the challenges for mdmp that we have previously discussed , mdmp is designed to address the following issues : * work with existing mpi based codes * provide framework for optimisation communications * simplify parallel development mdmp uses a directives based approach , relying on the compiler to implement the actual message passing functionality based on the users instructions .compiler directives are used , primarily , to address the third point above , namely ease of use .we provide functionality that can be easily enabled and disabled in an application , hides some of the complexities of current mpi programming ( such as providing message tags , error variables , communicators , etc ... 
) that often complicate development for new users of mpi , and also provides some flexibility to the user over the type and level of message optimisation used .the mdmp directives are translated into code snippets and library calls by the mdmp - enabled compiler , either directly in the equivalent non - blocking mpi calls ( which simply mimics the communication that would have been implemented directly by the user ) or to further optimised mpi communications or other another communication library as appropriate on the particular hardware being used .this enables mdmp to target different communication libraries transparently to the developer for a given hpc system .also , crucially the ability to target mpi communications means that mdmp functionality can be added to existing mpi - parallelised programs , either as additional functionality or to replace existing mpi functionality , without requiring the program to be completely changed into a new programming language or utilise a new message - passing ( or other ) communication library . however , simply using directives for programming message passing will not optimise the communication that are undertaken by a program .therefore , mdmp provides not only directives to specify the communications to be undertaken in the program but also directives to specify _communication regions_. communication regions define the areas of code where the data that is to be sent and received is worked on , and where communications occur , so that mdmp can , at runtime , examine the data access patterns and undertake communications at the optimal time to intermingle communications and computations and therefore better utilise the communication network .the optimisation of communications is based on runtime functionality that monitors the reads and writes of data that has been specified as communication data ( data that will be sent or received ) . as any data monitoring entails some runtime overheads the communication region specifies the scope of the data monitoring to ensure itis only performed where required ( i.e. where communications are occurring ) .any data that is specified by the users and being involved in send or receives is tracked so each read and writing in a communication region is recorded and the number of reads and writes that have occurred when the send or receive happens is evaluated .this data , the number of reads and writes that have occurred for a particular piece of data when it comes to be sent or written over by a receive , can then be used in any subsequent iterations of the computation to launch the communication of that data once it is ready to be communicated .communications are triggered for any given piece of data as follows : * last write occurs ( sends ) * last read and/or write occurs ( receives ) using this functionality we can implement a communication pattern that intermingles communication and computation for the example code shown in figure [ code : original ] , as shown in figure [ code : mdmp ] . .... # pragma commregion for ( iter=1;iter<=maxiter ; iter++ ) { # pragma recv(old[0][0 ] , np , prev ) # pragma recv(old[mp+1][1 ] , np , next ) # pragma send(old[mp][1 ] , np , next ) # pragma send(old[1][1 ] , np , prev ) for ( i=1;i < mp+1;i++ ) { for ( j=1;j < np+1;j++ ) { new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1 ] - edge[i][j ] ) ; } } for ( i=1;i < mp+1;i++ ) { for ( j=1;j < np+1;j++ ) { old[i][j]=new[i][j ] ; } } } # pragma commregionfinished .... 
[ code : mdmp ] when compiled with an mdmp - enabled compiled , the code in figure [ code : mdmp ] will be processed by the compiler and non - blocking sends and receives inserted where the ` send ` and ` recv ` directives are placed .the compile then looks through the code associated with the communicating region ( between ` commregion ` and ` commregionfinished ` ) and replaces any variable reads or writes linked to those sends and receives by mdmp code which will perform the reads and writes and also record those reads and writes occurring. compiler based code analysis for data accesses will be straightforward for many applications , however we recognise that there will be a number of scenarios , such as when pointers are heavily used in c or fortran , or possibly where pre - processing or function pointers or conditional function calls are used , where it will not be possible for the compiler to access where the data accesses for a particular ` send ` or ` recv ` occur . in that situation mdmp will revert to simply inserting the basic mpi function calls required to undertake the specified communication and not perform the optimise message functionality . whilst this negates the possibility of optimising the communications , it will not add any overheads to the program compared to the standard mpi performance a developer would experience , and it does still leave scope for the mdmp functionality to target communication libraries other than mpi to enable optimisation for users would requiring them to modify their code , if such functionality is available .furthermore , whilst we are not investigating such functionality at the moment , the design of mdmp means that it can also undertake auto - tuning or other runtime activities to optimise communication performance for users beyond the intermingling communication optimisations we have already discussed . for instance , mdmp could implement additional helper threads that enable progression of communications whilst the main program is undertaking calculations , albeit at the cost of utilising a computational core for that purpose .it could also evaluate different communication optimisations at runtime to auto - tune the performance of the program whilst it is running . a difference between the mpi functionality that a developer would add to a code like the one we have been considering and the functionality that mdmp implements is that where intermingling of communications is undertaken mdmp will be sending lots of single element ( or small numbers of elements ) messages between processes rather than a single message with all the data in it . 
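to make the runtime mechanism described above more concrete, the following is a minimal sketch of the kind of bookkeeping an mdmp-enabled compiler could insert for a tracked send buffer; it is an illustration written for this discussion, not the actual mdmp implementation, and the structure and function names (`mdmp_tracked`, `mdmp_write`, `mdmp_end_iteration`) are assumptions. every instrumented write increments a per-element counter, and once the counter reaches the write count recorded on the first pass, a non-blocking send of that element is issued. the first pass of the real scheme would fall back to a single bulk non-blocking send while the counts are being learned; that fallback, and the receive side, are omitted here for brevity. ....
/* Hypothetical sketch of MDMP-style write tracking; names and layout are
 * illustrative, not the real MDMP API.  Each tracked element has a write
 * counter; once the number of writes recorded on the first ("learning")
 * pass is reached again, the element is communicated immediately with a
 * non-blocking send, intermingling communication with the computation.  */
#include <mpi.h>

typedef struct {
    float       *data;        /* base address of the tracked send buffer  */
    int         *writes;      /* writes observed so far, one per element  */
    int         *ready_count; /* writes after which an element is ready   */
    int          n;           /* number of tracked elements               */
    int          dest, tag;   /* destination rank and message tag         */
    MPI_Request *reqs;        /* one outstanding request per element      */
    int          learning;    /* 1 during the first pass, 0 afterwards    */
} mdmp_tracked;

/* Inserted by the compiler in place of every instrumented write to i. */
static void mdmp_write(mdmp_tracked *t, int i, float value)
{
    t->data[i] = value;
    t->writes[i]++;
    if (t->learning) {
        /* First pass: just learn how many writes this element receives. */
        t->ready_count[i] = t->writes[i];
    } else if (t->writes[i] == t->ready_count[i]) {
        /* Last write seen: this element is ready, so send it now. */
        MPI_Isend(&t->data[i], 1, MPI_FLOAT, t->dest, t->tag,
                  MPI_COMM_WORLD, &t->reqs[i]);
    }
}

/* Inserted at the end of each iteration of the communicating region. */
static void mdmp_end_iteration(mdmp_tracked *t)
{
    int i;
    if (!t->learning)
        MPI_Waitall(t->n, t->reqs, MPI_STATUSES_IGNORE);
    for (i = 0; i < t->n; i++)
        t->writes[i] = 0;
    t->learning = 0;
}
....
note that, as the sketch makes explicit, such a scheme issues one small message per ready element rather than one large message per iteration, which is the trade-off discussed next.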
in general, mpi performs best when small numbers of large messages are used, rather than large numbers of small messages. this is because, when large numbers of small messages are sent, the communication costs are dominated by the latency cost of each message, whereas using small numbers of large messages reduces the overall number of message latencies incurred. we recognise the fact that the mdmp functionality may not be optimal in terms of the overall message costs associated with communications, but we are assuming that this penalty will be negated by the benefits associated with more consistent use of the network and the less concentrated nature of the communication and computation patterns. however, this is something that is investigated in our performance analysis of mdmp and, as with other previously discussed potential problems with mdmp, if it is impacting performance the optimised message passing can be disabled at compile time. we also recognise that mdmp functionality does not come without a cost to the performance of the program. mdmp adds additional computational requirements above those specified in the user program, and also requires additional memory to store data associated with the communications (such as the counters that record the reads and writes to variables). the premise behind the optimised message passing functionality we are aiming for is that communications are much more expensive than computations for an application on a modern hpc machine, and this relationship is likely to get worse for future hpc machines. if this is the case then adding additional computational requirements can be acceptable, provided the communication costs are reduced through this addition of extra computation. we have evaluated the performance impact of mdmp, and the communication versus computation trade-off on current hpc architectures where mdmp becomes beneficial, through benchmarking of our software, described in the next section. however, we are still working on minimising the memory requirements of mdmp, as this will be important to ensure mdmp is usable on current and future hpc systems. furthermore, we should remember that mdmp can be used as a simpler programming alternative to mpi with the optimised message passing functionality turned off at compile time, thereby removing all of these overheads if they are not beneficial for a given application or hpc platform. we evaluated the performance of mdmp compared to standard c and mpi codes. we undertook our evaluation using a range of common large scale hpc platforms and a set of simple, _kernel_-style, benchmarks. we have only evaluated the functionality using 2 nodes on each system, primarily testing the communications between a pair of communicating processes, one on each node. we used three different large scale hpc machines to benchmark performance. the first was a *cray xe6*, hector, the uk national supercomputing service, which consists of 2816 nodes, each containing two 16-core 2.3 ghz _interlagos_ amd opteron processors, giving a total of 32 cores per node, with 1 gb of memory per core. this configuration provides a machine with 90,112 cores in total, 90 tb of main memory, and a peak performance of over 800 tflop/s. we used the pgi fortran compiler on hector, compiling with the `-fastsse` optimisation flag. the second was a *bullx b510*, helios, which is based on intel xeon processors. a node contains 2 intel xeon e5-2680 2.7 ghz processors, giving 16 cores and 64 gb of memory per node. helios is composed of 4410 nodes, providing a total of 70,560 cores and a peak performance of over 1.2 pflop/s. the network is built using infiniband qdr non-blocking technology and is arranged using a fat-tree topology. we used the intel fortran compiler on helios, compiling with the `-o2` optimisation flag. the final resource was a *bluegene/q*, juqueen, at forschungszentrum juelich. juqueen is an ibm bluegene/q system based on the ibm power architecture. there are 28 racks composed of 28,672 nodes, giving a total of 458,752 compute cores and a peak performance of 5.9 pflop/s. each node has an ibm powerpc a2 processor running at 1.6 ghz and containing 16 smt cores, each capable of running 4 threads, and 16 gb of sdram-ddr3 memory. ibm's fortran compiler, xlf90, was used on juqueen, compiling with the `-o2` optimisation flag. we have been evaluating mdmp functionality using a number of different benchmarks. initially we tested the performance impact of instrumenting data reads and writes on a non-communicating code, the streams benchmark, with the results presented in the first subsection below. after this we evaluated the communication performance of mdmp versus communication implemented directly with mpi; the results of these evaluations are in the second subsection below. each of the streams benchmark tests was repeated 10 times and an average runtime calculated. for the communication benchmarks each operation was run 100 times, and each benchmark was repeated 3 times with the average time taken. whilst we have, in previous sections of this paper, outlined the principles of mdmp and how it is designed to work, we do not yet have a full compiler-based implementation of this functionality. we have designed and implemented the runtime functionality that a compiler would add to a code when encountering mdmp pragmas, but have not yet implemented the compiler functionality. therefore, for this performance evaluation we are using benchmarks where the mdmp functionality has been implemented directly in the benchmark. we have implemented two versions of mdmp; the first version implements all the required functionality within function calls to the mdmp library. this includes data stores and lookups for all the data marked as being communicated within an mdmp communicating region. the second version implements exactly the same functionality but uses pre-processor macros to insert the required code directly into the source code, thereby removing the need for function calls at every point mdmp is used.
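as a rough illustration of why the macro-based variant is cheaper, the sketch below inlines the counter update at the point of access instead of paying a function call per tracked read or write; the macro and helper names, and the global counter arrays, are assumptions made for this example rather than the actual optimised mdmp code. ....
/* Hypothetical contrast between the two MDMP prototype variants.
 * Library variant: every tracked write costs a function call.
 * Macro variant: the same bookkeeping is pasted in at the access site
 * by the pre-processor, removing the per-access call overhead.          */
extern float mdmp_buf[];     /* the tracked communication buffer         */
extern int   mdmp_writes[];  /* per-element write counters               */
extern int   mdmp_ready[];   /* learned "ready" write counts             */

void mdmp_write_fn(int i, float v);   /* library variant: one call per write */
void mdmp_send_element(int i);        /* issues the non-blocking send        */

/* Macro variant: inlined at every tracked write. */
#define MDMP_WRITE(i, v)                                     \
    do {                                                     \
        mdmp_buf[(i)] = (v);                                 \
        if (++mdmp_writes[(i)] == mdmp_ready[(i)])           \
            mdmp_send_element(i);                            \
    } while (0)
....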
in the benchmark results thisis named _ optimised mdmp _work is currently ongoing to implement a compiler based solution , utilising the llvm compiler infrastructure , to enable us to target all the main hpc computer languages with a single , full reference implementation .streams is often used to evaluate the memory bandwidth of computer hardware , and therefore was chosen as it will highlight any impact on the memory access and update efficiencies of computations when mdmp is added to a code .the performance of the streams benchmark was evaluated on all three of the hardware platforms we had available to us , although we are only presenting the results in the following tables from the cray xe6 because , although the performance of the benchmark varies between machines , the relative performance difference between the original implementation of streams and our mdmp implementations does not change significantly between these platforms .we are reporting the results from a single process running on an otherwise empty node on the cray xe6 .table [ tab : mdmpstreambenchcommregion ] ( where int stands for integer and db stands for double ) outlines the performance of the two versions of mdmp verses the original streams code when the fully communicating functionality of mdmp is enabled and we have forced the mdmp library to treat each variable in the arrays being processed as if they were being communicated ( i.e. we are fully tracking all the reads and writes to these array entries even though no communications are occurring ) . [tab : mdmpstreambenchcommregion ] we can see from table [ tab : mdmpstreambenchcommregion ] that the mdmp functionality does have a significant impact on the overall runtime of all the benchmarks , adding around an order of magnitude increase to the runtime of the benchmark .we would expect any benchmark like this , where no communications are involved and therefore there is no optimisation for mdmp to perform , to be detrimentally impacted by the additional functionality added in the mdmp implementation .however , we can see that the optimised implementation of mdmp does not have as significant an impact as the original mdmp implementation . asthis is the first optimisation we have done to the mdmp functionality we are hopeful there is further scope for optimising the performance of mdmp and reducing the computational impact of the mdmp functionality .furthermore , it is worth re - iterating that in this benchmark we are forcing mdmp to mark and track all the data in the streams benchmark as if it is to be communicated .mdmp is not designed to be beneficial for scenarios such as this , it is primarily designed to be useful in scenarios where a small amount of data ( compared to the overall amount computed upon ) is sent each iteration . 
in a scenario such as the one this benchmark mimics ( where all the data used in the computation is also communicated ) mdmp should simply compiled down to the basic mpi calls as they would be more efficient in this scenario .mdmp is designed to enable users to try different communication strategies , such as simply using the plain mpi calls , or trying to very the amount communication and computation intermingling , which enables users to experiment and evaluate which will give them the best performance for their application and use case .indeed , such functionality could also be built into the runtime of mdmp , enabling auto - tuning of the choice of communication optimisation on the fly .we also ran the same benchmark with the code marked as outside a communicating region . in this scenario , whilst the mdmp functionality has be enabled for all the variables in the calculation , the absence of a communicating region disables , at runtime , any data tracking associated with the variables .table [ tab : mdmpstreambenchnocommregion ] presents the results for this benchmark .we can see that the cost of the mdmp functionality has been substantially reduce , and indeed if we used the optimised functionality where the mdmp function calls have been removed and replaced with pre - processed code the mdmp performance is extremely close to the plain benchmark codes performance .this confirms that the mdmp functionality can be constructed in such a way as not to have a significant adverse impact on the key computational kernels of a code outside the places that communications are occurring .[ tab : mdmpstreambenchnocommregion ] however , we can see from the results that if communications are present there is a significant performance impact on the data that is tracked by the mdmp functionality .our assumption is that the computational cost associated by mdmp can be more than offset by the reduction in communication costs for a program , but clearly this is dependent on the ratio between communications and computations for a given kernel , and the ratio of relative costs ( in terms of overall runtime ) of a communication verses a computation .we evaluate the performance impact verses the communication cost savings in the next subsection , where we analyse some communication benchmarks . we have constructed four simple benchmarks to evaluate mdmp against mpi .the first is a * pingpong * benchmark where a process sends a message to another process who copies the received data from the receive buffer into it s send buffer and sends it back to the first process , who performs the same copying process and sends it back again .this pattern is repeated many times and the time for the communications are recorded .the benchmark can send a range of message sizes . for the referencempi benchmark only a single message is sent each iteration of the benchmark containing the fully amount of data to be sent . 
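for reference, a minimal sketch of the plain mpi pingpong kernel used as the baseline might look as follows; the message size, tag, repetition count and the use of blocking sends and receives are illustrative choices, and the timing code of the real benchmark is omitted. ....
/* Minimal MPI pingpong sketch between ranks 0 and 1.  Sizes, tags and
 * repetition counts are illustrative; the real benchmark also times the
 * exchanges.  After every exchange the received data is copied back into
 * the send buffer, as described in the text.                             */
#include <mpi.h>
#include <string.h>

#define N     1000      /* elements per message (varied in the benchmark) */
#define REPS  100       /* pingpong repetitions                           */

int main(int argc, char **argv)
{
    float sendbuf[N], recvbuf[N];
    int rank, other;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                    /* assume exactly two ranks */
    memset(sendbuf, 0, sizeof(sendbuf));

    for (int iter = 0; iter < REPS; iter++) {
        if (rank == 0) {
            MPI_Send(sendbuf, N, MPI_FLOAT, other, 0, MPI_COMM_WORLD);
            MPI_Recv(recvbuf, N, MPI_FLOAT, other, 0, MPI_COMM_WORLD, &status);
        } else {
            MPI_Recv(recvbuf, N, MPI_FLOAT, other, 0, MPI_COMM_WORLD, &status);
            MPI_Send(sendbuf, N, MPI_FLOAT, other, 0, MPI_COMM_WORLD);
        }
        /* copy received data into the send buffer for the next exchange */
        memcpy(sendbuf, recvbuf, sizeof(sendbuf));
    }

    MPI_Finalize();
    return 0;
}
....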
for the mdmp versionthe ` send ` and ` recv ` functionality specifies the single message to be sent and received , and performs the send and receive on the first iteration of the benchmark but on subsequent iterations of the benchmark the mdmp functionality identifies when each element of the message data is ready to be sent ( through tracking the data copying process between the send and receive buffers ) and sends individual elements when they are ready to go .this will mean that for a run of the benchmark using a message of 1000 elements in size the mpi version will send one message between processes whereas the mdmp version will send 1000 message ( apart from on the first iteration where it will only send one message ) .the second benchmark , called * selectivepingpong * alters the basic pingpong benchmark we have already described by performing the same functionality but only sending a portion of the overall data owned by a process in the messages .it is possible to vary both the overall size of data each process has , and the amount of that data that is sent , for instance you could have each process having an array that is 100 elements long but only the first 10 and last 10 elements are sent in the pingpong messages .this benchmark is designed to investigate the performance impact of varying the overall data in a computation and the amount that is being communicated via mdmp .the third benchmark , called * delaypingpong * , also alters the basic pingpong benchmark by adding a delay in the loop that copies the data from the receive buffer to the send buffer .this delay is variable and is designed to simulate some level of computational work being undertaken during what would be the main computational loop for a computational kernel using mdmp .the delay is performed by a routine which iterates through a loop adding an integer to a double a specified number of times ( delay elements ) .the final benchmark , * selectivedelaypingpong * , combines the second and third benchmarks meaning the pingpong process can contain both user defined delay in the data copy loop and a selective amount of data to be transferred .figure [ fig : combinedpingpong ] demonstrates the cost of mdmp compared to plain mpi where there is no scope for communication and computational overlaps .the runtime for mdmp increases more or less linearly as the size of the data to be transferred increases , whereas the runtime for mpi stays relatively constant. however , if we examine figure [ fig : combinedpingpongdelay ] we can see that mdmp begins to see some benefits over mpi when the delay added to the data copy routine is increased .the juqueen and hector mdmp is faster than mpi when the delay elements are around 1000 and 800 elements respectively , although for helios mpi is always faster than mdmp ( albeit with a smaller gap in performance between the two methods ) .if not all the data that is copied between buffers is sent , as in the case of the selectivepingpong benchmark shown in figure [ fig : combinedpingpongselective1024 ] , then in comparison to the normal pingpong benchmark the overall difference in performance is reduced between mpi and mdmp although mdmp is still more costly than mpi . finally , the combined benchmark , results shown in [ fig : combinedpingpongselective1024 ] where 1024 overall data elements are processed and either 1 or 32 elements are sent with variable amounts of delays , highlight where mdmp can improve performance . 
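the delay routine used by the delaypingpong and selectivedelaypingpong benchmarks is straightforward to reproduce; a sketch consistent with the description above (repeatedly adding an integer to a double) is given below, with the function name chosen for illustration. ....
/* Sketch of the delaypingpong delay routine: adds an integer to a double
 * "delay_elements" times to emulate computational work between accesses.
 * The volatile qualifier keeps the compiler from optimising the loop away. */
static double mdmp_delay(int delay_elements)
{
    volatile double acc = 0.0;
    for (int i = 0; i < delay_elements; i++)
        acc += 1;            /* integer added to a double */
    return acc;
}
....
in the benchmark this routine would be called once per copied array element, so a setting of 16 delay elements corresponds to 16 floating-point additions between the accesses of successive elements.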
when only one element is being sent then all it requiresis 16 floating point adds between communications ( 16 delay elements ) delay elements as there is a delay per array element ] to enable mdmp to optimise communications .if 32 elements are being sent then around 32 floating point adds are required to enabling the communication hiding that mdmp enables to provide a performance benefit .whilst these benchmarks are beneficial in enabling us to evaluate mdmp performance we recognise that a more realistic benchmark that evaluates mdmp performance against real kernel computations would also be useful as it would enable us to evaluate the overall impact of mdmp on cache , memory , and processor usage for real applications .we are in the process of undertaking such benchmarks at the moment but unfortunately do not have these results in time for this paper submission .we have outlined a novel approach of message passing programming on distributed memory hpc architectures and demonstrated that , given a reasonable level of computations to the communications to be performed , mdmp can reduce the overall cost of communications and improve application performance .we are aware the mdmp presents performance risks for parallel programs , including impacting cache and memory usage , and consuming additional memory .however , we belief the ability to enable and disable mdmp optimisations , and the potential benefits to ease of use and programmability from mdmp , make this approach a sensible one to investigate future message passing programming .we are currently working on a full compiler implementation of mdmp , including a formal mdmp language definition , and more involved benchmarks to evaluate mdmp in much more detail .part of this work was supported by an e - science grant from chalmers university .f. blagojevi , p. hargrove , c. iancu , and k. yelick .hybrid pgas runtime support for multicore nodes . in _ proceedings of the fourth conference on partitioned global address space programming model _ , pgas 10 , pages 3:13:10 , new york , ny , usa , 2010 .acm .a. faraj , x. yuan , and d. lowenthal .star - mpi : self tuned adaptive routines for mpi collective operations . in _ proceedings of the 20th annual international conference on supercomputing _ , ics 06 , pages 199208 , new york , ny , usa , 2006 .l. fishgold , a. danalis , l. pollock , and m. swany .an automated approach to improve communication - computation overlap in clusters . in _ proceedings of the 20th international conference on parallel anddistributed processing _ , ipdps06 , pages 290290 , washington , dc , usa , 2006 .ieee computer society .t. hoefler and t. schneider . runtime detection and optimization of collective communication patterns . in _ proceedings of the 21st international conference on parallel architectures and compilation techniques _ , pact 12 , pages 263272 , new york , ny , usa , 2012 .acm . c. hu , y. shao , j. wang , and j. li .automatic transformation for overlapping communication and computation .in j. cao , m. li , m .- y .wu , and j. chen , editors , _ network and parallel computing _ ,volume 5245 of _ lecture notes in computer science _ , pages 210220 .springer berlin heidelberg , 2008 . c. iancu , w. chen , and k. yelick .performance portable optimizations for loops containing communication operations . in _ proceedings of the 22nd annual international conference on supercomputing_ , ics 08 , pages 266276 , new york , ny , usa , 2008 .h. jin , r. hood , and p. mehrotra . 
a practical study of upc using the nas parallel benchmarks . in _ proceedings of the third conference on partitioned global address space programing models, pgas 09 , pages 8:18:7 , new york , ny , usa , 2009 .a. knapfer , d. kranzlmaller , and w. nagel .detection of collective mpi operation patterns . in d.kranzlmaller , p. kacsuk , and j. dongarra , editors , _ recent advances in parallel virtual machine and message passing interface _ ,volume 3241 of _ lecture notes in computer science _ , pages 259267 .springer berlin heidelberg , 2004 . c. lattner and v. adve .llvm : a compilation framework for lifelong program analysis & transformation . in _ proceedings of the international symposium on code generation and optimization : feedback - directed and runtime optimization _ , cgo 04 , pages 75 , washington , dc , usa , 2004 .ieee computer society .j. lee , m. sato , and t. boku .openmpd : a directive - based data parallel language extension for distributed memory systems . in _ parallel processing - workshops , 2008 . icpp - w 08 .international conference on _ , pages 121128 , 2008 .y. li , j. dongarra , and s. tomov . a note on auto - tuning gemm for gpus . in _ proceedings of the 9th international conference on computational science: part i _ , iccs 09 , pages 884892 , berlin , heidelberg , 2009 .springer - verlag .a. mller and r. rhl .extending high performance fortran for the support of unstructured computations . in _ proceedings of the 9th international conference on supercomputing_ , ics 95 , pages 127136 , new york , ny , usa , 1995 .acm .s. pellegrini , t. fahringer , h. jordan , and h. moritsch .automatic tuning of mpi runtime parameter settings by using machine learning . in _ proceedings of the 7th acm international conference on computing frontiers _, cf 10 , pages 115116 , new york , ny , usa , 2010 .h. shan , f. blagojevi , s .- j .min , p. hargrove , h. jin , k. fuerlinger , a. koniges , and n. j. wright . a programming model performance study using the nas parallel benchmarks ., 18(3 - 4):153167 , aug . 2010 .t. wen , j. su , p. colella , k. yelick , and n. keen .an adaptive mesh refinement benchmark for modern parallel programming languages . in _ proceedings of the 2007 acm / ieee conference on supercomputing_ , sc 07 , pages 40:140:12 , new york , ny , usa , 2007 .
mdmp is a new parallel programming approach that aims to provide users with an easy way to add parallelism to programs, to optimise the message passing costs of traditional scientific simulation algorithms, and to enable existing mpi-based parallel programs to be optimised and extended without requiring the whole code to be re-written from scratch. mdmp uses a directives-based approach that lets users specify what communications should take place in the code, and then implements those communications for the user in an optimal manner, using both the information provided by the user and data collected by instrumenting the code and gathering information on the data to be communicated. in this paper we present the basic concepts and functionality of mdmp and discuss the performance that can be achieved using our prototype implementation of mdmp on some simple benchmark cases.
the identification and control of nonlinear system have been widely studied in recent years . before the design of a controller , it is necessary to achieve system identification . in general, the process of system identification can be decomposed into two steps : the selection of an appropriate identification model ( system structure ) and an estimation of the model s parameters , of which , the parameter estimation plays a relatively more important role since a specific class of models that can best describe the real system can usually be derived by mechanism analysis of industrial processes .+ as for techniques of parameter estimation , approaches such as least - squares method , instrumental variable method , correlative function method , and maximum - likelihood method are widely used . especially for the least - squares method , it has been successfully utilized to identify the parameters in static and dynamic systems .however , most of these techniques have some fundamental issues , including their dependence on unrealistic assumptions such as unimodal performance and differentiability of the performance function , and they are easily getting trapped into local optimum , because these methods are in essence local search techniques based on gradient . for example , the least - squares method is only suitable for the model structure possessing some linear property .once the model structure exhibits nonlinear performance , this approach often fails in finding a global optimum and becomes ineffective .+ fortunately , the modern intelligent optimization algorithms , such as genetic algorithm ( ga ) , particle swarm optimization ( pso ) , are global search techniques based not on gradient , and they have been successfully applied in various optimization problems even with multimodal property . as a matter of fact , some intelligent optimization algorithms have been utilized in the field of nonlinear system identification and control . in , estimation of bar parameters with binary - coded genetic algorithm was studied , and it was verified that the gas can produce better results than most deterministic methods . genetic algorithm based parameter identification of a hysteretic brushless exciter modelwas proposed in . in ,real - coded genetic algorithms were applied for nonlinear system identification and controller tuning , and the simulation examples demonstrated the effectiveness of the ga based approaches .then , in , parameter estimation and control of nonlinear system based on adaptive particle swarm optimization were presented , and examples confirmed the validity of the method . further more , in , identification of jiles - atherton model parameters using particle swarm optimization , and in , parameters identification for pem fuel - cell mechanism model based on effective informed adaptive particle swarm optimization were put forwarded subsequently .all of these indicate that intelligent optimization techniques are alternatives for traditional methods including gradient descent , quasi - newton , and nelde - mead s simplex methods .+ although ga and pso are alternative approaches for the problem , they always encounter premature convergence and their convergence rates are not so satisfactory when dealing with some complex or multimodal functions .state transition algorithm ( sta ) is a novel optimization method based on the concept of state and state transition recently , which originates from the thought of state space representation and space transformation . 
in sta , four special transformation operators are designed , and they represent different search functions in space , which makes sta easy to understand and convenient to implement . for continuous function optimization problems , sta has exhibited comparable search ability compared with other intelligent optimization algorithms .+ in this paper , the sta is firstly introduced to identify the optimal parameters of nonlinear system .then , we will discuss the off - line pid controller design by adopting sta according to the estimated model .the pid control is popular due to its ease of use , good stability and simple realization .the key issue for pid controller design is the accurate and efficient tuning of pid control gains : proportional gain , integral gain and derivative gain . for adjusting pid controller parameters efficiently ,many methods were proposed .the ziegler - nichols method is an experimental one that is widely used ; however , this method needs certain prior knowledge on a plant model . once tuning the controller by ziegler - nichols method , a good but not optimum system response will be gained . on the other hand , many artificial intelligence techniques such as neural networks , fuzzy systems and neural - fuzzy logic have been widely applied to the appropriate tuning of pid controller gains . besides these methods , modern intelligent optimization algorithms , such as ga and pso , have also received much attention , and they are used to find the optimal parameters of pid controller .+ the goal of this paper is to introduce a novel method sta for both parameter estimation and control of nonlinear systems . in order to evaluate the performance of the sta, experiments are carried out to testify the validity of the proposed methodology , the results of which have confirmed that sta is an efficient method . compared with other intelligent optimization algorithms , the simulation examples have demonstrated that the sta has superior features in terms of search ability , convergence rate and stability .to transform a specified problem into the standard form of optimization problem is called optimization modeling , which is the basis for parameter identification and system control .the standard optimization problems should consist of objective function and decision variables , while optimization algorithms are used to find a global optimal solution to the objective function restricted to some additional constraints . in this paper ,the following class of discrete nonlinear systems described by the state space model is considered : where , is the state vector , is the input , is the output , and are unknown parameter vectors that will be identified , and and are nonlinear functions . without loss of generality , let ] be the estimated rearranging vector .+ the basic thought of system identification is to compare the real system responses with the estimated system responses .moreover , to accurately estimate the , some assumptions on the nonlinear systems are required : + ( 1 ) the system output must be available for measurement .+ ( 2 ) system parameters must be connected with the system output .+ to deal with the problem of parameter estimation , a specified problem should be formulated as an optimization problem . in this study , the decision variables are the estimated parameter vector , while the objective function is chosen as the following mean squared errors(mse ) : ^ 2,\ ] ] where , is the length of sampling data , ] are real and estimated values at time , respectively . 
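as an illustration of how the mse objective used for identification can be evaluated in practice, the sketch below simulates a candidate model over the recorded input sequence and accumulates the squared output error; the function-pointer interface and array layout are assumptions made for this example and are not part of the original formulation. ....
/* Hedged sketch of the MSE objective used for parameter estimation.
 * model_step() stands for the user-supplied discrete nonlinear model,
 * advancing the state x and producing the estimated output y_hat for one
 * sample; its exact form depends on the system being identified.         */
#include <stddef.h>

typedef void (*model_step_fn)(double *x, size_t nx, double u,
                              const double *theta, double *y);

double mse_objective(const double *theta,      /* candidate parameters   */
                     const double *u,          /* recorded inputs        */
                     const double *y_meas,     /* measured outputs       */
                     size_t L,                 /* number of samples      */
                     double *x, size_t nx,     /* state, pre-initialised */
                     model_step_fn model_step)
{
    double sum = 0.0, y_hat;
    for (size_t k = 0; k < L; k++) {
        model_step(x, nx, u[k], theta, &y_hat);    /* estimated output   */
        double e = y_meas[k] - y_hat;
        sum += e * e;
    }
    return sum / (double)L;                        /* mean squared error */
}
....
an optimiser (ga, pso or sta) then searches over the candidate parameter vector passed to this function so as to minimise the returned value.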
+ it is obvious to find that the mse is the function of variable vector , and then , the optimization problem will be solved by optimization algorithms which will minimize the mse value so that the real nonlinear system is actually estimated .the block diagram of the nonlinear system parameter estimation is given in fig.[fig1 ] .when the nonlinear system model is estimated , an off - line pid controller is then designed to guarantee the stability and other performances of the system .the reason why the pid controller is adopted is that it is the most widely used controller for application in industrial processes .the continuous form of a pid controller can be described as follows : ,\ ] ] where , is the error signal between the desired and actual outputs , is the control force , are the proportional gain , integral time constant and derivative time constant , respectively . by using the following approximations : where , is the sampling period , then ( 5 ) can be rewritten as \},\ ] ] which is called the place type , and in most case , the increment style as described following is more practical : +\frac{t}{t_i}e(k)+\frac{t_d}{t}[e(k)-2e(k-1)+e(k-2)]\}\\ & = u(k-1)+ k_p[e(k)-e(k-1 ) ] + k_ie(k)+ k_d[e(k)-2e(k-1)+e(k-2 ) ] , \end{array}\ ] ] where , and are integral gain and derivative gain , respectively .+ [ ] the block diagram of the design of an off - line pid controller is illustrated in fig.[fig2 ] , where is the reference output , and is the system output at the sampling point .optimization algorithms are used to adjust the pid controller parameters such as and . in the same way ,mean squared errors will be defined as the objective function ^ 2.\ ] ]let s consider the following unconstrained optimization problem : where , is a objective function . in a numerical way, the iterative method is adopted to solve the problem , the essence of which is to update the solution found so far .when thinking in a state and state transition way , a solution can be regarded as a state , and the updating of a solution can be considered as a state transition process .+ based on the thought stated above , the form of state transition algorithm can be described as follows , where , stands for a state , corresponding to a solution of a optimization problem ; and are state transition matrices , which are usually transformation operators ; is the function of variable and historical states ; is the objective function or evaluation function .+ using various types of space transformation for reference , four special state transformation operators are designed to solve continuous function optimization problems .+ ( 1 ) rotation transformation where , , is a positive constant , called rotation factor ; is a random matrix with its elements belonging to the range of [ -1 , 1 ] and is 2-norm of a vector .it has proved that the rotation transformation has the function of searching in a hypersphere . 
+ ( 2 )translation transformation + , is a positive constant , called translation factor ; is a random variable with its elements belonging to the range of [ 0,1 ] .it has illustrated the translation transformation has the function of searching along a line from to at the starting point , with the maximum length of .+ ( 3 ) expansion transformation where , is a positive constant , called expansion factor ; is a random diagonal matrix with its elements obeying the gaussian distribution .it has also stated the expansion transformation has the function of expanding the elements in to the range of [ - , + ] , searching in the whole space .+ ( 4 ) axesion transformation where , is a positive constant , called axesion factor ; is a random diagonal matrix with its elements obeying the gaussian distribution and only one random index has nonzero value . as illustrated in , the axesion transformation is aiming to search along the axes .+ when using these transformation operators into practice , an important parameter called search enforcement(se ) is introduced to describe the times of certain transformation .+ the main procedures of the version of state transition algorithm in can be outlined in the following pseudocode .where , _ fc _ is a constant coefficient used for lessening the , and the translation operator will only be performed when a better solution is obtained .for both the identification of nonlinear system and the design of an off - line pid controller , when the optimization algorithms are utilized , they help to minimize the mean squared errors ( mse ) . in other words ,the mse or the evaluation of the objective function will guide the search of the algorithms .different from methods based on gradient , the termination criterion of intelligent optimization algorithms usually are not the precision of the gradient but a prespecified maximum number of iterations . + for comparison , the maximum iterations , population size or search enforcement are the same , and they are fixed at 100 and 30 , respectively . to be more specific , in pso , , and will decrease in a linear way from 0.9 to 0.4 , as suggested in . in sta , the rotation factor will decrease in an exponential way with base from to , and translation factor , expansion factor , axesion factor are all constant at 1 . for ga , we use the matlab genetic algorithm toolbox v1.2 from http://www.sheffield.ac.uk/acse/research/ecrg/getgat.html . in this paper , the following two instances are studied .+ + * example 1 .* an unstable nonlinear system is described by where , are to be estimated .the real parameters of the nonlinear system are assumed to be =[0.5,0.3,1.8,0.9] ] , + considering the randomness of the stochastic optimization algorithms , 30 independent trials are run . 
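before turning to the comparison, the four transformation operators and the iteration outlined in the pseudocode above can be sketched in python; the operator formulas below follow the verbal descriptions given earlier, while the function names, the greedy acceptance and the simple schedule for the rotation factor are our own simplifications and should not be read as the reference implementation.

import numpy as np

def rotation(x, alpha, se):
    # search inside a hypersphere of radius alpha centred at x
    n = len(x)
    norm = np.linalg.norm(x) + 1e-12
    return [x + alpha / (n * norm) * np.random.uniform(-1, 1, (n, n)) @ x
            for _ in range(se)]

def translation(x, x_old, beta, se):
    # search along the line from x_old to x, starting at x
    d = x - x_old
    d = d / (np.linalg.norm(d) + 1e-12)
    return [x + beta * np.random.uniform() * d for _ in range(se)]

def expansion(x, gamma, se):
    # expand every component of x, searching the whole space
    return [x + gamma * np.random.randn(len(x)) * x for _ in range(se)]

def axesion(x, delta, se):
    # search along a single, randomly chosen coordinate axis
    cands = []
    for _ in range(se):
        step = np.zeros(len(x))
        step[np.random.randint(len(x))] = np.random.randn()
        cands.append(x + delta * step * x)
    return cands

def sta(fobj, x0, se=30, iters=100, alpha_max=1.0, alpha_min=1e-4, fc=2.0,
        beta=1.0, gamma=1.0, delta=1.0):
    # greedy sta loop: keep the best candidate produced by each operator,
    # run translation only when an improvement has just been found, and
    # lessen the rotation factor by fc, restarting it at alpha_max
    x_best = np.asarray(x0, dtype=float)
    f_best = fobj(x_best)
    alpha = alpha_max
    for _ in range(iters):
        for operator, factor in ((expansion, gamma), (rotation, alpha),
                                 (axesion, delta)):
            x_old = x_best.copy()
            cands = operator(x_best, factor, se)
            fvals = [fobj(c) for c in cands]
            k = int(np.argmin(fvals))
            if fvals[k] < f_best:
                x_best, f_best = cands[k], fvals[k]
                for c in translation(x_best, x_old, beta, se):
                    f_c = fobj(c)
                    if f_c < f_best:
                        x_best, f_best = c, f_c
        alpha = alpha / fc if alpha / fc >= alpha_min else alpha_max
    return x_best, f_best

in the experiments of this paper the objective fobj would be the mse of the parameter estimation problem or of the pid tuning problem, and se, iters and the factor values would be set as quoted above.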
in the meanwhile , some statistics , such as _ best _ ( the minimum ) , _ mean _ , _ worst _ ( the maximum ) , _ st.dev _ ( standard deviation ) , are used to evaluate the performance of the algorithms .+ ccccc _ algorithms _ & & & & + ga & 0.4981 & 0.2995 & 1.7946 & 0.8946 + pso & 0.5000 & 0.3000 & 1.8000 & 0.9000 + sta & 0.5000 & 0.3000 & 1.8000 & 0.9000 + ccccc _ algorithms _ & _ best _ & _ mean _ & _ worst _ & _ st.dev _ + ga & 9.6674e-07 & 1.2572e-04 & 4.3492e-04 & 1.4433e-04 + pso & 1.3938e-12 & 2.8000e-03 & 2.7700e-02 & 8.4000e-03 + sta & 5.2364e-12 & 5.2367e-11 & 1.3729e-10 & 3.5120e-11 + table [ tab1 ] lists the best estimated parameters gained by ga , pso and sta , from which , we can find that only pso and sta can achieve the accurate parameters of the real system , and the results obtained by ga are a little far from the real parameters .then , from table [ tab2 ] , it indicates that sta is the most stable algorithm for the problem because the _ mean _ and _ st.dev _ of sta are the smallest .it can also be found the results gained by ga and pso are not so satisfactory since the _ best _ is deviated from the _ mean _ seriously .fig.[fig3 ] illustrates the optimization processes of parameter estimation by using sta compared with other two algorithms in a middle run .it is easy to find that the convergence rate of sta are much faster than that of ga and pso , with no more than 30 iterations , and the changes of parameters with sta are also steadier than other two algorithms .+ then , with the estimated model , an off - line pid controller for this system is designed by using sta , too . as a optimizer , sta is to minimize the mean squared error between the plant output and the desired output . in the experiment ,relative variables are given by + , k_i \in [ 0,1 ] , k_d \in [ 0,1] ] as a vector of estimated parameters and set the control input in this study .the relative variables used in optimization algorithms are given as follows + , t \in [ 0,20 ] , \tau \in [ 0,20] ] , + as shown in table [ tab7 ] , the ga and sta can find a pi controller for the time - delay system , since under the pi controller , the mse is smaller than that of pid controller obtained by pso .table [ tab8 ] indicates that sta has the capacity to find the minimum mse in a much higher probability .fig.[fig6 ] depicts the convergence of pid controller parameters and with ga , pso and sta , respectively , and it is shown that sta can find the optimal parameters in a much faster way .cccc _ algorithms _ & & & + ga & 1.0000 & 0.3196 & 0 + pso & 1.0000 & 0.3393 & 0.2837 + sta & 1.0000 & 0.3196 & 0 + ccccc _ algorithms _ & best & mean & worst & st.dev + ga & 6.3256e-2 & 6.3256e-2 & 6.3256e-2 & 1.0518e-7 + pso & 6.3258e-2 & 6.3274e-2 & 6.3279e-2 & 6.6540e-6 + sta & 6.3256e-2 & 6.3256e-2 & 6.3256e-2 & 4.0097e-15 + + +in this paper , a new optimization algorithm named sta is applied to solve the problems of parameter estimation and controller design for nonlinear systems . as a optimizer, sta is used to achieve the accurate model , and then it is adopted to obtain the optimal off - line pid controller .the experimental results have confirmed the validity of proposed algorithm . by comparison with ga and pso , it is found that sta has stronger global search ability and is more stable in statistics . with regard to the convergence rate, it is also discovered that sta is much faster than its competitors . 
as a novel optimization method ,these applications of sta show that it is a promising alternative approach for system identification and control .the work was supported by the national science found for distinguished young scholars of china ( grant no .61025015 ) , the foundation for innovative research groups of the national natural science foundation of china ( grant no . 61321003 ) and the china scholarship council .minrui fei , dajun du and kang li , a fast model identification method for networked control system , applied mathematics and computation , 205(2)(2008 ) , 658667 .pingkang li , kang li , a recursive algorithm for nonlinear model identification , applied mathematics and computation , 205(2)(2008 ) , 511516 .jing chen , xianling lu , rui ding , parameter identification of systems with preload nonlinearities based on the finite impulse response model and negative gradient search , applied mathematics and computation , 219(5)(2012 ) , 24982505 .junhong li , rui ding , parameter estimation methods for nonlinear systems , applied mathematics and computation , 219(9)(2013 ) , 42784287 .alireza alfi , hamidreza modares , system identification and control using adaptive particle swarm optimization , applied mathematical modelling , 35(2011 ) 1210 - 1221 .hamidreza modares , alireza alfi , mohammad - bagher naghibi sistani , parameter estimation of bilinear systems based on an adaptive particle swarm optimization , engineering applications of artificial intelligence , 23(2010 ) 1105 - 1111 .wei - der chang , nonlinear system identification and control using a real - coded genetic algorithm , applied mathematical modelling , 31(2007 ) 541 - 550 .astrom , b. wittenmark , adaptive control , addison - wesley , massachusetts , 1995 .alfi alireza , pso with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems , acta atuomatica sinica , 37(5)(2011 ) , 541 - 549 .goldberg d e , genetic algorithm in search , optimization and machine learning , nj : addison wesley , 1989 .holland , j.h , adaptation in natural and artificial systems , michigan : the university of michigan press , 1975 .eberhart , r.c . and kennedy , j. , a new optimizer using particles swarm theory , in proceedings of sixth international symposium on micro machine and human science , 1995 , pp.39 - 43 .yuhui shi , russell c.eberhart .empirical study of particle swarm optimization , in proceedings of ieee international congress on evolutionary computation , ( 3)(1999):591 - 600 .murat ihsan kmrc , nedim tutkun , ismail hakki zler , adem akpinar , estimation of the beach bar parameters using the genetic algorithms , applied mathematics and computation , 195(2008),49 - 60 .dionysios c. aliprantis , scott d. sudhoff , brian t. kuhn , genetic algorithm - based parameter identification of a hysteretic brushless exciter model , ieee transaction on energy conversion , 21(1)(2006 ) , 148 - 154 .k. valarmathi , d. 
devaraj , t.k .radhakrishnan , real - coded genetic algorithm for system identification and controller tuning , applied mathematic modelling , 33(2009 ) 3392 - 3401 .romain marion , riccardo scorretti , nicolas siauve , marie - ange raulet and laurent krhenbhl , identification of jiles - atherton model parameters using particle swarm optimization , ieee transaction on magnetics , 44(6)(2008 ) , 894 - 897 .qi li , weirong chen , youyi wang , shukui liu and junbo jia , parameter identification for pem fuel - cell mechanism model based on effective informed adaptive particle swarm optimization , ieee transations on industrial electronics , 58(6)(2011 ) , 2410 - 2419 .r. eberhart , y. shi , comparison between genetic algorithms and particle swarm optimization , annual conference on evolutionary programming , san diego , 1998 .m. clerc , j. kennedy , the particle swarm - explosion , stability , and convergence in a multidimensional complex space , ieee transactions on evolutionary computaion , 6(1)(2002 ) , 58 - 73 .zhou , c.h .yang and w.h .gui , initial version of state transition algorithm , in the 2nd international conference on digital manufacturing and automation , 2011 , pp.644647 .zhou , c.h .yang and w.h .gui , a new transformation into state transition algorithm for finding the global minimum , in the 2nd international conference on intelligent control and information processing , 2011 , pp.674678 .zhou , c.h .yang and w.h .gui , state transition algorithm , journal of industrial and management optimization , 2012 , 8(4 ) : 10391056 .zhou , d.y .gao , c.h .yang , a comparative study of state transition algorithm with harmony search and artificial bee colony , advances in intelligent systems and computing , 212(2013 ) , 651659 .shinskey , process control system : application , design and tuning , mcgraw - hill , 1996 .seng , m.b .khalid , r.yusof , tuning of a neuro - fuzzy controller by genetic algorithm , ieee transactions on system , man and cybernetics(b ) , 29(1999 ) , 226 - 236 .
by transforming the identification and control of nonlinear systems into optimization problems, a novel optimization method named state transition algorithm (sta) is introduced to solve them. in the sta, a solution to an optimization problem is regarded as a state, and the updating of a solution is treated as a state transition, which makes the method easy to understand and convenient to implement. first, the sta is applied to identify the optimal parameters of the estimated system, whose structure is known in advance. with the accurately estimated model, an off-line pid controller is then designed optimally by using the sta as well. experimental results have demonstrated the validity of the methodology, and comparisons of the sta with other optimization algorithms have shown that the sta is a promising alternative method for system identification and control owing to its stronger search ability, faster convergence rate and more stable performance. nonlinear system identification; pid controller; state transition algorithm; optimization algorithms
the wang - landau ( wl ) algorithm , introduced in 2001 , has received much attention and has been applied to a wide range of problems . in most of these investigations ,the authors have applied the wl algorithm to systems with discrete energy levels . however , relatively fewer papers have so far appeared on lattice models with continuous energy spectrum .techniques , in general , to improve the algorithm for different problems have also been proposed .the review illustrates the versatile applications of the wl algorithm in protein folding , fluid simulations , systems with first order phase transitions and other systems with rough energy terrain .some authors find its applications in performing numerical integration .the wl algorithm allows us to calculate the density of states ( dos ) as a function of energy or the joint density of states ( jdos ) as a function of energy and a second variable . for a macroscopic system , the dos ( where , being the bin index ) is a large number and it is convenient to work with its logarithm .since the dos is independent of temperature and contains complete information about the system , the task is to determine it as accurately as possible .the next step involves the determination of partition function ( , boltzmann constant has been set to unity ) at any temperature ( ) by the standard boltzmann reweighting procedure .once the partition function is known , the model is essentially `` solved '' since most thermodynamic quantities at any temperature can be calculated from it .the algorithm is implemented by performing an one - dimensional random walk that produces a `` flat '' histogram in the energy space . for a continuous model , one needs to use a discretization scheme to divide the energy range of interest into a number of bins which label the macrostates of the system . in the wl algorithm ,these macrostates are sampled with a probability which is proportional to the reciprocal of the current dos .the estimate for the dos is improved at each step of the random walk using a carefully controlled modification factor to produce a result that converges to the true dos quickly .a histogram record of all states visited is maintained throughout the simulation .when corresponding to a certain macrostate is modified as , the corresponding is modified as . in the original proposal of wl algorithm ,an iteration is said to be complete when the histogram satisfies a certain `` flatness '' condition .this means that , for all values of , has attained ( or some other preset value ) of the average histogram . in the following iteration, is reduced in some fashion , the s are reset to zero and the process is continued till is as small as or . since the history of the entire sampling process determines the dos , the wl algorithm is non - markovian besides being multicanonical in nature .in course of the random walk in a wl simulation , the fluctuations of energy histogram , for a given modification factor , initially grows with time and then saturates to a certain value .zhou and bhatt carried out a mathematical analysis of the wl algorithm .they provided a proof of the convergence of the iterative procedure and have shown that the fluctuations in histogram , proportional to for a given , cause statistical errors which can be reduced by averaging over multiple simulations .they have also shown that the correlation between adjacent records in the histogram introduces a systematic error which is reduced at smaller .the prediction in ref . 
has been numerically verified by different authors independently .although to obtain a flat histogram is the initial motivation behind the wl algorithm , ref . concluded that flatness is not a necessary criterion to achieve convergence and suggested that one should instead focus on the fluctuations of the histogram rather than the `` flatness '' .they had shown that visits on each macroscopic state is enough to guarantee the convergence .in fact , fluctuations in the histogram is intrinsic to wl algorithm .these fluctuations lead to a statistical error in the dos which scales as , for a given .the iterative wl algorithm partially reduces this statistical fluctuations by decreasing monotonically .however ref . clearly illustrates that even if is reduced to a very small value according to the original prescription , the statistical error stops to decrease at a certain point . in practice therealways exists a systematic error in the simulation which is a function of and the correlation between adjacent records in the histogram . ref . observed that this systematic error decreases when either or the correlation decreases . in this context , we refer to the work of morozov and lin who presented a study on the estimations of accuracy and convergence of the wang - landau algorithm on a two level system with a significant efficiency improvement in .the wl algorithm compares and , i.e , dos before and after an attempted move , but it does not require to be close to .this is why ref . suggested the use of cluster algorithms that allow `` nonlocal '' moves in the parameter space . the ref . rightly pointed out that the update schemes for the underlying model certainly have an effect on the outcome . in the present paperwe suggest a method for the spin update scheme of a lattice model with continuous energy spectrum , which reduces the autocorrelation time by an appreciable amount compared to the conventional spin update scheme .the suggested spin update method to obtain a less correlated configuration has also the advantage that this method is free from tuning any adjustable parameter .the method is described in section [ ct ] .we also investigate the growth of the histogram fluctuations in the one - dimensional lebwohl - lasher ( ll ) model , described in section [ model ] , to check if the nature of the dependence of the maximum of the histogram fluctuations on the modification factor is model independent or not .we mention in passing that ref . suggested the model - independent nature of the maximum of the histogram fluctuations by performing simulations on two discrete ising models and concluded that many more simulations on different models are needed to confirm this universality nature . confirmed this universality behavior for two continuous lattice spin models with spin dimensionality two .we have found that for the present model ( spin dimensionality three ) , the fluctuations in the energy histogram , after an initial increase , saturates to a value which is inversely proportional to and confirm that this feature is generic to the wl algorithm . in the second part of the work ,we have carried out the wl simulation with the proposed spin update scheme to estimate the canonical averages of various thermodynamic quantities for lattices of reasonably large size where minimum number of visits to each macrostate are .results obtained from our simulation are compared with the exact results available for the model .the rest of the paper is arranged as follows . 
in section [ model ] , we have described the model .the computational techniques are discussed in section [ ct ] .section [ rd ] presents our results and discussions .section [ conclu ] draws the conclusions .for the purpose of investigation , we have chosen an one - dimensional array of three - dimensional spins ( , where is the space dimensionality and is the spin dimensionality ) interacting with nearest neighbors ( nn ) via a potential where is the second legendre polynomial and is the angle between the nearest neighbor spins and ( the coupling constant in the interaction has been set to unity ) .the spins are three - dimensional and headless , i.e , the system has the as well as the local symmetry , characteristic of a nematic liquid crystal .the model , known as the lebwohl - lasher ( ll ) model , is the lattice version of the maier - saupe ( ms ) model which describes a nematic liquid crystal in the mean field approximation .being a low - dimensional model with nn interaction , the ll model does not exhibit any finite temperature phase transition .this model has been solved exactly by vuillermot and romerio in , using a group theoretical method .the results obtained in are quoted below .the partition function for the -particle system is given by ^n(\widetilde k^{1/2 } ) \label{eqn2}\ ] ] where is a dimensionless quantity . is the dawson function given by the dimensionless internal energy , entropy and the specific heat are given by \label{eqn5}\end{gathered}\ ] ] ^{-1 } \left ( \widetilde k^{1/2}\right ) \\-\frac{1}{2 } \widetilde k d^{-2}\left(\widetilde k^{1/2}\right ) \label{eqn6}\end{gathered}\ ] ] we decided to choose this model to test the performance of wl algorithm using the suggested spin update scheme so that a comparison can be made with the exact results available for the model .in the first part of this section , we will describe the computational techniques used to determine the fluctuations in the energy histogram . in the later part of this section, we will discuss the method for the new spin update scheme .let us first explain the notations and symbols relevant to the present work .the saturation value of the energy histogram fluctuation in the iteration is represented by .let be the modification factor for the iteration .one usually starts with a modification factor and uses a sequence of decreasing s ( ) defined in some manner .one monte carlo ( mc ) sweep is taken to be completed when the number of attempted single spin moves equals the number of spins in the system .the error in the dos after the iteration is directly related to for , the saturation values of the fluctuations . in the wl algorithmthe logarithm of the dos after iterations is given by where is the accumulated histogram count for the energy bin during the iteration . in order to get an idea of the fluctuations in the histogram and its growth with the number of mc sweeps, we subtract the minimum of the histogram count which occurs in the histogram after the mc sweep has been completed during the iteration , i.e. , we consider the quantity it may be noted that does not refer to any particular bin and may occur in any of the visited bins . the quantity is now summed over all bins to give . is thus a measure of the fluctuations which occurs in the mc sweep during iteration and is a sort of average over all macrostates or bins . 
fluctuates with because of statistical errors and its mean value taken over is nothing but .the error of the logarithm of the dos , summed over all energy levels or bins , after the completion of iterations is therefore given by eq .( [ eqn10 ] ) means that the error depends only on the fluctuations in histogram and the sequence of modification factors .when the values of are predetermined , the fluctuations in histogram , i.e. , , becomes the only determining factor for the error .for this reason the observable , defined by eq .( [ eqn9 ] ) , is considered to be a good measure of the fluctutations in histogram .however , we point out that because of the summation over the index in eq .( [ eqn9 ] ) , the nature of the distribution of the errors over the energy bins is not reflected in the summed quantity .what we get instead is an error which has been summed over all the energy bins .since the predicted value of the error is of the order of , one expects that the histogram saturation value , for the iteration , should be proportional to .now we discuss the method to generate a subsequent less - correlated spin configuration . in the conventional spin update method for a continuous lattice spin model, the orientation of each spin is stored in terms of the direction cosines . to generate a new configuration ( microstate ) , a spin is selected at random and each direction cosine of it is updated as for ( ) where the parameter `` p '' denotes the amplitude of the random angular displacements , chosen such that approximately half of the configurations are accepted and half rejected and is a random number between to .we have seen for a number of continuous lattice spin models that the results for the thermodynamic quantities become very sensitive to the value of the parameter `` p '' .`` p '' is generally taken such that and the choice of `` p '' also depends on the systems we are working on .the reason for taking is that small values of `` p '' correspond to small changes in the direction of the spin , i.e. , the energy cost of an attempted move will be small . however , this is not the only form of update , nor is it known whether this is the most efficient form .the thing is , there is quite a lot of flexibility about the choice of the new state for the spins .a good discussion of it may be found in ref . . in the present work ,we propose a novel protocol to generate a less - correlated spin configuration in the following manner .we take a random unit vector and a spin update is defined as where is the dot product of and .this represents a reflection with respect to the hyperplane orthogonal to and this is an idempotent operation .the idea came from wolff .one may think of a linear transformation such that .this linear transformation has the property i.e. , idempotent and \cdot[r(\vec r)\vec s_2]=\vec s_1 \cdot \vec s_2\ ] ] i.e. , the hamiltonian ( [ eqn1 ] ) is invariant under global r transformations .this spin update method reduces the autocorrelation time to a considerable amount and consequently systematic error decreases .moreover , defining a spin update in that way , the algorithm becomes free from tuning any adjustable parameter even while simulating a lattice spin model with continuous energy spectrum .this spin update method has resulted in efficient simulation of continuous lattice spin models with symmetry .the energy of the ll model is a continuous variable and it can have any value between to where is the system size . 
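to make the scheme concrete, the following python sketch combines the lebwohl-lasher chain energy, the proposed reflection update and the wang-landau acceptance rule for binned energies; it is a minimal illustration, assuming that the chosen energy window covers all visited states, and the function and variable names are ours.

import numpy as np

def ll_energy(spins):
    # lebwohl-lasher energy of an open chain: -sum_i P2(s_i . s_{i+1}),
    # with P2(x) = (3 x^2 - 1) / 2 and the coupling constant set to unity
    dots = np.sum(spins[:-1] * spins[1:], axis=1)
    return -np.sum(0.5 * (3.0 * dots ** 2 - 1.0))

def reflect(spin):
    # proposed update: reflect the spin about the hyperplane orthogonal to a
    # random unit vector r, i.e. s' = s - 2 (s . r) r; the move is idempotent
    # and needs no tunable amplitude parameter
    r = np.random.randn(3)
    r /= np.linalg.norm(r)
    return spin - 2.0 * np.dot(spin, r) * r

def wl_sweep(spins, ln_g, hist, ln_f, e_min, bin_width, e_current):
    # one monte carlo sweep of the wang-landau random walk; spins is an
    # (n, 3) array of unit vectors, ln_g and hist are arrays over the bins
    n = len(spins)
    for _ in range(n):
        i = np.random.randint(n)
        old_spin = spins[i].copy()
        spins[i] = reflect(spins[i])
        e_new = ll_energy(spins)
        b_old = int((e_current - e_min) / bin_width)
        b_new = int((e_new - e_min) / bin_width)
        # accept with probability min(1, g(E_old) / g(E_new)); moves leaving
        # the chosen energy window are rejected
        if 0 <= b_new < len(ln_g) and \
           np.log(np.random.rand()) < ln_g[b_old] - ln_g[b_new]:
            e_current = e_new
        else:
            spins[i] = old_spin
        b = int((e_current - e_min) / bin_width)
        ln_g[b] += ln_f      # ln g -> ln g + ln f for the visited bin
        hist[b] += 1
    return e_current

iterating such sweeps, monitoring the minimum number of visits per bin, and reducing ln f between iterations reproduces the procedure described above.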
to discretizethe system , we have chosen an energy range ( ) and divided this energy range into a number of bins ( macrostates ) each having a width , say .in the present work , the bin width is taken to be .[ cols="^,^ " , ] [ table1 ]to summarize , we have tested the performance of the wl algorithm in a continuous lattice spin model , namely , the ll model which describes a nematic liquid crystal in the mean field approximation . the results obtained from our simulationare compared with the exact results available for this model .it has been observed that the results obtained tally accurately with the exact results .we focus on the fluctuations of histogram and replace the `` flatness '' criterion with that of minimum histogram .we have found that in this continuous lattice model , the fluctuations in the energy histogram , after an initial accumulation stage , saturates to a value that is proportional to where is the modification factor in the wl algorithm and confirm that this behavior is generic to the wl algorithm .we also present a novel method for spin update scheme to obtain a subsequent configuration which is less - correlated than the previous method .the proposed spin update scheme makes the wl `` driver '' to move from one sampling point to the next faster . as a result ,the autocorrelation time between successive moves decreases and the convergence becomes faster .it may be noted that the wl algorithm only asks for the next sampling point ( say ) with probability distribution where and are the exact and the estimated dos respectively .a previous study suggested that -fold way updates yields better performance in flat - histogram sampling .however , dayal argued that the performance is limited by the added expense of the cpu time needed to implement the -fold way updates .the proposed method is simple to implement and has also the merit that it makes us free from tuning any adjustable parameter while simulating a continuous lattice spin model .although the method has been applied to a liquid crystalline system in the present work , the method can , in general , be applied to any lattice spin model with continuous energy spectrum .this method has resulted in efficient simulation of continuous models with symmetry .finally , we stress that the focus in this paper is to test the performance of the wl algorithm in continuous lattice spin models with the proposed spin update scheme .we hope that this spin update method will be of general interest in the area of research in monte carlo simulations of continuous lattice spin models .i wish to thank prof .s. k. roy for fruitful discussions and critical reading of the manuscript .this work is supported by the ugc dr .d. s. kothari post doctoral fellowship under grant no .f-2/2006(bsr)/13 - 398/2011(bsr ) .part of the computations of this work has been done using the computer facilities of the tcmp division of saha institute of nuclear physics , kolkata , india .i thankfully acknowledge the unanimous referee for a number of suggestions in improving the manuscript .99 f. wang and d. p. landau , phys .lett . * 86 * , 2050 ( 2001 ) ; phys .e * 64 * , 056101 ( 2001 ) . c. yamaguchi and y. okabe , j. phys .a * 34 * , 8781 ( 2001 ) .y. okabe , y. tomita and c. yamaguchi , comput .commun . * 146 * , 63 ( 2002 ) .m. s. shell , p. g. debenedetti and a. z. panagiotopoulos , j. chem .phys * 119 * , 9406 ( 2003 ) .n. rathore and j. j. de pablo , j. chem .phys * 116 * , 7225 ( 2002 ) ; n. rathore , t. a. knotts and j. j. de pablo , _ ibid . 
_ * 118 * , 4285 ( 2001 ) . t. s. jain and j. j. de pablo , j. chem .phys * 116 * , 7238 ( 2002 ) .q. yan , r. faller and j. j. de pablo , j. chem .phys * 116 * , 8745 ( 2002 ) ; e. b. kim , r. faller , q. yan , n. l. abbott and j. j. de pablo , _ ibid . _* 117 * , 7781 ( 2002 ) .d. jayasri , v. s. s. sastry and k. p. n. murthy , phys .e * 72 * , 036702 ( 2005 ) .m. chopra and j. j. de pablo , j. chem .phys * 124 * , 114102 ( 2006 ) ; e. a. mastny and j. j. de pablo , _ ibid . _ * 122 * , ( 2005 ) .y. w. li , t. wust , d. p. landau and h. q. lin , comput .. commun . * 177 * , 524 ( 2007 ) . t. wust and d. p. landau , phys .lett . * 102 * , 178101 ( 2009 ) .d. t. seaton , t. wust and d. p. landau , phys .e * 81 * , 011802 ( 2010 ) .p. poulain , f. calvo , r. antoine , m. broyer and p. dugourd , phys .e * 73 * , 056704 ( 2006 ) . c. zhou , t. c. schulthess , s. torbrugge and d. p. landau , phys .lett . * 96 * , 120201 ( 2006 ) .k. mukhopadhyay , n. ghoshal and s. k. roy , phys .a * 372 * , 3369 ( 2008 ) .s. bhar and s. k. roy , comput .commun . * 180 * , 699 ( 2009 ) .s. sinha and s. k. roy , phys .a * 373 * , 308 ( 2009 ) . c. yamaguchi and n. kawashima , phys .e * 65 * , 056710 ( 2002 ) .b. j. schulz , k. binder and m. muller , int .c , * 13 * , 477 ( 2002 ) .b. j. schulz , k. binder , m. muller and d. p. landau , phys .e , * 67 * , 067102 ( 2003 ) .b. a. berg , comput .phys . commun . *153 * , 397 ( 2003 ) .q. yan and j. j. de pablo , phys .lett . * 90 * , 035701 ( 2003 ) ; m. s. shell , p. g. debenedetti and a. z. panagiotopoulos , j. phys .b * 108 * , 19748 ( 2004 ) .s. trebst , d. a. huse and m. troyer , phys .e * 70 * , 056701 ( 2004 ) .p. dayal , s. trebst , s. wessel , d. wurtz , m. troyer , s. sabhapandit and s. n. coppersmith , phys .lett . * 92 * , 097201 ( 2004 ) .p. virnau , m. muller , l. g. macdowell , k. binder , j. chem .phys * 121 * , 2169 ( 2004 ) . c. zhou and r. n. bhatt , phys .e * 72 * , 025701(r ) ( 2005 ) .a. troster and c. dellago , phys .066705 ( 2005 ) . h. k. lee , y. okabe and d. p. landau , comput .. commun . * 175 * , 36 ( 2006 ) .d. earl and m. deem , j. phys .b * 109 * , 6701 ( 2005 ) .a. n. morozov and s. h. lin , phys .e * 76 * , 026701 ( 2007 ) .r. e. belardinelli and v. d. pereyra , phys .e * 75 * , 046701 ( 2007 ) ; j. chem .* 127 * , 184105 ( 2007 ) . c. zhou and j. su , phys .e * 78 * , 046705 ( 2008 ) .d. p. landau , s. h. tsai and m. exler , am .* 72 * , 1294 ( 2004 ) .a. n. morozov and s. h. lin , j. chem .phys * 130 * , 074903 ( 2009 ) .p. a. lebwohl and g. lasher , phys .rev . a * 6 * , 426 ( 1972 ) .w. maier and a. saupe , z. naturforsch .a * 13 * , 564 ( 1958 ) ; _ ibid . _ * 14 * , 882 ( 1959 ) ; _ ibid . _* 15 * , 287 ( 1960 ) .p. a. vuillermot and m. v. romerio , j. phys .c * 6 * , 2922 ( 1973 ) ; commun .phys . * 41 * , 281 ( 1975 ) .m. abramowitz and i. stegun , a handbook of mathematical functions , dover , new york , 1970 .c. zannoni ( chapter 9 ) in the molecular physics of liquid crystals edited by g. r. luckhurst and g. w. gray , academic press , 1979 .monte carlo methods in statistical physics , edited by m. e. j. newman and g. t. barkema , ( clarendon , oxford , 1999 ) .u. wolff , phys .62 * , 361 ( 1989 ) ; nucl . phys .b * 322 * , 759 ( 1989 ) .s. sinha and s. k. roy , phys .e * 81 * , 041120 ( 2010 ) .s. sinha , phys . rev .e * 84 * , 010102(r ) ( 2011 ) .n. madras and a. d. sokal , j. stat .phys . * 50 * , 109 ( 1988 ) .
we present a study of the performance of the wang-landau algorithm in a lattice model of liquid crystals, which is a continuous lattice spin model. we also propose a novel spin update scheme for continuous lattice spin models. the proposed scheme reduces the autocorrelation time of the simulation and results in faster convergence. + keywords: monte carlo methods, computational techniques, phase transitions
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ how much grit do you think you ve got ?+ can you quit a thing that you like a lot ? + _ `` on quitting '' by edgar guest _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ how do people achieve mastery ? what distinguishes high achievers from average performers ?performance generally improves with practice , as demonstrated on a variety of tasks in the laboratory setting and in the field , suggesting that with enough practice even mediocre performers can approach the mastery of successful individuals . however , not all practice is equally effective in helping achieve mastery . deliberate practice , which emphasizes quality , not quantity of practice , improves performance most .the search for individual traits responsible for variations in the capacity for deliberate practice uncovered _ grit _ , a trait related to psychological constructs , such as persistence , resilience , and self - control , which enables individuals to persevere in their efforts to achieve their goals .grit may explain the self - discipline to continue practicing , even when faced with temporary setbacks , such as a short - term drop in performance .recent proliferation of behavioral data collected `` in the wild '' enables longitudinal studies to explore and validate these findings .we carry out an empirical analysis of online game play to quantify individual traits associated with success .the data we study consists of records of over 850k players of a game called axon .following , who first studied this data , we operationalize performance as player s score , and practice as playing rounds of the game .like other behavioral data , axon data presents analytic challenges .it is extremely _ noisy _ : requiring aggregating variables over the population .it is also _ heterogeneous _ : composed of differently - behaving subgroups varying in size according to the pareto distribution . as a result, the trends observed in aggregated data may be quite different from those of the underlying subgroups . to address this effect , known as simpson s paradox , we disaggregate data by user skill and activity .we segment player activity into sessions , where a session is a sequence of games without an extended break .this allows us to compare sessions according to intensity .after disaggregating data , we can more accurately measure the relationship between performance and practice . while performance generally improves with practice , we find that players tend to quit after an abnormally high score , suggesting significant rewards in casual games may instead encourage players to leave .interestingly , we find that players who are less likely to quit after a score drop tend to become more successful later .quitting is not as strongly correlated with skill , suggesting that it is perseverance to poor outcomes , i.e. , grit , that contributes to player success . 
to identify a plausible mechanism of game play , we train an -machine , a type of a hidden markov model , on the data , and find models that maximizes the accuracy of predicting players performance .we find that players are most predictable when we model how their behavior is affected by changes in score from their previous game , instead of , for example , the change from their mean score .this model leads to insights not just in how players leave the game but the dynamics of performance as well .the sheer size of behavioral data opens new avenues for the study of individual variability of human behavior .the empirical methods and models described in this paper can help game designers create more engaging games that keep people playing longer .investigation of factors associated with disengagement and quitting could reduce churn rate , increase retention , and improve user experience in social media and mobile applications .more interestingly , the methods proposed here could pave the way to individual assessment and performance prediction from observed behavior .axon is built to improve skill through engaged learning , therefore it should come as no surprise that scores , a proxy for skill , increase with the number of games played , a proxy for practice , in agreement with studies of skill acquisition in laboratory and the field . in recent years , a more nuanced view of learning has emerged , one that emphasizes deliberate practice as a way to improve performance .psychologists identified individual traits thought to be responsible for variations in the capacity for deliberate practice , such as _ grit _ , which allows individuals to persevere in their efforts to achieve their long - term goals in the face of obstacles and challenges . while grit specifically refers to the ability to sustain efforts and passion for goals over extended periods of time , it is closely related to other psychological constructs such as persistence , resilience , conscientiousness , and self - control , which have been linked to achievement .these traits may explain why some individuals have the self - discipline to continue practicing , even when faced with temporary setbacks , such as a drop in performance .while grit , as other psychological traits , is usually measured through surveys , identifying proxies of these traits , which can be computed from the observed individual behavior , has many practical benefits . to better understand why users choose to persevere or quit , it is important to understand the psychology of motivation , especially the peak - end effect , in which the individual s peak or last experience most affects their recall and motivation. early work on goal - setting theory , e.g. , suggests that moderate challenges encourage people to continue with a task , while extremely easy or difficult tasks reduce motivation . 
other research, however , has found that the peak and end experiences change users perception of the task , which may then change their motivation to continue .other works have looked at how people play games , and use computers generally .for example , providing rewards increases the motivation for users to continue a task , although the peak - end effect can sometimes affect user assessment of game difficulty and fun , or otherwise affect user judgments .our paper hypothesizes that the peak - end effect may similarly help axon game play .stafford & dewar empirically studied the impact of practice on performance using axon game data .they examined the effects of _ practice amount _ and _ practice spacing _ on performance , and found that on average , game scores increased over consecutive plays , but there were significant differences between the higher scoring and lower scoring players .the best performers had higher average scores than worse performers starting from the first games they played , and their score advantage grew with practice , in agreement with our paper s results .stafford & dewar also found that the longer the time period between the players first and last games , the higher their scores are .specifically , comparing players who played their first ten games within a 24-hour period with `` rested players '' who split their ten games over a longer period , they found that rested players had higher average scores than the former group .they concluded that breaking up practice and resting , i.e. , distributing practice , may benefit subsequent performance .we show that after accounting for temporal structure of game play , this effect mostly disappears .our work differs significantly from , and expands on , work by stafford & dewar .first , we split games into sessions of higher activity . as we show , this provides significant insight into the motivations users have to continue to play , such as stopping after a big score increase .second , we find that scores increase significantly between sessions , but not between games , if we do not split them up into sessions , therefore the `` rested player '' result from stafford & dewar may be due to peak score in the last game of a session and not rests between games .furthermore , we find players who do particularly well in the game appear to exhibit grit , by refusing to quit when they perform poorly . 
finally , we model users using a theoretically optimal and minimal hmm called an -machine .it has been used recently to predict future user activity on social media , but is , to the best of our knowledge , a novel modeling framework in the field of computational psychology .the axon game ( http://axon.wellcomeapps.com ) is a casual single player online game , where the player controls the growth of an axon ( figure [ fig : axon - screenshots ] ) .the game does not have levels of difficulty or time limits .performance is characterized by a score that represents the length of the axon .stochasticity is introduced in the game by `` power - ups '' ( figure [ fig : axon - screenshots]b ) , which can significantly boost the score .stafford & dewar published the axon game data at https://github.com/tomstafford/axongame .the data , collected between 14 march and 13 may , 2012 , contains records of over 3 m games played by more than 854k players .each record contains the score and time of the game ( with hourly resolution ) , and a `` machine identifier '' , an anonymized identifier derived from the web browser from where the game was accessed . following stafford & dewar , we assume that each machine identifier corresponds to a unique player . this need not be true , for instance , for shared computers or when a single player plays on multiple devices , but the size of the data is able to account for the noise produced by this phenomenon .the code used for our study is available at https://github.com/agarwalt/axongame .the vast majority of people played only a few games : 92% played fewer than eight games , with 28k playing more than 12 games .people who play few games may be systematically different from dedicated players who play many games ; consequently , aggregating games across both groups can lead to simpson s paradox . to address this challenge ,we segment each player s activity into sessions , where a session is a sequence of games without a long break ( two hours or longer ) between consecutive games .people who play few games may be systematically different from dedicated players who play many games ; consequently , aggregating games across both groups can lead to simpson s paradox . to address this challenge , we segment each player s activity into sessions , where a session is a sequence of games without a long break between consecutive games ( figure [ fig : players - vs - sessions - duration - between - sessions]a ) .we use two hours as break threshold , but results do not change substantively when a different threshold , such as six hours , is used . due to the long - tailed distribution of break time between consecutive games ,changing the threshold affects only a small number of sessions .segmenting player activity allows us to compare people who behave similarly , i.e. , those who play similar number of games , rather than pool people with different behaviors together .figure [ fig : players - vs - sessions - duration - between - sessions](b ) shows the distribution of the number of sessions .most players ( about 90% ) have only one session , with the remaining 85k players who play more than one session .daily and weekly peaks are present in the distribution of breaks between consecutive sessions ( figure [ fig : players - vs - sessions - duration - between - sessions]c ) . 
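the segmentation itself is straightforward; the python sketch below shows one way to assign session labels under the two-hour criterion, assuming the records have been loaded into a pandas dataframe with columns player, time and score (the column names are ours and not those of the published file).

import pandas as pd

def split_into_sessions(games, break_hours=2):
    # 'time' is assumed to be a datetime column with hourly resolution;
    # a new session starts at a player's first game and whenever the break
    # since that player's previous game is break_hours or longer
    games = games.sort_values(['player', 'time']).copy()
    gap = games.groupby('player')['time'].diff()
    new_session = gap.isna() | (gap >= pd.Timedelta(hours=break_hours))
    games['session'] = new_session.cumsum()               # global session id
    games['game_index'] = games.groupby('session').cumcount() + 1
    return games

changing break_hours to, say, six reproduces the robustness check mentioned above.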
of the 990k total sessions ,more than 90% last less than 1 hour ( figure [ fig : duration - of - sessions - games - per - session]b ) , and 242k sessions played by 218k players have more than three games .( figure [ fig : duration - of - sessions - games - per - session]a ) .stafford & dewar found that the best and worst players , i.e. , those who had achieved biggest and smallest personal high scores , respectively , differed in performance from the very first games they played . to partially control for this variability , we segment players by skill .we distinguish between two types of skill , which we refer to as _ talent _ and _ success_. talent is the initial skill of a player , which we operationalize as the median score of the first three games the individual ever played .success is measured by the median of the three highest scores , or as many games as are available if less than three , after removing the first three , therefore we can only discuss the success of players who play four or more games .we follow tradition to use a player s best performance as a measure of success , rather than , for instance , an average value over time .other measures of success , such as the median score over all triplets of consecutive games , are strongly correlated with our chosen criterion and do not change our conclusions .further , using the median mitigates the effects of outliers .we model game play activity , including quitting the game and player s performance ( scores ) , using an -machine , a type of hidden markov model ( hmm ) with two useful properties .first , it is optimally predictive , meaning it produces the least uncertainly ( entropy ) about the future behavior of players .second , it is a minimally complex unifilar hmm , meaning that the model requires the fewest number of effective states , if transitions occur deterministically between states after each successive game ( for a review of -machines , see ) . in this paper, we fit an -machine to our data using the causal state splitting reconstruction ( cssr ) algorithm .this algorithm groups past behaviors together into a single effective state if they make similar predictions .importantly , this model is able to predict both score changes and the probability a user will quit the game , a property not possible with , for example , autoregressive models .the -machine , however , requires a few important assumptions to achieve these surprising feats .first , the model assumes a discrete time process that can be described with an alphabet . 
for continuous data, this means binning a real output into countable sets .note that discretization results in information loss , and the machine will strongly depend on how the data is binned , but we discuss methods to address this issue in the next section .we define time " to be the game index , while the discrete alphabet is defined below .next , an -machine assumes that the underlying sequence is stationary .this is clearly not true with respect to the absolute score ( figure [ fig : score - vs - index - quartiles ] ) , which tends to increase on average .one way to approximate a stationary sequence , however , is to take the score difference between consecutive games .we find that the difference is roughly independent of game index , as expected of a stationary distribution , until the final game before a user quits a session , when scores increase dramatically , while the quit rate itself is not strongly game index dependent .we therefore create sequences of binned score differences ; e.g. , ( 0 to 7000 points better then the previous game ) , ( over 7000 points better than the previous game ) , ( quit ) ; which we model with an -machine .finally , the -machine assumes that we know the joint probabilities of the entire past and future of a sequence , but this is simply not practical .instead we allow for some error to enter the model , given a realistic amount of data . while binning data based on score differences leads to a roughly stationary sequence distribution , other potential sequences , such as binning by the difference between the current and mean scores , may be better modeled by the -machine , and lead to more accurate predictions .further , the sequences may be temporally correlated ; past games may strongly affect future behavior , and we must pick a value of that is sufficiently long to produce accurate predictions , but not too long due to data limitations ( and eventually computational cost ) . in short , we must have a methodology for testing the goodness of the model .the intuitive way to do this is to train the -machine on a portion of data and test it on remaining portion , and determine whether we correctly discovered the next letter of our sequence using the model . by creating a cost function when our model creates an incorrect prediction, we can determine what values of or the alphabet improve predictions . because our alphabet size , this becomes a multiclass classification problem .we can simplify this as a set of binary classification problems , however , by predicting whether or not the next letter in the sequence is , where is an arbitrary letter in our alphabet , where we train our model on of the data and test on the final .we apply a standard tool in binary classification problems : the roc curve , which tells us how often we correctly versus incorrectly predict a user quits , given a particular thresholding of the probabilities , e.g. , , a user quits , otherwise they do not .we then calculate the area under the roc curve ( auc ) , which is equivalent to the wilcoxon rank - sum test .because of this equivalence , the limit variation in the auc has a known form , however , we use bootstrapping of the testing data to non - parametrically find the errors of auc values . finally , we take the mean of auc values across all , weighted by the frequency of across all aucs , in order to measure the model s overall efficacy in predicting players scores or when they leave the game .how much mastery of the game do players achieve ? 
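the evaluation step can also be summarized in a short python sketch: score differences are mapped to a small alphabet, and the model's one-versus-rest predictions are scored by an auc averaged with the observed symbol frequencies as weights; the binning thresholds, the symbol names and the use of scikit-learn's roc_auc_score are illustrative choices of ours.

import numpy as np
from sklearn.metrics import roc_auc_score

def to_symbol(score_diff, lower=0.0, upper=7000.0):
    # map the score difference from the previous game to a discrete symbol;
    # the quit symbol 'q' is appended separately at the end of each session
    if score_diff < lower:
        return 'poor'
    return 'good' if score_diff < upper else 'very good'

def weighted_auc(true_symbols, predicted_probs, alphabet):
    # one-versus-rest auc for each symbol, averaged with the observed symbol
    # frequencies as weights; predicted_probs[s] is the model's probability,
    # at every step of the test sequence, that the next symbol equals s
    true_symbols = np.asarray(true_symbols)
    total, weight_sum = 0.0, 0.0
    for s in alphabet:
        y_true = (true_symbols == s).astype(int)
        if 0 < y_true.sum() < len(y_true):          # both classes present
            w = y_true.mean()
            total += w * roc_auc_score(y_true, predicted_probs[s])
            weight_sum += w
    return total / weight_sum if weight_sum > 0 else float('nan')

bootstrapping the test sequences and recomputing weighted_auc gives the confidence intervals used in the results.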
as defined earlier , a player s _ success _ is the median score of his or her three best games after the first three games are removed ( or as many as are available if less than three ) .a player s _ talent _ , or initial skill in the game , is the median score of the first three games played .overall , the correlation between success and talent for all players is ( ) . among players with the same skill ,talent is only weakly correlated with success . for the initially least skilled players ( bottom quartile by talent ) ,the correlation between success and talent is ( ) .the correlation for the second and third quartiles is and , respectively ( ) .the correlation is highest for players who are most skilled initially ( top quartile by talent ) , ( ) . how does practice repeated game play affect performance ?figure [ fig : score - vs - index - quartiles ] shows the evolution of performance ( average score ) over the course of a session among players of similar skill ( grouped by talent ) .lines represent sessions of different length , from 4 games to 15 games played in the session .there were 242k such sessions ( out of the total 990k ) , representing 1.4 m games , or approximately half of the total 3 m games .the figures reveal interesting trends .first , performance generally increases with the number of games played , reflecting the benefits of practice .second , eventual performance depends on skill : the most talented players ( top quartile ) have a better score , on average , on their very first game of a session than the least talented players ( bottom quartile ) have after practice . while the plot reflects performance averaged over all player sessions , these differences ,also noted by stafford & dewar , remain strong when only the player s first session is considered ( data not shown ) .finally , the very last game of a session has an abnormally high score , on average . aside from thislast game , performance curves for sessions of different lengths within the same population overlap , suggesting that we properly captured the underlying behavior .to check robustness and rule out simpson s paradox , we repeat the analysis on randomized data , where the indices of the games within each session are shuffled .figure [ fig : score - vs - index - quartiles - shuffled - indices ] shows the resulting performance curves : performance no longer depends on the order the games within the session are played .this lends support for the claim that players performance improves with practice .however , we can not rule out alternate , if unlikely , explanations , such as the game designed to become progressively easier to play .another notable observation about fig .[ fig : score - vs - index - quartiles - shuffled - indices ] is that performance curves are stacked , with shorter sessions falling bellow longer sessions in terms of average scores .this suggests that players perform worse , on average , during shorter sessions than during longer ones .the high score in the very last game in the session partly explains the performance boost that stafford & dewar attributed to practice spacing .they found that players who split their first ten games over a time period longer than a day had higher scores on average than players who played their first ten games within the same 24-hour time period . 
we explain their observation differently .spacing the games over a time period longer than a day means that the player had to play the games over at least two sessions .those who played their first ten games on the same day may have played multiple sessions , but are more likely to have played just one session .therefore , the higher average performance of the first group may be skewed by the high score of last game of the session compared to the second group . to explicitly measure the impact of practice spacing on performance , we plot the average change in performance as a function of the length of the break between two consecutive games and two consecutive sessions ( fig . [fig : distributed - practice - games - and - sessions - temp ] ) . change in performance between two games is simply the score difference between them .change in performance between two sessions is the difference between the median score of the first three games of the next session and median score of the last three games of the previous session , where we exclude the last game of the session .taking breaks between sessions does indeed lead to higher game scores ( at the 95% confidence level ) .however , the length of the break improves performance weakly for breaks less than a week ; longer breaks result in smaller average improvement. why does the last game of a session have a much higher score ( on average ) ?do players simply choose to stop playing , thus ending the session , after receiving an abnormally high score ? to investigate this hypothesis , we empirically measure the probability to stop playing given the person played games .we assume that this decision is based on a player s performance relative to his or her previous games .the relative performance can be measured as the difference from the mean or median score so far in the session , difference from the previous game s score , etc . , but based on prediction accuracy , score difference from the previous game best models player behavior .figure [ fig : probability - stopping ] shows the quitting probability versus score difference from the previous game , , for different populations of players when split by talent .the quitting rate is simply the number of users who quit at score difference , divided by the number who ever reach over a given range of game indices . for 10k 15k , players are more likely to stop playing ( figure [ fig : score - vs - index - quartiles ] ) , even though large does not correlate directly with any single game feature , such as power - ups .however , a concerted use of power - ups in succession can result in an increase of more than 10k points . surprisingly , for , the quitting rate is not strongly dependent on ( figure [ fig : probability - stopping ] ) .should the designers of such games , then , avoid adding game elements which `` satisfy '' a player and potentially cause them to lose motivation to play ? answering this requires controlled experiments and is beyond the scope of this study .why do some people quit while others continue to play even when doing poorly ( i.e. , obtaining a worse score ) ? these _ persistent _ players may possess a trait psychologists call _ grit _ , which has been linked to high achievement and success . to investigate the impact of persistence on performance , we first need to quantify persistence , which we operationalize as the probability to stop playing after underperforming , i.e. 
, obtaining a score less than the previous game s score .figure [ fig : quitting - vs - quartiles ] shows the average persistence or probability to quit playing after getting a worse score for different quartiles of players as split by success or talent .interestingly , we see a relationship between performance and persistence only in subpopulations of players segmented by success : the more successful players ( those who achieve higher best scores ) are less likely to stop playing after a setback , i.e. , receiving a worse score .in contrast , the relationship does not appear to be very strong when players are split by talent ( their initial skill ) .moreover , these trends do not hold in the probability to quit playing after receiving a better score in a game ( inset in figure [ fig : quitting - vs - quartiles ] ) .this suggests that successful players are not simply ones who play longer ; rather it is their ability to persevere despite lower scores that distinguishes them from less successful players .thus , consistent with psychology research on grit , persistence is associated with high performance and success , and not talent .furthermore , successful players do not simply play longer ; rather their ability to persevere despite lower scores distinguishes them from the less successful players .finally we ask : what mechanism best explains whether a player will quit or improve his or her score ? to help answer this question , we create -machines that model game play : how well users perform , and how likely they are to quit based on past performance . our goal is to find the simplest , most parsimonious model with the highest predictive power .we choose a model with an alphabet of size four : one symbol denotes the state in which players quit the game ( q ) , while the remaining symbols denote performance that is poor " , good " , and very good " .we look at alternate ways to capture performance : 1 .score difference from the previous game , 2 .difference from a player s median score , and , 3 .difference from a player s mean score . in each case, binning the data into the states good " or very good " can be optimized by varying the bin size to maximize model s performance .in addition , the -machine can remember up to past games .due to the decay in the number of games any player plays , we vary only between one and three . to evaluate the model s performance ,we create roc curves from the predictions of each state , find the corresponding area under the curve ( auc ) , and then take the mean weighted by the frequency of each symbol across all aucs , following work by provost & domingos . to maximize the amount of training data the -machine uses , we use of the data for training and reserve for testing .we bootstrap the testing data to determine the confidence intervals of the auc values .we trained the models separately on each quartile of players , split by talent .the results , shown in figure [ fig : auc - weighted - sum - all - quartiles ] , suggest that score difference from a player s previous game ( ) leads to the best overall model , with an average auc of roughly 0.64 . moreover, binning the data using thresholds , or for each respective quartile maximizes prediction accuracy .thus , a player can have poor " ( ) , good " ( ) , and very good " performance ( ) .we further test the importance of the past states , and find that a longer produces a higher auc for sequences made using the score difference between consecutive games . 
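As a concrete (if simplified) companion to the model just described, the sketch below bins score differences into the four-letter alphabet and counts first-order symbol-to-symbol transitions. The threshold and the toy scores are placeholders — the paper tunes the bin boundaries per talent quartile — and a plain Markov count is used only as a stand-in for the ε-machine, which instead merges length-L histories into causal states.

```python
import numpy as np
from collections import Counter

QUIT, POOR, GOOD, VGOOD = "q", "poor", "good", "very_good"

def symbolize(session_scores, theta=600):
    """Map one session's game scores to symbols: a score drop is 'poor',
    a gain below theta is 'good', a gain of at least theta is 'very_good',
    and the end of the session is marked 'q'. theta is a placeholder."""
    diffs = np.diff(session_scores)
    syms = [POOR if d < 0 else (GOOD if d < theta else VGOOD) for d in diffs]
    return syms + [QUIT]

def transition_probs(sessions, theta=600):
    """First-order transition frequencies between consecutive symbols."""
    pair_counts, totals = Counter(), Counter()
    for scores in sessions:
        seq = symbolize(scores, theta)
        for a, b in zip(seq[:-1], seq[1:]):
            pair_counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in pair_counts.items()}

# toy usage with made-up session scores
demo_sessions = [[5200, 4800, 9100, 9050, 17000], [3000, 3500, 2800, 3100]]
print(transition_probs(demo_sessions))
```

Replacing single symbols with length-L windows of symbols gives the longer-history variants whose predictive power is compared below.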
to do so , we test and train on the 4th game index onward ( 255k out of 990k sessions ) to make the roc curves of different comparable however , when we study the roc curve , we find that the points nearly coincide ( see fig . [fig : roc - increasing - history - lengths ] for typical examples of roc plots on the axon game dataset ) , and much of the loss or gain in the auc appears to be explained by lower models having fewer states and therefore fewer points on the curve .the near collapse of these curves may suggest that the most recent game has the strongest effect on user behavior .the number of states for longer models is also much larger ( 4 states for compared to 9 and 21 for and , respectively , in figure [ fig : roc - increasing - history - lengths ] ) .therefore , we focus on the model . of ( a ) ,( b ) , ( c ) or more points , or ( d ) if they quit . ] -machines for players in the ( a ) first , ( b ) second , ( c ) third , and ( d ) fourth quartiles by talent .thicker arrows represent higher transition probabilities , where a transition to a state occurs when the state s associated sequence appears in the data stream .transitions with probability less than 0.1 are omitted for clarity . ]the best overall models are shown in figure [ fig : state - diagram ] , where the thickness of each line is proportional to the transition probability to a new state , such as poor " or `` good . '' not only does the probability of quitting increase when the score increases over a few hundred to a few thousand points , in agreement with figure [ fig : probability - stopping ] , but also that users transition in unexpected ways between states before they eventually quit . for example , players who perform poorly in a game ( the change in score is negative ) are very likely to perform well ( reach the highest scoring state ) in the next game .similarly , there is an unexpected probability to transition from a very good " state in the last game to a poor " state in the next one , which suggests that players undergo periods of score volatility . finally , the transition rates from negative to positive states are greater than the opposite transition rates in several quartiles , which suggests that players tend to improve over time . because this is an empirical study ,there are a number of potential confounding factors , which may affect our conclusions .previous work , e.g. 
, has found that the high payout intervals ( similar to frequent changes in scores ) , high self - reported skill , and even light or sound effects contribute to the motivation to play .similarly , we do not measure axon s perceived arousal , pleasure or dominance , which may explain why some users quickly quit playing and others do not regardless of their score .finally , we can not determine to what degree exhaustion contributes to user behavior .for example , sessions of intense game play may exhaust gamers , and rest between sessions may not improve learning but instead allow users to improve mood , or replenish glucose levels , and therefore improve their score .rest , independent of learning , is known to affect behavior in cognitively demanding tasks .future work is necessary to correct for these effects .we empirically investigated factors affecting performance in an online game using digital traces of activity of many players .the massive size of the data enabled us to investigate sources of individual variability in practice and performance .skilled ( or talented ) players , who score high already in their first games , are more successful overall .however , continued practice improves the scores of all players .we identified a factor , related to grit , which captures the likelihood the player will keep practicing , i.e. , playing the game , even when performing poorly .the more likely the player is to continue playing after a drop in performance , the more successful he or she eventually becomes .however , the ability to persevere and continue practicing is not related to player s initial skill .we modeled this behavior using an -machine and found that the model in which players based their decisions on how well they did compared to their previous game best predicted whether they will continue playing and their performance . surprisingly , when players did very well compared to their last game , they were highly likely to quit , but when they performed poorly , their quitting probability remained low .our analysis relied on identifying and accounting for the sources of heterogeneity in game play data . unless this is done , analysis can fall prey to simpson s paradox , in which false trends can be observed when aggregating over heterogeneous populations .initial skill , or talent , is a major source of behavioral heterogeneity .players who score well on their first games continue to improve and outperform the poorest players. a significant source of heterogeneity is the temporal structure of game play : players have periods , or sessions , of continuous activity with breaks in between . after accounting for sessions , a clearer picture of performanceemerges . while empirical analysis of behavioral datacan not replace controlled experiments , the sheer size of the data allows for the study of individual variability that is not possible with the smaller laboratory experiments .such data can be used to explore alternate hypotheses about behavior , which can then be validated in the laboratory setting . 
moreover ,the types of quantitative methods explored in this paper could be used to predict performance and for psychological and cognitive assessment of individuals from their observed behavior .future human - computer interfaces could continuously observe and predict our behavior , and adapt so as to optimize our performance .this work was supported in part by nsf ( # sma-1360058 ) , aro ( # w911nf-15 - 1 - 0142 ) , and the usc viterbi - india and isi summer internship programs .farzan , r. ; dimicco , j. m. ; millen , d. r. ; dugan , c. ; geyer , w. ; and brownholtz , e. a. 2008 .results from deploying a participation incentive mechanism within the enterprise . in _ chi _ , 563572 .new york , ny , usa : acm . shalizi , c. r. , and klinkner , k. l. 2004 .blind construction of optimal nonlinear recursive predictors for discrete sequences . in chickering , m. , and halpern , j. y. , eds ., _ uai _ , 504511 .arlington , virginia : auai press .
we study the relationship between performance and practice by analyzing the activity of many players of a casual online game. we find significant heterogeneity in the improvement of player performance, measured by score, and address this by dividing players into similar skill levels and segmenting each player's activity into sessions, i.e., sequences of game rounds without an extended break. after disaggregating the data, we find that performance improves with practice across all skill levels. more interestingly, players are more likely to end their session after an especially large improvement, leading to a peak score in the very last game of a session. in addition, success is strongly correlated with a lower quitting rate when the score drops, and only weakly correlated with skill, in line with psychological findings about the value of persistence and "grit": successful players are those who persist in their practice despite lower scores. finally, we train an ε-machine, a type of hidden markov model, and find a plausible mechanism of game play that can predict player performance and quitting the game. our work raises the possibility of real-time assessment and behavior prediction that can be used to optimize human performance.
for an ordinal three - category classification problem , the assessment of the performance of a diagnostic test is achieved by the analysis of the receiver operating characteristic ( roc ) surface , which generalizes the roc curve for binary diagnostic outcomes .the volume under the roc surface ( vus ) is a summary index , usually employed for measuring the overall diagnostic accuracy of the test . under correct ordering, values of vus vary from 1/6 , suggesting the test is no better than chance alone , to 1 , which implies a perfect test , i.e. a test that perfectly discriminates among the three categories .the theoretical construction of the roc surface and vus was introduced for the first time by . in medical studies , the evaluation of the discriminatory ability of a diagnostic testis typically obtained by making inference about its roc surface and vus , based on data from some suitable sample of patients ( or units ) .when the disease status of each patient can be exactly assessed by means of a gold standard ( gs ) test , a set of methods exist to estimate the roc surface and vus of the test in evaluation .see , , and , among others . in practice , however , disease status verification via gs test could be unavailable for all units in the sample , due to the expensiveness and/or invasiveness of the gs test .thus , often , only a subset of patients undergoes disease verification . in such situations , the implementation of the methods discussed in the above mentioned paperscould only be performed on the verified subjects , typically yielding biased estimates of roc surface and vus .this bias is known as verification bias . in order to correct for verification bias, the researchers often assume that the selection for disease verification does not depend on the disease status , given the test results and other observed covariates , i.e. , they assume that the true disease status , when missing , is missing at random ( mar , ) . under this assumption , there exist few methods to get bias corrected inference in roc surface analysis . proposed a nonparametric likelihood based approach to obtain bias corrected estimators for roc surface and vus of an ordinal diagnostic test . in case of continuous diagnostic tests, discussed several solutions based on imputation and re weighting methods , and proposed four verification bias corrected estimators of the roc surface and vus : full imputation ( fi ) , mean score imputation ( msi ) , inverse probability weighting ( ipw ) and semi parametric efficient ( spe ) estimators .however , in some studies the decision to send a subject to verification may be directly based on the presumed subject s disease status , or , more generally , the selection mechanism may depend on some unobserved covariates related to disease ; in these cases , the mar assumption does not hold and the missing data mechanism is called nonignorable ( ni ) . for two - class problems , methods to deal with ni verification bias have been developed , for instance , in .however , the issue of correcting for ni verification bias in roc surface analysis is very scarcely considered in the statistical literature .this motivated us to develop bias corrected methods for continuous diagnostic tests with three class disease status , under a ni missing data mechanism .in particular , in this paper we adopt parametric regression models for the disease and the verification processes , extending the selection model of to match the case of three class disease status. 
then , we use likelihood - based estimators of model parameters to derive four estimators of the vus .consistency and asymptotic normality of the proposed estimators are proved .estimation of their variance is also discussed .the rest of the paper is organized as follows . in section 2, we set the working model and discuss its identifiability .in section 3 we present our proposed bias - corrected vus estimators , along with theoretical results about consistency and asymptotic normality .moreover , variance estimation is also addressed .the results of a simulation study are presented in section 4 .concluding remarks are left to section 5 .suppose we need to evaluate the predictive ability of a new continuous diagnostic test in a context where the disease status of a patient can be described by three ordered categories , `` non diseased '' , `` intermediate '' and `` diseased '' , say .consider a sample of subjects and let , and denote the test result , the disease status and a vector of covariates for each subject , respectively . in this framework , can be modeled as a trinomial random vector , such that is a bernoulli random variable having mean where .hence , represents the probability that a generic subject , classified according to its disease status , belong to the class .we are interested in estimating the vus of the test , say , which is defined as ( ) or , equivalently , where the indices , , refer to three different subjects , and is the indicator function . when the disease status is available for all subjects , a natural nonparametric estimator of is given by however , in many situations not all subjects undergo the verification process , and hence , the disease status is missing in a subset of patients in the study .let be the verification status for the -th subject : if is observed and otherwise .we define the observed data as the set , .when the true disease status is subject to ni missingness , estimators working under the mar assumption can not be applied tout court .our goal is to adjust fi , msi , ipw and spe estimators discussed in to the framework of ni missingness . to deal with ni missing data mechanism , in what follows we extend parametric models adopted in for the two class problem to the three class case .more precisely , with three disease categories , we fix the model for the verification process as follows where and are defined in the previous section , is , in general , an arbitrary working function , and is a set of parameters . here , is the non - ignorable parameter : the missing data mechanism is mar if ; ni , otherwise . as for the disease model ,we employ the multinomial logistic regression for the whole sample , i.e. , where is an arbitrary working function , and is a set of parameters , for . the parameters , with , can be estimated jointly by using a likelihood based approach .it is worth noting that , under ( [ veri : model:2 ] ) , an application of bayes rule gives that therefore , so that , according to ( [ inter : lambda_1 ] ) and ( [ inter : lambda_2 ] ) , and can also be interpreted as log - odds ratios of belonging to class 1 ( instead of class 3 ) and to class 2 ( instead of class 3 ) , respectively , for a verified subject compared to an unverified subject with the same test result and covariates . as in , in our model , for simplicity , we take and , which is a natural choice in practice . 
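For reference, the full-data nonparametric estimator mentioned above — the proportion of (class-1, class-2, class-3) triples whose test results are correctly ordered — can be written as a short sketch. It assumes a continuous test (so ties are ignored) and uses generic array names rather than the paper's notation.

```python
import numpy as np

def vus_full_data(t, d):
    """Nonparametric VUS when every disease status is verified.

    t : (n,) test results; d : (n,) true classes coded 1, 2, 3."""
    t = np.asarray(t, dtype=float)
    d = np.asarray(d)
    t1, t2, t3 = t[d == 1], t[d == 2], t[d == 3]
    lower = (t1[:, None] < t2[None, :]).astype(float)   # I(T_i < T_l)
    upper = (t2[:, None] < t3[None, :]).astype(float)   # I(T_l < T_r)
    ordered = (lower @ upper).sum()                      # correctly ordered triples
    return ordered / (len(t1) * len(t2) * len(t3))
```

A value near 1/6 indicates no more discrimination than chance, while values near 1 indicate near-perfect ordering of the three classes, matching the range quoted earlier.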
for fixed and , the observed distributionis fully determined by the three probabilities , and .it is easy to show that similarly , we have that with and .then , and .it follows that the log - likelihood function can be written as : the estimates , , and can be obtained by maximizing or by solving the score equations where .the above equations are obtained by using the following results ( here is a pair in the set ) , and \frac{\partial}{\partial \tau^\top_{\rho_2 } } \rho_{2i } & = u_i\rho_{2i}(1-\rho_{2i } ) ; & \qquad \frac{\partial}{\partial \tau^\top_{\rho_1 } } \rho_{2i } & = - u_i \rho_{1i}\rho_{2i}. \end{array } \nonumber \ ] ] in this section , we verify that the working model based on ( [ veri : model:2 ] ) , with , and ( [ dise : model ] ) , with , is identifiable . since the log likelihood ( [ lg - like ] ) is fully determined by the three probabilities , and , we have to show that such probabilities are uniquely determined by the parameters for all possible and . for the sake of simplicity , in the remainder of this sectionthe auxiliary covariate is omitted ( actually , we can always view as fixed while varying ) .let be the set of parameters . for given ,we can write let , and , for each .the above expressions , which refer to the quantities characterizing the the log likelihood function ( [ lg - like ] ) , can be rewritten as now , assume that there are two distinct points and ( ) in the parameter space , such that the following equations ( with obvious notation ) hold : for all . by using ( [ iden : rho3 ] ) , the equations ( [ iden : rho1 ] ) and ( [ iden : rho2 ] ) are equivalent to respectively . in ( [ exper : rho1 ] ) and ( [ exper : rho2 ] ) the left hand sides are straight lines .thus , in order to ( [ exper : rho1 ] ) and ( [ exper : rho2 ] ) hold for all , the right hand sides must be constants . if these constants were 0 (because ) , then ( [ iden : rho3 ] ) would no longer hold for and all . alternatively , the right hand sides of ( [ exper : rho1 ] ) and ( [ exper : rho2 ] ) are non - zero constants if .then , as a consequence , ( [ iden : rho3 ] ) still is valid , for and all , eventually if and .this allows us to state that : if , with , then the considered model( with the particular choice for the functions and ) is identifiable , i.e. , the joint probabilities , and are determined by a unique set of parameters . of course , this claim can be easily extended to handle the presence of a covariate vector , .let , for and .it is easy to see , for instance , that hence , we can get , in particular , clearly , we also may consider quantities as then , we observe that similarly , we have so that ( [ org : vus ] ) can be rewritten as equation ( [ org : vus2 ] ) suggests how to build estimators of vus when some disease labels are missing in the sample : we can use suitable estimates to replace the s in ( [ nonp : vus ] ) .therefore , a fi estimator of vus is simply where ( and are the estimated disease probabilities obtained from the disease model ( [ dise : model ] ) . 
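A minimal numpy sketch of this plug-in idea, assuming the matrix of estimated disease probabilities (one row per subject, one column per class) has already been produced by the fitted disease model. It is an illustration of the construction described in words above, not a transcription of the paper's displayed formula; the MSI and IPW variants introduced next follow the same template, with observed labels or inverse-probability weights substituted where appropriate.

```python
import numpy as np

def vus_fi(t, rho_hat):
    """Full-imputation VUS: the disease indicators in the nonparametric
    estimator are replaced, for every subject, by estimated probabilities.

    t       : (n,) test results
    rho_hat : (n, 3) estimated P(D = k | T, A) for k = 1, 2, 3."""
    t = np.asarray(t, dtype=float)
    r1, r2, r3 = rho_hat[:, 0], rho_hat[:, 1], rho_hat[:, 2]
    less = (t[:, None] < t[None, :]).astype(float)   # less[i, l] = I(T_i < T_l)
    a = r1 @ less                                    # a[l] = sum_i r1_i I(T_i < T_l)
    b = less @ r3                                    # b[l] = sum_r r3_r I(T_l < T_r)
    num = np.sum(r2 * a * b)
    den = r1.sum() * r2.sum() * r3.sum()
    return num / den
```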
since =\rho_{ki}$ ] , an alternative fi estimator of vus could be obtained by replacing s in ( [ nonp : vus ] ) with the estimates .unlike fi approach , msi estimator only replace the disease status by the estimate for unverified subjects .define and let be the estimated version with replaced by , and here , , and .such estimates are derived from the verification model ( [ veri : model:2 ] ) .then , the msi estimator of vus is in the ipw approach , instead , each observation in the subset of verified units is weighted by the inverse of the probability that the unit was selected for verification .thus , the ipw estimator of vus is clearly , the estimates also arise from the selection model ( [ veri : model:2 ] ) .the last estimator is the pseudo doubly robust ( pdr ) estimator .we define an estimated version , , is obtained by entering the estimates and in the expression above .then , the pdr estimator of vus is the pdr estimator has the same nature as the spe estimator discussed in under mar assumption .however , under ni missing data mechanism it no longer has the doubly robust property .in fact , correct specification of both the verification model and the disease model is required for the pdr estimator to be consistent .note that all vus estimators basically require maximum likelihood estimates of the parameters , and of the working models ( [ veri : model:2 ] ) and ( [ dise : model ] ) .let be the nuisance parameter .observe that the proposed vus estimators can be found as solutions of appropriate estimating equations ( solved along with the score equations ) .the estimating functions for fi , msi , ipw and pdr estimators have generic term ( corresponding to a generic triplet of sample units ) , respectively , in the following , we will use the general notation , where the star stands for fi , msi , ipw and pdr .recall that the nuisance parameter is estimated by maximizing the log likelihood function ( [ lg - like ] ) .let be the subject s contribution to the score function , and the fisher information matrix for . to give general theoretical results , we assume standard regularity conditions , which ensure consistency and asymptotic normality of the maximum likelihood estimator .let be the true vus value , and the true value of .we also assume that : 1 . the u process is stochastically equicontinuous , where and 2 . is differentiable in , and ; 3 . and converges uniformly ( in probability ) to and , respectively .then , we prove consistency and asymptotic normality of the proposed estimators . [ thrm:1 ] suppose that conditions ( c1)(c3 ) hold , along with standard regularity conditions for the likelihood function ( as those given by ) . under the verification model ( [ veri : model:2 ] ) and the disease model ( [ dise : model ] ) , .we can show that ( see the appendix ) .then , and , by condition ( c2 ) and an application of implicit function theorem , there exists a neighborhood of in which a continuously differentiable function , , is uniquely defined such that and .since the maximum likelihood estimator is consistent , i.e. , , we have that . 
on the other hand , and condition ( c3 ) implies that .thus , .next we establish the asymptotic normality of the estimators .[ thrm:2 ] suppose the conditions in theorem [ thrm:1 ] are satisfied .if the verification model ( [ veri : model:2 ] ) and the disease model ( [ dise : model ] ) hold , then where the star indicates fi , msi , ipw , pdr , and is a suitable value .we have since , we get \nonumber \\ & & + \ : \sqrt{n}\left\ { e_*(\hat{\mu}_{*},\hat{\xi } ) - e_*(\mu_0,\xi_0)\right\ } + \sqrt{n}g_{*}(\mu_0,\xi_0 ) \nonumber.\end{aligned}\ ] ] condition ( c1 ) implies that the first term in right hand side of the last identity is . using the taylor expansion , we have it is straightforward to show that by standard results on the limit distribution of u - statistics ( * ? ? ?* theorem 12.3 , chap .12 ) , where is the projection of onto the set of all statistics of the form , for and . for the maximum likelihood estimator , we can write ^{-1}\sum_{i=1}^{n}\mathcal{s}_i(\xi_0 ) + o_p(1 ) = \frac{1}{\sqrt{n}}\mathcal{i}(\xi)^{-1 } \sum_{i=1}^{n}\mathcal{s}_i(\xi_0 ) + o_p(1 ) . \nonumber \ ] ] hence , from ( [ taylor ] ) , \label{qi } \\ & = & o_p(1 ) + \frac{1}{\sqrt{n}}\sum_{i=1}^{n}q_{i,*}(\mu_0,\xi_0 ) = o_p(1 ) + \frac{1}{\sqrt{n } } q_*(\mu_0,\xi_0 ) .\nonumber\end{aligned}\ ] ] note that the observed data are i.i.d , then are also i.i.d .in addition , we easily show that \nonumber.\end{aligned}\ ] ] therefore , , and by the central limit theorem .it follows that where it is worth noting that the assumed regularity conditions for the likelihood and condition ( c1)(c3 ) hold in our working model , which is based on ( [ veri : model:2 ] ) , with , and ( [ dise : model ] ) , with . under condition ( c3 ), a consistent estimator of can be obtained as where are the estimates of the disease probabilities , for .specifically , , , and . according to ( [ qi ] ) , we have that in addition , for fixed , we also have that therefore , the quantity could be obtained as the hessian matrix of the log likelihood function at . in order to compute , we have to get the derivatives , , , , and . in section[ sec : para_est ] , we obtain \frac{\partial}{\partial \lambda_1 } \pi_{01i}(\lambda , \tau_\pi ) & = 0 ; & \frac{\partial}{\partial \lambda_2 } \pi_{01i}(\lambda , \tau_\pi ) & = \pi_{01i}(1 - \pi_{01i } ) ; \\ [ 16pt ] \frac{\partial}{\partial \lambda_1 } \pi_{00i}(\lambda , \tau_\pi ) & = 0 ; & \frac{\partial}{\partial \lambda_2 } \pi_{00i}(\lambda , \tau_\pi ) & = 0 .\end{array } \nonumber\ ] ] and where belongs to the set .also , we have \frac{\partial}{\partial \tau^\top_{\rho_2 } } \rho_{2i}(\tau_\rho ) & = u_i\rho_{2i}(1 - \rho_{2i } ) ; & \frac{\partial}{\partial \tau^\top_{\rho_1 } } \rho_{2i}(\tau_\rho ) & = - u_i \rho_{1i}\rho_{2i}. 
\end{array } \nonumber\ ] ] moreover , with .then , recall that after some algebra , we get \nonumber , \\\frac{\partial}{\partial \lambda_2 } \rho_{1(0)i}(\xi ) & = & \frac{1}{z^2 } \rho_{1i}\rho_{2i}\pi_{01i}(1 - \pi_{01i } ) ( 1 - \pi_{10i } ) \nonumber , \\\frac{\partial}{\partial \tau_\pi^\top } \rho_{1(0)i}(\xi ) & = & -\frac{u_i}{z^2 } \rho_{1i}(1 - \pi_{10i } ) \left\ { \rho_{2i}(1 - \pi_{01i})(\pi_{10i } - \pi_{01i } ) + \rho_{3i}(1 - \pi_{00i})(\pi_{10i } - \pi_{00i})\right\ } \nonumber , \\ \frac{\partial}{\partial \tau_{\rho_1}^\top } \rho_{1(0)i}(\xi ) & = & \frac{u_i}{z^2 } \rho_{1i } ( 1 - \pi_{10i } ) \left\ { \rho_{2i}(1 - \pi_{01i } ) + \rho_{3i}(1 - \pi_{00i } ) \right\ } \nonumber ,\\ \frac{\partial}{\partial \tau_{\rho_2}^\top } \rho_{1(0)i}(\xi ) & = & -\frac{u_i}{z^2 } \rho_{1i}\rho_{2i } ( 1 - \pi_{10i } ) ( 1 - \pi_{01i } ) \nonumber.\end{aligned}\ ] ] finally , we set , and get \nonumber , \\ \frac{\partial}{\partial \tau_\pi^\top } \rho_{2(0)i}(\xi ) & = & -\frac{u_i}{z^2 } \rho_{2i}(1 - \pi_{01i } ) \left\ { \rho_{1i}(1 - \pi_{10i})(\pi_{01i } - \pi_{10i } ) + \rho_{3i}(1 - \pi_{00i})(\pi_{01i } - \pi_{00i})\right\ } \nonumber , \\ \frac{\partial}{\partial \tau_{\rho_1}^\top } \rho_{2(0)i}(\xi ) & = & -\frac{u_i}{z^2 } \rho_{1i}\rho_{2i } ( 1 - \pi_{10i } ) ( 1 - \pi_{01i } ) \nonumber , \\\frac{\partial}{\partial \tau_{\rho_2}^\top } \rho_{2(0)i}(\xi ) & = & \frac{u_i}{z^2 } \rho_{2i } ( 1 - \pi_{01i } ) \left\ { \rho_{1i}(1 - \pi_{10i } ) + \rho_{3i}(1 - \pi_{00i } ) \right\ } \nonumber.\end{aligned}\ ] ] the derivative can be computed by using the fact that .in this section , we provide empirical evidence , through simulation experiments , on the behavior of the proposed vus estimators in finite samples .the number of replications in each simulation experiment is set to be 1000 . in the study, we consider two scenarios which correspond to quite different values of the true vus . for both scenarios , we fix three sample sizes : 250 , 500 and 1500 .in the first scenario , for each unit , we generate the test result and a covariate from a bivariate normal distribution , the disease status is generated according to model ( [ dise : model ] ) with and .then , the verification label is obtained according to model ( [ veri : model:2 ] ) with and . under such data generating process , , , , andthe verification rate is roughly .the true vus value is . in the second scenario ,we generate the test result and the covariate from independent normal distributions . specifically , and .the disease status is generated according to model ( [ dise : model ] ) with and .then , is obtained according to model ( [ veri : model:2 ] ) with and . under thissetting , , , , and the verification rate is roughly .the true vus value is .table [ tab : result1 ] contains monte carlo means , monte carlo standard deviations and estimated standard deviations for the proposed vus estimators ( fi , msi , ipw , pdr ) in the two considered scenarios , at the chosen sample sizes .the table also reports the empirical coverages of the 95% confidence intervals for the vus , obtained through the normal approximation approach applied to each estimator . to make a comparison , table [ tab : result1 ] also gives the results for the semiparametric efficient estimator ( spe ) discussed in , whose realizations are obtained , in all experiments , under the mar assumption , i.e. 
, by setting in model ( [ veri : model:2 ] ) .the comparison allows us to evaluate the possible impact of an incorrect hypothesis mar on the most robust estimator among those , fi , msi , ipw and spe , which are built to work under ignorable missing data mechanism ( see ) .[ table 1 about here ] overall , simulation results are consistent with our theoretical findings and show the usefulness of the proposed estimators , which also arises from the comparison with the spe estimator used improperly .the results also show a good behavior of the estimated standard deviations , which are generally close to the corresponding monte carlo values .in general , fi and msi estimators seem to be more efficient than ipw and pdr estimators . however , for all estimators , acceptable bias levels and sufficiently accurate associated confidence intervals seem to require a large sample size ( at least 500 , and , prudently , even higher ) . this issue of poor accuracy has already been noted by several authors , including , in the context of two - class classification problems . in our experience , the trouble appears to arise because of a bad behavior of the maximum likelihood estimates in the verification and disease models .if the sample size is not large enough , the data do not contain enough information to effectively estimate the parameters , and .it seems particularly difficult to get good estimates of nonignorable parameters .table [ tab : mle ] , giving the monte carlo means for the maximum likelihood estimators of the elements of and for the three considered sample sizes , allows us to look at the bias of the estimators .more importantly , figure 1 and figure 2 ( which refer to scenario i and ii , respectively ) graphically depict values of the estimates of and obtained in the thousand replications , for each sample size .the plots clearly show the great variability of the maximum likelihood estimates at lower sample sizes , with many values dramatically different from the corresponding target values . with larger sample size, this phenomenon almost completely vanishes , the maximum likelihood estimators behave pretty well , with a positive impact on the behavior of the vus estimators . [table 2 about here ] [ figure 1 about here ] [ figure 2 about here ]in this paper , we have proposed four bias corrected estimators of vus under ni missing data mechanism .the estimators are obtained by a likelihood based approach , which uses the verification model ( [ veri : model:2 ] ) together with the disease model ( [ dise : model ] ) .the identifiability of the joint model is proved , and hence , the nuisance parameters can be estimated by maximizing the log likelihood function or solving the score equations .consistency and asymptotic normality of the proposed fi , msi , ipw and pdr estimators are established , and variance estimation is discussed .the proposed vus estimators are pretty easy to implement and require the use of some numerical routine to maximize the log likelihood function ( or to solve the score equations ) .our simulation results show their usefulness , whilst confirming the evidence emerging in the two class case , according to which a reasonable large sample size is necessary to make sufficiently accurate inference . 
in practice , among fi , msi , ipw and pdr estimators , we would reccommend fi and msi estimators thanks to their greater efficiency .the poor accuracy problem seems to be related to an intrinsic difficulty of the maximum likelihood method in providing accurate estimates of the parameters of the disease and verification models , in particular of the nonignorable parameters .overcoming this drawback is a stimulating challenge and deserves further investigation .99 baker , s. g. ( 1995 ) .evaluating multiple diagnostic tests with partial verification ._ biometrics_. * 51 * , 330 - 337 .chi , y. y. and zhou , x. h. ( 2008 ) .receiver operating characteristic surfaces in the presence of verification bias ._ j. r. stat .soc . ser .c. appl . stat._. * 57 * , 1 - 23 .fluss , r. , reiser , b. , and faraggi , d. ( 2012 ) .adjusting roc curve for covariates in the presence of verification bias ._ j. statist .plann . inference_.* 142 * , 1 - 11 .fluss , r. , reiser , b. , faraggi , d. , and rotnitzky , a. ( 2009 ) .estimation of the roc curve under verification biasj._. * 51 * , 475 - 490 .kang , l. and tian , l. ( 2013 ) .estimation of the volume under the roc surface with three ordinal diagnostic categories. _ comput .statist . data anal._. * 62 * , 39 - 51 .li , j. and zhou , x. h. ( 2009 ) .nonparametric and semiparametric estimation of the three way receiver operating characteristic surface. _ j. statist .plann . inference_. * 139 * , 4133 - 4142 .little , r. j. and rubin , d. b. ( 2002 ) ._ statistical analysis with missing data_. john wiley & sons .liu , d. and zhou , x. h. ( 2010 ) . a model for adjusting for nonignorable verification bias in estimation of the roc curve and its area with likelihood - based approach ._ biometrics_. * 66 * , 1119 - 1128 .nakas , c. t. and yiannoutsos , c. t. ( 2004 ) .ordered multiple - class roc analysis with continuous measurements .med._. * 23 * , 3437 - 3449 .newey , w. k. and mcfadden , d. ( 1994 ) . large sample estimation and hypothesis testing ._ handbook of econometrics_. * 4 * , 21112245 .rotnitzky , a. , faraggi , d. , and schisterman , e. ( 2006 ) . doubly robust estimation of the area under the receiver - operating characteristic curve in the presence of verification bias ._ j. amer .statist . assoc._. * 101 * , 1276 - 1288 .scurfield , b. k. ( 1996 ) .multiple - event forced - choice tasks in the theory of signal detectability . _ j. math .psych._. * 40 * , 253 - 269 . to duc , k. , chiogna , m. , and adimari , g. ( 2016 ) .bias - corrected methods for estimating the receiver operating characteristic surface of continuous diagnostic tests .j. stat._. in press .van der vaart , a. w. ( 2000 ) ._ asymptotic statistics_. cambridge university press .xiong , c. , van belle , g. , miller , j. p. , and morris , j. c. ( 2006 ) . measuring and estimating diagnostic accuracy when there are three ordinal diagnostic groups .med._. * 25 * , 1251 - 1273 .zhou , x. h. and castelluccio , p. ( 2003 ) .nonparametric analysis for the roc areas of two diagnostic tests in the presence of nonignorable verification bias ._ j. statist .plann . inference_. * 115 * , 193 - 213 .zhou , x. h. and castelluccio , p. ( 2004 ) .adjusting for non - ignorable verification bias in clinical studies for alzheimer s disease ._ stat . med._.* 23 * , 231 - 230 .zhou , x. h. and rodenberg , c. a. ( 1998 ) .estimating an roc curve in the presence of nonignorable verification bias .simulation comput._. 
* 27 * , 273 - 285 .here , we show that the estimating functions are unbiased under the working disease and verification models . recall that . * fi estimator .we have hence , from ( [ org : vus2 ] ) . *msi estimator .consider .we have \nonumber \\ & = & { \mathrm{pr}}(v_i = 1|t_i , a_i){\mathbb{e}}\left(d_{ki}|v_i = 1 , t_i , a_i\right ) \nonumber\\ & & + \ : { \mathrm{pr}}(v_i = 0|t_i , a_i){\mathbb{e}}\left(\rho_{k(0)i}(\xi_0)|v_i = 0 , t_i , a_i \right ) \nonumber \\ & = & { \mathrm{pr}}(v_i = 1|t_i , a_i){\mathrm{pr}}(d_{ki } = 1|v_i = 1 , t_i , a_i ) \nonumber\\ & & + \ : { \mathrm{pr}}(v_i = 0|t_i , a_i){\mathrm{pr}}(d_{ki } = 1|v_i = 0 , t_i , a_i ) \nonumber\\ & = & { \mathrm{pr}}(d_{ki } = 1|t_i , a_i ) = \rho_{ki } \nonumber.\end{aligned}\ ] ] therefore , \nonumber \\ & = & { \mathbb{e}}\left\ { \rho_{1i}\rho_{2\ell } \rho_{3r}(i_{i\ell r } - \mu_0 ) \right\}. \nonumber\end{aligned}\ ] ] * ipw estimator . in this case , thus , * pdr estimator .\nonumber \\ & = & { \mathbb{e}}\bigg\{d_{ki } { \mathbb{e}}\left(\frac{v_i}{\pi_i(\xi_0 ) } \bigg | d_{1i } , d_{2i } , t_i , a_i\right ) \nonumber \\ & & - \ : \rho_{k(0)i}(\xi_0 ) { \mathbb{e}}\left(\frac{v_i}{\pi_i(\xi_0 ) } - 1 \bigg | d_{1i } , d_{2i } , t_i , a_i\right ) \bigg| t_i , a_i \bigg\ } \nonumber \\ & = & { \mathbb{e}}(d_{ki } | t_i , a_i ) = \rho_{ki } \nonumber.\end{aligned}\ ] ] hence , \nonumber \\ & = & { \mathbb{e}}\left\ { \rho_{1i}\rho_{2\ell } \rho_{3r}(i_{i\ell r } - \mu_0 ) \right\}. \nonumber\end{aligned}\ ] ] .monte carlo means ( mcmean ) , relative bias ( bias ) , monte carlo standard deviations ( mcds ) and estimated standard deviations ( esd ) for the proposed vus estimators , and the spe estimator under mar assumption .cp denotes monte carlo coverages for the 95% confidence intervals , obtained through the normal approximation approach applied to each estimator . [cols="^,^,<,>,>,>,^,^",options="header " , ]
the volume under the receiver operating characteristic surface (vus) is useful for measuring the overall accuracy of a diagnostic test when the possible disease status belongs to one of three ordered categories. in medical studies, the vus of a new test is typically estimated through a sample of measurements obtained from a suitable sample of patients. however, in many cases, only a subset of such patients has the true disease status assessed by a gold standard test. in this paper, for a continuous-scale diagnostic test, we propose four estimators of the vus which accommodate nonignorable missingness of the disease status. the estimators are based on a parametric model which jointly describes the disease and the verification processes. identifiability of the model is discussed. consistency and asymptotic normality of the proposed estimators are shown, and variance estimation is discussed. the finite-sample behavior is investigated by means of simulation experiments. keywords: diagnostic test, nonignorable missing data mechanism, roc analysis.
power system is the name given to a collection of devices that generate , transmit , and distribute energy to consuming units such as residential buildings , factories , and street lighting .abusing language , we use the terms power and energy interchangeably , as typically done in the power systems literature . excluding a small portion of generating units , such as solar cells and fuel cells, we can think of power generators in a power system as electromechanical systems .natural sources , such as the chemical energy trapped in fossil fuels , are used to generate mechanical energy , which is then converted into electrical energy .when power systems are working in normal operating conditions , i.e. , in _ steady - state _ , the generators satisfy two main conditions : their rotors rotate with the same velocity , which is also known as _ synchronous velocity _ , and the generated voltages are sinusoidal waveforms with the same frequency . keeping the velocity of the generators at the synchronous velocity and the terminal voltages at the desired levels is called _ frequency stability _ and _ voltage stability _ , respectively .when all the generators are rotating with the same velocity , they are synchronized and the relative differences between the rotor angles remain constant .the ability of a power system to recover and maintain this synchronism is called _ rotor angle stability_. _ transient stability _ , as defined in , is the maintenance of rotor angle stability when the power system is subject to large disturbances .these large disturbances are caused by faults on the power system such as the tripping of a transmission line . in industry ,the most common way of checking transient stability of a power system is to run extensive time - domain simulations for important fault scenarios .this way of developing action plans for the maintenance of transient stability is easy and practical _ if _ we know all the important " scenarios that we need to consider .unfortunately , power systems are large - scale systems and the number of possible scenarios is quite large . since an exhaustive search of all of these scenarios is impossible , power engineers need to guess the important cases that they need to analyze .these guesses , as made by humans , are prone to errors .moreover , time - domain simulations do not provide insight for developing control laws that guarantee transient stability .because of these reasons , additional methods are required for transient stability analysis .currently , the methods that do not rely on time - domain simulations can be collected in two different groups : direct methods and automatic learning approaches .the latter , automatic learning approaches , are based on machine learning techniques . in this work ,we do not consider automatic learning approaches and we focus on direct methods .direct methods are based on obtaining lyapunov functions for simple models of power systems . to the best of our knowledgethe origin of the idea can be found in the 1947 paper of magnusson which uses the concept of transient energy " which is the sum of kinetic and potential energies to study the stability of power systems . in 1958 ,aylett , assuming that a two - machine system can be represented by the dynamical equation showed that there exists a separatrix dividing the two - dimensional plane of and into two regions .one of the regions is an invariant set with respect to the two - machine system dynamics , i.e. 
, if the initial condition is in this set , trajectories stay inside this set for all future time .aylett concluded that in order to check the stability of the system , we only need to check whether the state is in the invariant set or not .aylett also characterized the separatrix that defines the invariant set and extended the results from the two - machine case to the three - machine case in the same monograph .although the term lyapunov function " was not stated explicitly in his work , aylett s work used lyapunov - based ideas .some of the other pioneering works on direct methods include szendy , gless , el - abiad and nagaphan , and willems . the work based on direct methods mainly focused on finding better lyapunov functions that work for more detailed models and provide less conservative results .these lyapunov functions are used to estimate the region of attraction of the stable equilibrium points that correspond to desired operating conditions .the stability of a power system after the clearance of a fault can then be tested by determining if the post - fault state belongs to the desired region of attraction . for further informationwe refer the reader to .there are several problematic issues with direct methods .the first problem is the set of assumptions used to construct these models .the models used for transient stability analysis implicitly assume that the angular velocities of the generators are very close to the synchronous velocity . in other words , it is assumed that the system is very close to desired equilibrium and the models developed based on this assumption are used to analyze the stability of the same equilibrium .the standard answer given to this objection is the following : the models that are used in transient stability studies are used only for the first swing " transients and for these transients the angular velocities of the generators are very close to the synchronous velocity .unfortunately , in real world scenarios large swings need to be considered . citing the post mortem report on the august 14 , 2003 blackout in canada and the northeast of the united states ,`` the large frequency swings that were induced became a principal means by which the blackout spread across a wide area '' .using models based on `` first swing '' assumptions to analyze cases like the august 14 , 2003 blackout does not seem reasonable .the second problem is that the models used for transient stability analysis , again implicitly , pose certain assumptions on the grid .the transmission lines are modeled as impedances and the loads are either modeled as impedances or as constant current sources .these modeling assumptions are used to eliminate the internal nodes of the network via a procedure called kron reduction . the resulting network after kron reduction is a strongly connected network .every generator is connected to every other generator via transmission lines modeled as a series connection of an inductor and a resistor . 
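For readers who want the mechanics of the elimination step just described, Kron reduction of a phasor-domain nodal admittance matrix amounts to a Schur complement. The sketch below is a generic illustration under that phasor assumption — the very assumption questioned later — and the node partition is hypothetical.

```python
import numpy as np

def kron_reduce(Y, keep):
    """Eliminate interior (non-generator) nodes from a complex nodal
    admittance matrix Y, returning the reduced matrix over the nodes in
    'keep' (typically the generator buses). Schur complement:
    Y_red = Y_kk - Y_ke Y_ee^{-1} Y_ek."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(Y.shape[0]), keep)
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)
```

The reduced matrix is in general dense, which is why every remaining generator appears connected to every other one; discarding its real part is what it means to neglect the resistances, the step discussed next.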
after this reduction process, the resistances in the reduced grid are neglected .the fundamental reason behind the neglect of the resistances lies in the strong belief , in the power systems community , about the non - existence of lyapunov functions when these resistances are not neglected .this belief stems from the paper which asserts the non - existence of global lyapunov functions for power systems with losses in the reduced power grid model .it is further supported by the fact that the lyapunov functions that the power systems community has developed contain path - dependent terms unless these resistances are neglected .the reader should note that the resistors here represent both the losses on the transmission lines _ and _ the loads .hence this assumption implies that there is no load in the grid ( other than the loads modeled as current injections ) , which is not a reasonable assumption .in addition to these problems that have their origin in neglecting the resistances on the grid , the process of constructing these reduced models , i.e. kron reduction , can only be performed for a very restrictive class of circuits _ unless _ we assume that all the waveforms in the grid are sinusoidal . in other words , in order to perform this reduction process for arbitrary networks , we need to use phasors , which in turn requires that all the waveforms in the grid are sinusoidals and every generator in the power grid is rotating with the same velocity .this assumption is not compatible with the study of transients . despite the long efforts to obtain control laws for power systems with non - negligible transfer conductances , results only appeared in the beginning of the century .for the single machine and the two machine cases , a solution , under restrictive assumptions , is provided in . in the same work ,the existence of globally asymptotically stabilizing controllers for power systems with more than two machines is also proved but no explicit controller is suggested .an extension of the results in to structure preserving models can be found in . to the best of our knowledge , the problem of finding explicit globally asymptotically stabilizing controllers for power systems with non - negligible transfer conductances and more than two generators has only been recently solved in .although a solution has been offered for an important long - lasting problem , the models that are used in are still the traditional models that we want to avoid in our work .there are also some recent related results on synchronization of kuramoto oscillators . if the generators are taken to be _ strongly overdamped _, these synchronization results can be used to analyze the synchronization of power networks .the synchronization conditions obtained in can also be used in certain micro - grid scenarios . in this paper, we provide results that do not require generators to be strongly overdamped .all the previously described methods use classical models for power systems .they are only valid when the generator velocities are very close to the synchronous velocity . in this paper , we abandon these models and use port - hamiltonian systems to model power systems from first principles . 
as already suggested in , a power system can be represented as the interconnection of individual port - hamiltonian systems .there are several advantages of this approach .first of all , we have a clear understanding of how energy is moving between components .secondly , we do not need to use phasors .thirdly , we do not need to assume all the generator velocities to be close to the synchronous velocity . finally , using the properties of port - hamiltonian systems, we can easily obtain the hamiltonian of the interconnected system , which is a natural candidate for a lyapunov function .a similar framework , based on passivity , is being used in a research project on the synchronization of oscillators with applications to networks of high - power electronic inverters .we first obtain transient stability conditions for generators in isolation from a power system .these conditions show that as long as we have enough dissipation , there will be no loss of synchronization . in port hamiltonian framework is also used to derive sufficient conditions for the stability of a single generator .the techniques used in rely on certain integrability assumptions that require the stator winding resistance to be zero . in contrast , our results hold for non - zero stator resistances .moreover , while in it is assumed that synchronous generators have a single equilibrium , we show in this paper that generators have , in general , 3 equilibria and offer necessary and sufficient conditions on the generator parameters for the existence of a single equilibrium . with the help of useful properties of port - hamiltonian systems , we obtain sufficient conditions for the transient stability of the interconnected power system from the individual transient stability conditions for the generators .in addition to these sufficient conditions , which were also reported in , we provide a deeper discussion on the modeling of synchronous generators and we also explain how to relax the sufficient conditions with the help of facts devices .our results are important contributions for several reasons .firstly , we do not use the previously discussed questionable assumptions . without these assumptions, we can apply our conditions to realistic scenarios including cases with large frequency swings .secondly , our results relate dissipation with transient stability .this transparent relation is hard to see in the classical framework due to shadowing assumptions .thirdly , we exploit compositionality to tame the complexity of analyzing large - scale systems .we propose simple conditions that can be independently checked for each generator without the need to construct a dynamical model for the whole power system .finally , extending our framework to more complex models is easier because we use port - hamiltonian models for the individual components .this flexibility will be helpful to design future control laws for the generators .we denote the diagonal matrix with diagonal elements by and an by matrix of zeros by .the vector is denoted by , where is its element .the by identity matrix is denoted by .we say that is positive semidefinite , denoted by , if for all .if , in addition to this , we also have only if , we call positive definite , denoted by . 
a matrix is negative semidefinite ( definite ) , denoted by ( ) , if and only if is positive semidefinite ( definite ) .the gradient of a scalar field with respect to a vector is given by note that the gradient is assumed to be a _column _ vector .consider the affine control system where , , is a manifold and is a compact set .the affine control system has a port - hamiltonian representation if there exist smooth functions and satisfying and for all , and there exists a smooth function , which is called the hamiltonian , such that can be written in the form the hamiltonian can be thought of as the total energy of the system .the output of the port - hamiltonian representation of is given by if we take the time derivative of the hamiltonian , we obtain the term in represents the power supplied to the system .therefore , property states that the rate of increase of the hamiltonian is less than the power supplied to the system .we refer to for further details on port - hamiltonian systems .in this section , we derive the equations of motion for a two - pole synchronous generator from first principles .the first step in this derivation is to identify the hamiltonian , the sum of the kinetic and the potential energy , of a single generator .we then derive a stability condition , using the hamiltonian as a lyapunov function , for the synchronous generator when the terminal voltages are known .although we only consider two - pole synchronous machines , the results in this section can easily be generalized to machines with more than two poles .every synchronous generator consists of two parts : rotor and stator . several torques act on the rotor shaft and cause the rotor to rotate around its axis .explicitly , we can write the torque balance equation for the torques acting on the rotor shaft as follows : where is the rotor angle , is the moment of inertia of the rotor shaft , is the damping coefficient , is the applied mechanical torque and is the electrical torque .the angular velocity of the rotor shaft is .the total kinetic energy of the rotor can be expressed as using the definition of the angular velocity , we can write in the form in the classical power systems literature , the torque balance equation is scaled by . defining , , , and dividing both sides of by a constant value called rated power , the following set of mechanical equationsis obtained : in these equations , the parameters and are assumed to be constant , which implies that is either constant or _ slowly _ changing .equations ( [ eqn : mech1 ] ) and ( [ eqn : mech2 ] ) do not require such assumptions on . there are three identical circuits connected to the stator .these circuits are called _ stator windings _ and they are labeled with letters , and .there are also windings connected to the rotor .these winding are called _field windings_. in this work , we consider a synchronous generator with a single field winding . 
in a cylindrical rotor synchronous generator , which are predominantly used in nuclear and thermal generation units ,the aggregated effect of the field windings can be modeled by a single circuit .hence , the single field winding assumption is reasonable for such generators .we label the single field winding with the letter .the electrical diagram for the phase- stator winding is given in figure [ phaseafig ] .( 4.5,0 ) to [ short , * - ] ( 0,0 ) to [ v , l= ( 0,2 ) to [ r , l= ( 2,2 ) ( 2,2 ) to [ short , - * ] ( 4.5,2 ) ( 4.5,0 ) to [ open , v^>= ( 4.5,2 ) ( 4.1 , 2)to [ short , i_= ( 4,2 ) ; in this diagram , is the flux generated at the phase- winding , is the winding resistance , is the voltage at the terminals of the winding and is the current _ entering _ through the positive pole of the winding terminal .the notation we choose for the current is called the _ motor notation_. one can obtain the _ generator notation _ by replacing with . the diagram for the other phases ( and ) and the field windingcan be obtained by replacing the subscript in the diagram with the corresponding letters . from kirchoff s voltage law, we have for the phase winding .the equations for the phases and can be obtained by replacing subscript with and , respectively . since the stator winding circuits are identical , we have .we can write these equations in the vector form where , , and . in a synchronous generator with a single field winding, we can relate fluxes and currents using the equation where the inductance matrix is obtained from the inductance matrix in by neglecting the saliency terms. we can define the total magnetic energy stored in the windings as and express the electrical equation using as using the total magnetic energy defined in section [ sec : sg : electrical ] , we can explicitly compute the electrical torque in as the hamiltonian for the single generator is the sum of the kinetic and magnetic energies , i.e. , note that does not depend on and and does not depend on . replacing the electrical torque expression in , the equations , and can be written in the form if we define the energy variables we obtain the port - hamiltonian representation of equations - with state , input and : steady state currents and voltages for the phases of the single generator are sinusoidal waveforms . in order to focus on the simpler problem of stability of equilibrium points , we perform a change of coordinates defined by the point - wise linear map with the inverse . in the power systems literature, it is assumed that the generator rotor angles rotate with a speed that is very close to synchronous speed , i.e. , .if we integrate this approximation and assume zero initial conditions , we obtain .when we replace in , the upper -by- matrix of becomes a transformation that maps balanced waveforms with frequency to constant values , also known as _ park s transformation _ . using, we can map -domain currents to -domain currents .note that the field winding current is not affected by the change of coordinates .we define -winding voltages as and -winding fluxes as in a similar fashion .we obtain the hamiltonian in the new coordinates as where is given by equations - can be written in the -domain as where and at the desired steady state operation , the fluxes are constant and is the synchronous velocity .therefore , we can safely disregard and focus on the stability of the equilibria of . 
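The change of coordinates used above, which maps balanced sinusoidal waveforms at the synchronous frequency to constants, can be made concrete with one common (power-invariant) form of Park's transformation. The exact scaling and angle convention adopted in the paper may differ; the sketch below only illustrates the key property that balanced three-phase quantities become constant in the rotating frame.

```python
import numpy as np

def park(theta):
    """Power-invariant Park transformation (one common convention;
    the paper's scaling/angle reference may differ)."""
    c = np.sqrt(2.0 / 3.0)
    return c * np.array([
        [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
        [np.sin(theta), np.sin(theta - 2*np.pi/3), np.sin(theta + 2*np.pi/3)],
        [1/np.sqrt(2),  1/np.sqrt(2),              1/np.sqrt(2)],
    ])

# balanced three-phase currents at the synchronous frequency
omega_s, I = 2 * np.pi * 50.0, 10.0
for t in (0.0, 0.004, 0.008):
    theta = omega_s * t
    i_abc = I * np.array([np.cos(theta),
                          np.cos(theta - 2*np.pi/3),
                          np.cos(theta + 2*np.pi/3)])
    print(np.round(park(theta) @ i_abc, 6))   # constant d-q-0 values
```

Because the transformed currents are constant, the analysis can focus on equilibrium points rather than periodic orbits, which is exactly the simplification exploited in the text.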
from the last row of, we have which can be expressed in terms of currents as : by using the equality that follows from .note that we can always design a control law acting on the field winding terminals by choosing the voltage according to for some and a constant reference value .this controller keeps the field current constant and justifies the following assumption : the field winding current is constant .if we use , and consider the field winding current to be constant , we can express in terms of currents : where and . in this section ,we study the equilibria of a single generator .recall that sinusoidal waveforms in the coordinates are mapped to constant values on the coordinates .therefore , equilibria of - are points rather than sinusoidal trajectories .we can find the equilibrium currents , , and that satisfy - when the voltages across the generator terminals , , , and , are constant and equal to , , and , respectively , by solving the algebraic equations to obtain and the values of are obtained by replacing into the algebraic equation obtained by setting in .this results in a third order polynomial equation in . for any given ,if we choose it is easy to show that one of the solutions of is .therefore , we can always choose a torque value such that for any given steady state inputs and desired synchronous velocity , one of the solutions of the equations is with and given by and , respectively and .note that , in addition to , the equation has two other solutions .for each solution we potentially have an equilibrium point . hence ,in general we have three equilibrium points . by analyzing the coefficients of the polynomial equation it is not difficult to show that the only real solution of is iff where and are obtained by replacing with in and , respectively .inequality is a necessary condition for global asymptotic stability of the equilibrium . in the next section ,we obtain sufficient conditions by identifying constraints on the generator parameters that lead to a global lyapunov function for the equilibrium . in this section , we provide sufficient conditions for the equilibrium point computed in section [ sec : stability : sg ] to be globally asymptotically stable .a natural choice for lyapunov function candidate is the hamiltonian of the single generator .however the minimum of occurs at the origin instead of , where .we shift the minimum of the hamiltonian to by defining a function we call the _ shifted hamiltonian_. explicitly , the shifted hamiltonian is given as we also define the shifted state by .it is easy to check that we have where is the gradient of the hamiltonian with respect to , evaluated at .note that is positive definite and implies , which in turn implies .therefore , in order to prove that is globally asymptotically stable , it is enough to show that .the time derivative of is given by from equation and , we obtain where and we used the equality . from , we know that the term inside parentheses in is equal to zero . hence implies taking the time derivative of the shifted hamiltonian , we get where .note that the last element of is zero since a constant field winding current implies .we can write the first term in the side of as a quadratic function of .explicitly , where we used to eliminate the flux variables . replacing in , we obtain where the eigenvalues of the matrix are , and since we have , i.e. 
, if is negative definite , then .it is easy to check that if holds , then , which implies that the matrix in is negative definite .hence , if holds , we have , which in turn implies that is globally asymptotically stable. we can summarize the preceding discussion in the following result .[ thm1 ] let be an equilibrium point of the single generator , described by equation when we have and .the equilibrium point is globally asymptotically stable if it is useful to express inequality ( [ smcond ] ) in terms of and currents .we know that the and currents are different from the traditional and currents during the transient stage . however , we have at equilibrium .thus , we can replace ( [ smcond ] ) with .note that we are using motor reference directions . in order to find the generator currents , we need to replace and by and , respectively .however , this change in reference directions does not effect .condition relates the total magnetic energy stored on the generator windings at steady state ( left hand side of ) to the dissipation terms and .this relation gives us a set of admissible steady - state currents in -coordinates ( or alternatively , -coordinates ) that lead to global asymptotical stability .one can verify that if inequality holds , then inequality also holds while the converse is not true .this is to be excepted since global asymptotical stability requires a unique equilibrium . in the single machine infinite bus scenario ,a generator is connected to an infinite bus modeling the power grid as a constant voltage source .the analysis of the single machine in this section , which is based on the assumption that the terminal voltages are constant , can also be seen as the analysis of a single machine connected to an infinite bus . in the classical analysis of this scenario , there are multiple equilibrium points and energy based conditions for local stability are obtained .the analysis in this section shows that in fact a single equilibrium exists , under certain assumptions on the generator parameters , and that _ global _ asymptotical stability is also possible .such conclusions are not possible to obtain using the classical models as they are not detailed enough .we consider a multi - machine power system consisting of generators , loads and a transmission grid connecting the generators and the loads .we distinguish between different generators by labeling each variable in the generator model with the subscript .we make the following assumption about the multi - machine power system .the transmission network can be modeled by an asymptotically stable linear port - hamiltonian system with hamiltonian .[asm : network ] concretely , this assumption states that whenever the inputs to the transmission network are zero , can be used as a quadratic lyapunov function proving global asymptotic stability of the origin .although it may appear strong , we note that it holds in many cases of interest .in particular , it is satisfied whenever we use short or medium length approximate models to describe transmission lines in arbitrary network topologies .furthermore , we discuss in remark [ rmk : weak ] how it can be relaxed .we denote the three - phase voltages across the load terminals and currents entering into the load terminals by and , respectively . 
here, we use the letter to distinguish the currents and voltages that correspond to a load from the ones that correspond to a generator .the current entering into the load terminals when we set is denoted by .it follows from the linearity assumption on the transmission network that we can perform an affine change of coordinates so that in the new coordinates we have where and are the input and the output of the port - hamiltonian model of the grid in the new coordinates with shifted hamiltonian , , and .equation represents an incremental power balance " , i.e. , a power balance in the shifted variables .intuitively , it states that the net incremental power supplied by the generators and the loads is equal to the net incremental power received by the transmission grid .we make the following assumption regarding loads .each load is described by one of the following models : * a symmetric three - phase circuit with each phase being an asymptotically stable linear electric circuit ; * a constant current source .[ asm : cload ] the proposed load models are quite simple and a subset of the models used in the power systems literature .it has recently been argued that the increase of dc loads , such as computers and appliances , interfacing the grid through power electronics intensifies the nonlinear character of the loads .however , there is no agreement on how such loads should be modeled .in fact , load modeling is still an area of research .the first class of models in assumption [ asm : cload ] contains the well - known constant impedance model in the power systems literature .constant impedance load models are commonly used in transient stability analysis and can be used to study the transient behavior of induction motors . according to the ieee task force on load representation for dynamic performance , more than half of the energy generated in the united states is consumed by induction motors .this observation , together with the fact that these three - phase induction motors can be modeled as three - phase circuits with each phase being a series connection of a resistor , an inductor and a voltage drop justifies the constant impedance model usage in transient stability studies .the -circuit model suggested for induction motors in is also captured by assumption [ asm : cload ] . in , it is also stated that lighting loads behave as resistors in certain operational regions .this observation also suggests the usage of constant impedances for modeling the aggregated behavior of loads .the second load model in assumption [ asm : cload ] is also common in the power systems literature .any asymptotically stable linear electrical circuit has a unique equilibrium and admits a port - hamiltonian representation with hamiltonian .by performing a change of coordinates , we can obtain a port - hamiltonian system for the shifted coordinates with the shifted hamiltonian satisfying : let us now consider constant current loads . if a load draws constant current from the network we have .this implies and the contribution of the constant current load to the incremental power balance is zero .this observation shows that we can neglect constant current loads since they do not contribute to the incremental power balance .therefore , in the remainder of the paper we only consider the first type of loads in assumption [ asm : cload ] .let be the shifted hamiltonian for generator with respect to the equilibrium point , as defined in section [ sec : stability : sg ] . 
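The shifted Hamiltonian referred to here can be illustrated for a quadratic Hamiltonian. The sketch below uses the standard shift that removes the linear term at the equilibrium, so that the shifted function attains its minimum, with value zero, at the equilibrium point; the matrix Q and the equilibrium are hypothetical values, not parameters from the paper.

```python
import numpy as np

# hypothetical quadratic Hamiltonian H(x) = 0.5 * x^T Q x with Q > 0
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
H     = lambda x: 0.5 * x @ Q @ x
gradH = lambda x: Q @ x

x_star = np.array([1.0, -0.5])          # assumed equilibrium point

def H_shifted(x):
    """Shift that removes the linear term so the minimum of the shifted
    Hamiltonian sits at x_star with value zero."""
    dx = x - x_star
    return H(x) - dx @ gradH(x_star) - H(x_star)

print(H_shifted(x_star))                         # 0.0
print(H_shifted(x_star + np.array([0.1, 0.0])))  # > 0, equals 0.5*dx^T Q dx
```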
from section [ sec : stability: sg ] , we know that for every generator we have where is a matrix obtained by adding subscript to the elements of the matrix given by . using the definitions above , we select our candidate lyapunov function as where is the shifted hamiltonian of the transmission grid that was introduced in assumption [ asm : network ] , section [ sec : mmmodel ] .our objective is to show that the equilibrium for the generators is globally asymptotically stable .note that every equilibrium shares the same synchronous velocity .hence , asymptotical stability of implies that all the generators converge to the synchronous velocity .in addition to synchronize the generators angular velocity we also need to ensure that the currents flowing through the transmission network converge to preset values respecting several operational constraints such as thermal limits of the transmission lines .this will also be a consequence of asymptotical stability of the equilibrium .when this equilibrium is reached , the voltages and currents at the generator terminals are and , respectively .if we now regard the transmission network and the loads as being described by an asymptotically stable linear system driven by the inputs and , we realize that all the voltages and currents in the transmission network and loads will converge to a unique steady state .we assume that such steady state , uniquely defined by and , satisfies all the operational constraints .taking the time derivative of , we obtain where follows from , and follows from , , and . if holds for , then for every by theorem [ thm1 ] . therefore ,if holds for every , we conclude from that this only shows that is negative semi - definite .since all of the hamiltonians that constitute the total hamiltonian have compact level sets , the level sets of are also compact .hence , we can apply la salle s invariance principle to conclude that all the trajectories converge to the largest invariant set contained in the set defined by the left hand side of is a sum of negative definite quadratic terms ( recall that ) and thus only zero when for all .this implies , hence the generator states globally asymptotically converge to if holds .the preceding discussion is summarized in the following result .[ thm2 ] consider a multi - machine power system with generators described by equations with , and loads satisfying assumption [ asm : cload ] interconnected by a transmission network satisfying assumption [ asm : network ] .let be an equilibrium point for the generators that is consistent with all the equations describing the power system .the equilibrium is globally asymptotically stable if holds for all .theorem [ thm2 ] states that in order to check the stability of the multi - machine system , we only need to check a simple condition for each generator in the system .this makes our result compositional in the sense that the complexity of condition is independent of the size of the network .all these conditions are bound together by and that obviously depend on the whole network .however , the computation of the desired steady state currents needs to be performed for reasons other than transient stability and thus are assumed to be readily available .[ rmk : weak ] we note that if assumption [ asm : network ] is weakened from asymptotic stability to stability of the transmission network , the equilibrium is still globally asymptotically stable .however , the voltages and currents in the transmission network are no longer uniquely determined and may 
violate the operational constraints . inequality is a sufficient condition for asymptotic stability .typically , the stator winding resistance for each generator is small and inequality is only satisfied for small steady state currents .however , inequality can be enforced by actively controlling the voltage at the generator terminals using a static synchronous series compensator ( sssc ) , a facts device that is typically used for series compensation of real and reactive power . using a sssc we can introduce voltage drops of , , and at the generator terminals without altering the current .the turn - on and turn - off times for the thyristors in a sccc are at the level of microseconds , small enough to enforce a voltage drop that is a piece - wise constant approximation of , , and .the approximation error can always be reduced by increasing the number of converter valves in the sssc . by repeating the stability analysis in this section , while taking into consideration this new voltage drop , we arrive at the relaxed condition for global asymptotic stability : since the power throughput of facts devices is in the order of megawatts we can choose a value for that is several order of magnitude larger than . therefore , the relaxed inequality ( [ mmconservative2 ] ) allows for large steady - state currents and is widely applicable to realistic examples .in this section we apply our results to the two - generator single - load scenario depicted in figure [ mmdiagram ] .( 0,1.5 ) node[ground ] ( 0,1.5 ) to [ si= ( 0,3 ) ( 6,1.5 ) node[ground ] ( 6,1.5 ) to [ si= ( 6,3 ) ( 6,3 ) to [ short , - * ] ( 6,3 ) ( 0,3 ) to [ tl=,- * ] ( 3,3 ) ( 3,3 ) to [ tl=,- * ] ( 6,3 ) ( 3,3 ) to [ european resistor , l_= ( 3,1.5 ) ( 3,1.5 ) node[ground ] ; the generators are connected to the load via transmission lines with impedances .the load impedance is .we use the generator parameters provided in ( * ? ? ?* table 7.3 ) . using the provided values ,the damping coefficients are selected as mvas and mvas as was done in ( * ? ? ?* example 7.1 ) .the stator winding resistances for the generators are taken to be . using the values for and provided in (* table 7.3 ) , we obtain h and from the equations . since the parameters and can not be obtained from ( * ? ? ?* table 7.3 ) we assume and .the steady state phase- voltages are kv , , kv , and kv .the steady state phase- currents satisfying the circuit constraints are a , a , , and a. the mechanical torques and field winding currents are selected so as to be consistent with these steady - state values .inequality reduces to if the stator winding resistance is zero . since the -axis steady state current is negative for both generators , ( [ bcond ] ) is satisfied and the equilibrium is unique .we now investigate global asymptotic stability for this example .condition does not hold since the winding resistances for the generators are zero and this leads to for . in order to use the relaxed condition ,we connect a static synchronous series compensator ( sccc ) in series with the generator terminals providing the voltage drops , , and in phases , , and , respectively .condition holds for generator if : replacing the generator parameters into this inequality , we obtain m and m . we choose to satisfy these inequalities and provide enough damping .we numerically simulated the dynamics of the circuit in figure [ mmdiagram ] to obtain the transient behavior following the occurrence of a fault . 
violate the operational constraints . inequality is a sufficient condition for asymptotic stability .typically , the stator winding resistance for each generator is small and the inequality is only satisfied for small steady state currents .however , it can be enforced by actively controlling the voltage at the generator terminals using a static synchronous series compensator ( sssc ) , a facts device that is typically used for series compensation of real and reactive power . using an sssc we can introduce voltage drops at the generator terminals without altering the current .the turn - on and turn - off times for the thyristors in an sssc are at the level of microseconds , small enough to enforce a voltage drop that is a piece - wise constant approximation of the desired phase voltages .the approximation error can always be reduced by increasing the number of converter valves in the sssc . by repeating the stability analysis in this section , while taking into consideration this new voltage drop , we arrive at the relaxed condition for global asymptotic stability : since the power throughput of facts devices is in the order of megawatts we can choose a compensation value that is several orders of magnitude larger than the stator resistance . therefore , the relaxed inequality ( [ mmconservative2 ] ) allows for large steady - state currents and is widely applicable to realistic examples .in this section we apply our results to the two - generator single - load scenario depicted in figure [ mmdiagram ] .( 0,1.5 ) node[ground ] ( 0,1.5 ) to [ si= ( 0,3 ) ( 6,1.5 ) node[ground ] ( 6,1.5 ) to [ si= ( 6,3 ) ( 6,3 ) to [ short , - * ] ( 6,3 ) ( 0,3 ) to [ tl=,- * ] ( 3,3 ) ( 3,3 ) to [ tl=,- * ] ( 6,3 ) ( 3,3 ) to [ european resistor , l_= ( 3,1.5 ) ( 3,1.5 ) node[ground ] ; the generators are connected to the load via transmission lines with impedances .the load impedance is .we use the generator parameters provided in ( * ? ? ?* table 7.3 ) . using the provided values ,the damping coefficients are selected as mvas and mvas as was done in ( * ? ? ?* example 7.1 ) .the stator winding resistances for the generators are taken to be . using the values for and provided in (* table 7.3 ) , we obtain h and from the equations . since the parameters and can not be obtained from ( * ? ? ?* table 7.3 ) we assume and .the steady state phase- voltages are kv , , kv , and kv .the steady state phase- currents satisfying the circuit constraints are a , a , , and a. the mechanical torques and field winding currents are selected so as to be consistent with these steady - state values .inequality reduces to if the stator winding resistance is zero . since the -axis steady state current is negative for both generators , ( [ bcond ] ) is satisfied and the equilibrium is unique .we now investigate global asymptotic stability for this example .condition does not hold since the winding resistances for the generators are zero and this leads to for . in order to use the relaxed condition ,we connect a static synchronous series compensator ( sssc ) in series with the generator terminals providing the voltage drops in phases , , and , respectively .condition holds for each generator if : replacing the generator parameters into this inequality , we obtain m and m . we choose the compensation to satisfy these inequalities and provide enough damping .we numerically simulated the dynamics of the circuit in figure [ mmdiagram ] to obtain the transient behavior following the occurrence of a fault .
without conjecturing anything about the nature of the fault or the pre - fault circuit, we simply assumed that the initial condition for the frequency of the generators lies in the set $ ] .for a generator current with steady state value , we assumed that the initial condition for the current lies in the set . with this assumption about the initial states , we performed numerical simulations for 25 randomly chosen initial state vectors .these simulations indicate that the generator states converge to the steady - state values as expected .we present in figure [ fig_sim ] a typical trajectory corresponding to initial conditions a , a , a , , rad , and rad .the reader can appreciate how the states and the value of the total shifted hamiltonian converge to the desired values .-axis currents on upper left , -axis currents on upper right , frequencies on lower left ) and the value of the total shifted hamiltonian ( on lower right ) . ]this paper shows that transient stability analysis can be performed without using the hard - to - justify assumptions described in section [ intro ] and found in the classical literature on power systems .instead , we employed first - principles models and obtained sufficient conditions for global transient stability that are applicable to networks with lossy transmission lines .moreover , the proposed sufficient conditions for transient stability are compositional , i.e. , we only need to check that each generator satisfies a simple inequality relating the steady state currents to the generators mechanical and electrical dissipation . such test is far less expensive than contingency analysis based on numerical simulations . although transient stability is critical , equally important is a careful analysis of transients to ensure that operational limits are never violated .such study is a natural next step in our investigations .another direction for further research is how a careful modeling of transmission networks and loads can contribute to a refined transient stability analysis . in what regardsthe design of controllers , much is to be done on combining the use of facts devices with carefully designed excitation controllers to improve transient performance .s. y. caliskan , p. tabuada , _ kron reduction of power networks with lossy and dynamic transmission lines _ , proceedings of the 51st ieee conference of decision and control , pp .55545559 , december 1013 2012 , maui , hawaii , usa .s. y. caliskan , p. tabuada , _ towards a compositional analysis of multi - machine power systems transient stability _ , proceedings of the 52nd ieee conference of decision and control , pp .39693974 , december 1013 2013 , florence , italy .d. casagrande , a. astolfi , and r. ortega , _ global stabilization of non - globally linearizable triangular systems : application to transient stability of power systems _ , proceeding of the 50th ieee conference on decision and control and european control conference , pp . 331336 , december 12 - 15 2011 , orlando , florida , usa .d. casagrande , a. astolfi , r. ortega , and d. langarica , _ a solution to the problem of transient stability of multimachine power systems _ , proceedings of the 51st ieee conference and decision and control , pp .17031708 , december 10 - 13 2012 , maui , hawaii , usa . w. dib , r. ortega , a. barabanov , and f. 
lamnabhi - lagarrigue , _ a `` globally '' convergent controller for multi - machine power systems using structure - preserving models _ , ieee transactions on automatic control , vol .21792185 , september 2009 .f. drfler , m. chertkov , and f. bullo , _ synchronization in complex oscillator networks and smart grids _ , proceedings of the national academy of sciences , volume 110 , no20052010 , february 2013 . ieee task force on load representation for dynamic performance , _ standard load models for power flow and dynamic performance simulation _, ieee transactions on power systems .3 , pp . 13021313 , august 1995 .j. v. milanovi , k. yamashita , s. m. villanueva , s. .djokic , and l. m. korunovi , _ international industry practice on power system load modeling _ , ieee transactions on power systems , vol .3 , pp . 30383046 , august 2013 .r. ortega , m. galaz , a. astolfi , y. sun and t. shen , _ transient stabilization of multimachine power systems with nontrivial transfer conductances _ , ieee transactions in automatic control , vol .1 , pp . 6075 , january 2005 .m. pavella , d. ernst , d. ruiz - vega , _ transient stability of power systems : a unified approach to assesment and control _ , kluwer s power electronics and power systems series ( editor : m. a. pai ) , kluwer academic publishers , dordrecht , 2000 .l. a. b. torres , j. hespanha , and j. moehlis , _ synchronization of oscillators coupled through a network with dynamics : a constructive approach with applications to the parallel operation of voltage power supplies _ ,september 2013 , submitted to journal publication , available at http://www.ece.ucsb.edu/ hespanha .s. fiaz , d. zonetti , r. ortega , j. m. a. scherpen , and a. j. van der schaft , _ a port - hamiltonian approach to power network modeling and analysis _ , european journal of control , volume 19 , issue 6 , pp . 477485 , december 2013 .m. j. h. raja , d. w. p. thomas , and m. sumner , _ harmonics attenuation of nonlinear loads due to linear loads _ , 2012 asia - pacific symposium on electromagnetic compatibility ( apemc ) , pp .829832 , 21 - 24 may 2012 , singapore .
during the normal operation of a power system all the voltages and currents are sinusoids with a frequency of 60 hz in america and parts of asia , or 50 hz in the rest of the world . forcing all the currents and voltages to be sinusoids with the right frequency is one of the most important problems in power systems . this problem is known as the transient stability problem in the power systems literature . the classical models used to study transient stability are based on several implicit assumptions that are violated when transients occur . one such assumption is the use of phasors to study transients : while phasors require sinusoidal waveforms to be well defined , there is no guarantee that waveforms remain sinusoidal during transients . in this paper , we use energy - based models derived from first principles that are not subject to hard - to - justify classical assumptions . in addition to eliminating assumptions that are known not to hold during transient stages , we derive intuitive conditions ensuring the transient stability of power systems with lossy transmission lines . furthermore , the conditions for transient stability are compositional in the sense that transient stability of a large power system is inferred by checking simple conditions for the individual generators .
bayesian inferenceis a diverse and robust analysis methodology based on bayes theorem , the prior , encodes all knowledge about the parameters before the data have been collected , while the likelihood , defines the probabilistic model of how the data are generated .the evidence , ensures proper normalization while allowing for auxiliary applications such as model comparison .lastly , the posterior , is the refinement of the prior given the information inferred from .all model assumptions are captured by the conditioning hypothesis .while bayes theorem is simple enough to formulate , in practice the individual components are often sufficiently complex that analytic manipulation is not feasible and one must resort to approximation .one of the more successful approximation techniques , markov chain monte carlo ( mcmc ) produces samples directly from the posterior distribution that are often sufficient to characterize even high dimensional distributions .the one manifest limitation of mcmc , however , is the inability to directly calculate the evidence , which , as mackay notes , `` is often the single most important number in the problem '' .nested sampling is an alternative to sampling from the posterior that instead emphasizes the calculation of the evidence . and in two dimensions . parameterizes the contour of constant likelihood while parameterizes translations orthogonal to the contour .[ fig : alpharepar],width=240 ]consider the support of the likelihood above a given bound ( fig [ fig : nestedsampling]a , [ fig : nestedsampling]b ) , and the associated prior mass across that support ( fig [ fig : nestedsampling]c ) , the differential gives the prior mass associated with the likelihood ( fig [ fig : nestedsampling]d ) , where is the dimensional boundary of constant likelihood , introducing the coordinate perpendicular to the likelihood constraint boundary and the coordinates parallel to the constraint ( fig [ fig : alpharepar ] ) , the integral over simply marginalizes and the differential becomes returning to the evidence , by construction the likelihood is invariant to changes in , , and the integral simplifies to where is the likelihood bound resulting in the prior mass .this clever change of variables has reduced the dimensional integration over the parameters to a one dimensional integral over the bounded support of .although this simplified integral is easier to calculate in theory , it is fundamentally limited by the need to compute . numerical integration, however , needs only a set of points and not explicitly . 
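A toy one-dimensional check makes the change of variables concrete. The model below (a uniform prior on a symmetric interval and a Gaussian likelihood) is an assumption for illustration, not taken from the paper: the evidence computed directly over the parameter agrees with the one-dimensional integral over the enclosed prior mass.

```python
import numpy as np

# toy 1D model (assumed): uniform prior on [-a, a], Gaussian likelihood
a = 5.0
theta = np.linspace(-a, a, 20001)
dtheta = theta[1] - theta[0]
prior = 1.0 / (2.0 * a)
Lvals = np.exp(-0.5 * theta**2) / np.sqrt(2.0 * np.pi)

# direct evidence  Z = int L(theta) pi(theta) dtheta
Z_direct = np.sum(Lvals * prior) * dtheta

# same evidence through the prior-mass coordinate:
# X(lambda) = prior mass enclosed by the contour L > lambda, and Z = int_0^1 L dX
thresholds = np.sort(Lvals)[::-1][::50]                  # decreasing likelihood bounds
X = np.array([np.sum(Lvals > lam) * dtheta * prior for lam in thresholds])
Z_prior_mass = np.sum(0.5 * (thresholds[1:] + thresholds[:-1]) * np.diff(X))

print(Z_direct, Z_prior_mass)   # both close to 0.1 for this toy model
```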
sidestepping ,consider instead the problem of generating the set directly .in particular , consider a stochastic approach beginning with samples drawn from .the sample with the smallest likelihood , , bounds the largest but otherwise nothing can be said of the exact value , , without an explicit , and painful , calculation from the original definition .the cumulative probability of , however , is simply the probability of exceeding the of each sample , where is uniformly distributed , simplifying , the cumulative probability of the largest sample reduces to with the corresponding probability distribution estimating from the probability distribution immediately yields a pair a second pair follows by drawing from the constrained prior or in terms of , samples from this constrained prior yield a new minimum with distributed as making another point estimate gives .generalizing , the samples at each iteration are drawn from a uniform prior restricted by the previous iteration , the distribution of the largest sample , , follows as before , note that this implies that the shrinkage at each iteration , , is identically and independently distributed as moreover , a point estimate for can be written entirely in terms of point estimates for the , more appropriate to the large dynamic ranges encountered in many applications , becomes performing a quick change of variables , the logarithmic shrinkage will be distributed as with the mean and standard deviation taking the mean as the point estimate for each finally gives with the resulting error parameterizing in terms of the shrinkage proves immediately advantageous because the are independent , the errors in the point estimates tend to cancel and the estimate for the grow increasingly more accurate with . at each iteration , then , a pair is given by the point estimate for and the smallest likelihood of the drawn samples .a proper implementation of nested sampling begins with the initial point . at each iteration , samples are drawn from the constrained prior and the sample with the smallest likelihood provides a `` nested '' sample with and ( figure [ fig : nestedsamples ] ) . defines a new constrained prior for the following iteration .note that the remaining samples from the given iteration will already satisfy this new likelihood constraint and qualify as of the samples necessary for the next iteration only one new sample will actually need to be generated .as the algorithm iterates , regions of higher likelihood are reached until the nested samples begin to converge to the maximum likelihood . determining this convergence is tricky , but heuristics have been developed that are quite successful for well behaved likelihoods . once the iterations have terminated , the evidence is numerically integrated using the nested samples .the simplest approach is a first order numerical quadrature : errors from the numerical integration are dominated by the errors from the use of point estimates and , consequently , higher order quadrature offers little improvement beyond the first order approximation .the errors inherent in the point estimates can be reduced by instead marginalizing over the shrinkage distributions .note , however , that in many applications the likelihood will be relatively peaked and most of the prior mass will lie within its tails . 
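A minimal sketch of the resulting algorithm, on an assumed toy problem with a uniform prior over a square and a Gaussian likelihood, is given below; the number of live points, the number of iterations, and the random seed are arbitrary choices. The constrained-prior draws are done here by naive rejection sampling, which is precisely the inefficiency that the constrained Hamiltonian Monte Carlo described later removes.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy problem (assumed, not from the paper): uniform prior on [-5, 5]^2 and a
# Gaussian likelihood, so the analytic evidence is 2*pi*sigma^2 / 100
sigma = 0.5
log_L = lambda theta: -0.5 * np.sum(theta**2, axis=-1) / sigma**2
sample_prior = lambda n: rng.uniform(-5.0, 5.0, size=(n, 2))

N, n_iter = 100, 600
live = sample_prior(N)
live_logL = log_L(live)

log_X_prev, log_Z = 0.0, -np.inf
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    logL_min = live_logL[worst]

    log_X = -i / N                               # point estimate of the prior mass
    log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))
    log_Z = np.logaddexp(log_Z, logL_min + log_w)
    log_X_prev = log_X

    # draw a replacement from the prior subject to L > L_min (naive rejection;
    # this is the step that constrained HMC makes efficient)
    while True:
        cand = sample_prior(1)[0]
        if log_L(cand) > logL_min:
            break
    live[worst], live_logL[worst] = cand, log_L(cand)

# contribution of the remaining live points after termination
log_Z = np.logaddexp(log_Z, np.log(np.mean(np.exp(live_logL))) + log_X_prev)
print(np.exp(log_Z), 2 * np.pi * sigma**2 / 100.0)
```

With 100 live points the estimate should agree with the analytic evidence to within the expected statistical scatter of a few tens of percent, consistent with the error analysis above.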
will then be heavily weighted towards exponentially small values of where the likelihood constraint falls below the tails and the prior mass rapidly accumulates .likewise , the integrand will be heavily weighted towards exponentially small values of and the dominant contributions from the quadrature will come from later iterations , exactly where the point estimates become more precise .the resulting error in the integration tends to be reasonable , and the added complexity of marginalization offers little improvement. the choice of can also be helpful in improving the accuracy of the integration . for larger the shrinkage distribution narrows and the estimates for the increasingly better .multiple samples at each iteration also prove valuable when the likelihood is multimodal , as the individual samples allow the modes to be sampled simultaneously .lastly , if the yielding the smallest likelihood are stored with each nested sample then posterior expectations can be estimated with the quadrature weights , the remaining obstacle to a fully realized algorithm is the matter of sampling from the prior given the likelihood constraint .sampling from constrained distributions is a notoriously difficult problem , and recent applications of nested sampling have focused on modifying the algorithm in order to make the constrained sampling feasible .hamiltonian monte carlo , however , offers samples directly from the constrained prior and provides an immediate implementation of nested sampling .hamiltonian monte carlo is an efficient method for generating samples from the dimensional probability distribution to a given sample produces a new sample .note that the properties of hamiltonian dynamics , in particular liouville s theorem and conservation of , guarantee that differential probability masses from are conserved by the mapping . as a result, this dynamic evolution serves as a transition matrix with the invariant distribution .moreover , the time reversal symmetry of the equations ensures that the evolution satisfies detailed balance : because is conserved , however , the transitions are not ergodic and the samples do not span the full support of .ergodicity is introduced by adding a gibbs sampling step for the . because the and are independent , sampling from the conditional distribution for is particularly easy in practice the necessary integration of hamilton s equationscan not be performed analytically and one must resort to numerical approximations .unfortunately , any discrete approximation will lack the symmetry necessary for both liouville s theorem and energy conservation to hold , and the exact invariant distribution will no longer be .this can be overcome by treating the evolved sample as a metropolis proposal , accepting proposed samples with probability sampling from is challenging .the simplest approach is to sample from and discard those not satisfying the constraint . 
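A minimal unconstrained Hamiltonian Monte Carlo sketch for a standard Gaussian target is shown below; the step size, trajectory length, and target are arbitrary illustrative choices. It contains the three ingredients just described: a Gibbs draw of the momenta, leapfrog integration of Hamilton's equations, and a Metropolis correction for the discretization error.

```python
import numpy as np

rng = np.random.default_rng(0)

# target: standard 2D Gaussian, so U(q) = 0.5 |q|^2 and grad U = q
U = lambda q: 0.5 * np.dot(q, q)
grad_U = lambda q: q

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.standard_normal(q.shape)            # Gibbs draw of the momenta
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of Hamilton's equations
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leap):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    p_new += 0.5 * eps * grad_U(q_new)          # undo half of the last kick
    # Metropolis correction for the discretization error
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))   # ~[0, 0] and ~[1, 1]
```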
for most nontrivial constraints , however ,this approach is extremely inefficient as the majority of the computational effort is spent generating samples that will be immediately discarded .incorporating infinite barriers directly into hamilton s equations is problematic , but physical intuition provides an alternative approach .particles incident on an infinite barrier bounce , the momenta perpendicular to the barrier perfectly reflecting : discrete updates proceed as follows .after each spatial update the constraint is checked and if violated then the normal is computed at the new point and the ensuing momentum update is replaced by reflection ( algo [ algo : bounce ] , fig [ fig : bounce ] ) .note that the spatial update can not be reversed , nor can an interpolation to the constraint boundary be made , without spoiling the time - reversal symmetry of the evolution .the normal for many discontinuous constraints , which are particularly useful for sampling distributions with limited support without resorting to computationally expensive exponential reparameterizations , can be determined by the geometry of the problem . given a seed satisfying the constraint , the resultant markov chain bounces around and avoids the inadmissible regions almost entirely .computational resources are spent on the generation of relevant samples and the sampling proceeds efficiently no matter the scale of the constraint .the chmc samples are then exactly the samples from the constrained prior necessary for the generation of the nested samples .a careful extension of the constraint also allows for the addition of a limited support constraint , making efficient nested sampling with , for example , gamma and beta priors immediately realizable .initially , the independent samples are generated from markov chains seeded at random across the full support of .after each iteration of the algorithm , the markov chain generating the nested sample is discarded and a new chain is seeded with one of the remaining chains . note that this new seed is guaranteed to satisfy the likelihood constraint and the resultant chmc will have no problems bouncing around the constrained distribution to produce the new sample needed for the following iteration .
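The bounce idea can be grafted onto the previous sketch with a few lines: after each position update the constraint is checked, and when it is violated the momentum update is replaced by a reflection about the constraint normal. The example below samples a standard Gaussian restricted to a half-plane, so the constraint normal is known analytically; it is a rough illustration only and glosses over details (such as the final half-step after a reflection) that a careful implementation satisfying detailed balance would handle.

```python
import numpy as np

rng = np.random.default_rng(3)

# the Gaussian plays the role of the prior; the hard constraint q[0] >= 1
# stands in for a likelihood bound (all names here are illustrative)
U = lambda q: 0.5 * np.dot(q, q)
grad_U = lambda q: q
inside = lambda q: q[0] >= 1.0
normal = np.array([1.0, 0.0])        # unit normal of the constraint boundary

def chmc_step(q, eps=0.05, n_leap=40):
    p = rng.standard_normal(2)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)
    for _ in range(n_leap):
        q_new += eps * p_new
        if inside(q_new):
            p_new -= eps * grad_U(q_new)
        else:
            # momentum update replaced by reflection off the constraint
            p_new -= 2.0 * (p_new @ normal) * normal
    p_new += 0.5 * eps * grad_U(q_new)
    if not inside(q_new):
        return q                      # proposal ends outside: reject
    dH = U(q_new) + 0.5 * p_new @ p_new - U(q) - 0.5 * p @ p
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.array([1.5, 0.0])
samples = []
for _ in range(10000):
    q = chmc_step(q)
    samples.append(q)
samples = np.array(samples)
print(samples[:, 0].min())    # never below the bound
print(samples[:, 0].mean())   # ~1.52 for a unit Gaussian truncated at 1
```

In the nested-sampling context the constraint is the likelihood bound from the previous iteration, and the normal would be obtained from the gradient of the likelihood, or from the geometry of the constraint, at the violating point.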
nested sampling is a powerful approach to bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution . an effective algorithm in its own right , hamiltonian monte carlo is readily adapted to efficiently sample from any smooth , constrained distribution . utilizing this constrained hamiltonian monte carlo , i introduce a general implementation of the nested sampling algorithm .

nested sampling with constrained hamiltonian monte carlo

michael betancourt , massachusetts institute of technology , cambridge , ma 02139
the kinematics of collisionless plasmas has been studied in a wide variety of fields , such as in laboratory plasma physics , space physics , and astrophysics .evolution of collisionless plasmas and self - consistent electromagnetic fields is fully described by the vlasov - maxwell ( or vlasov - poisson ) equations .thanks to recent development in computational technology , self - consistent numerical simulations of collisionless plasmas have been successfully performed from the first - principle vlasov - maxwell system of equations .there are two numerical methods to solve the vlasov equation .the most popular one is the particle - in - cell ( pic ) method , which approximates the plasma by a finite number of macro - particles .their trajectories calculated from the equation of motion are continuous in space , whereas electromagnetic fields are calculated on grid points in space .the pic method has been used for a wide variety of plasma phenomena , because it gives satisfying results even with a relatively small number of particles . however , the pic method inherently has the large statistical noise due to an approximation of the distribution function by a finite number of particles .this noise only decreases in when the number of particles is increased , making it difficult to study such as particle acceleration and thermal transport processes , in which a small number of high energy particles play an important role . to overcome this problem ,an alternative method free from the statistical noise has been used , in which the vlasov equation is directly discretized on grid points in phase space .the so - called vlasov simulation involves solving the advection equation in multidimensions ( up to six ) .however , it has been widely known that a numerical solution of the advection equation suffers from spurious oscillations and numerical diffusion .a highly accurate scheme is required to preserve characteristics of the vlasov equation ( i.e. , the liouville theorem ) as much as possible .in contrast to the pic simulation , no standard scheme for the vlasov simulation has been established thus far . proposed a splitting scheme , in which the electrostatic vlasov equation is split into two advection equations in one - dimensional configuration and velocity spaces , and then are alternately advanced .both equations reduce to a simple form of the linear advection equation .moreover , their splitting method is equivalent to the second - order symplectic integration so that the conservation of energy is very well . following them , many authors have proposed high order advection schemes and applied them to the vlasov simulation .one of standard time integration methods for the advection equation is the semi - lagrangian method , which advances a physical variable by interpolating its profile between grid points and then following the characteristics backward in time .for example , and employed this method with the cubic b - spline interpolation . developed a semi - lagrangian scheme in a conservative form with an upwind - biased lagrange polynomial interpolation , called the positive and flux conservative ( pfc ) scheme .the pfc preserves mass and positivity .proposed a non - oscillatory type of the pfc scheme .the above and many popular schemes consider the time integration of a single physical variable . on the other hand ,the concept of `` multi - moment '' that treats multiple dependent variables has been proposed . 
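The semi-Lagrangian idea discussed above is easy to state in code: advance the profile by evaluating it at the foot of the characteristic traced backward over one time step. The sketch below uses plain linear interpolation on a periodic grid, which is deliberately crude; the higher-order interpolants cited above (cubic spline, PFC, CIP) exist precisely because low-order interpolation produces the numerical diffusion visible in the final error. The multi-moment schemes discussed next build on this same update.

```python
import numpy as np

def advect_periodic(f, x, u, dt, L):
    """One semi-Lagrangian step for df/dt + u df/dx = 0 on a periodic grid:
    follow the characteristic backward and interpolate the old profile there.
    Linear interpolation is used for brevity; Vlasov codes use higher-order
    interpolants to limit numerical diffusion."""
    return np.interp(x - u * dt, x, f, period=L)

# advect a Gaussian bump once around a periodic box
nx = 256
x = np.linspace(0.0, 1.0, nx, endpoint=False)
f0 = np.exp(-200.0 * (x - 0.5) ** 2)
f, u, dt = f0.copy(), 1.0, 0.002
for _ in range(500):                        # 500 * u * dt = one full period
    f = advect_periodic(f, x, u, dt, 1.0)
print(np.max(np.abs(f - f0)))               # appreciable diffusion with linear interpolation
```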
and developed a multi - moment semi - lagrangian scheme , called the constrained interpolation profile ( cip ) scheme .the cip scheme employs the cubic hermite interpolation .difference of the cip scheme from the others is that it treats not only a profile but also its first derivative in space as dependent variables , and their governing equations are solved as coupled equations .the cip scheme provides comparable or better solutions even with a relatively smaller number of grid points compared to the others , although it requires a higher memory cost to store multiple dependent variables . due to its high capability, the cip scheme has been applied not only to the vlasov simulation but also to magnetohydrodynamic simulations .conservative schemes are preferable for the vlasov simulation , because the integration of the distribution function in phase space is equal to the number of particles . and also proposed a conservative form of the cip , called the cip - csl2 ( cip - conservative semi - lagrangian scheme with a second - order polynomial ) .the cip - csl2 scheme treats point values of a profile and its cell - integrated values as dependent variables .the cell - integrated value is advanced in a conservative form to guarantee the conservation of mass .therefore , the cip - csl2 scheme will be more suitable than the cip scheme for the vlasov simulation .various cip - csl type schemes have been proposed thus far ( e.g. , * ? ? ?a non - oscillatory type of the cip - csl scheme was proposed by , and is successfully extended to fluid dynamic equations . developed a conservative eulerian scheme , which is applied to the electrostatic vlasov - poisson simulation .although there has been a number of numerical schemes , they have been applied mainly to the electrostatic vlasov - poisson simulation .there is an increasing interest for the application of electromagnetic vlasov simulations to magnetized plasmas .however , the vlasov simulation of magnetized plasmas is more difficult than the electrostatic one , because no suitable scheme for solving the gyro motion ( solid body rotation in velocity space ) has been established thus far . proposed a rigorous time - splitting method to solve the solid body rotation problem using the one - dimensional pfc scheme , called the backsubstitution .even with this method , however , we find considerable numerical diffusion in a long time calculation that causes non - physical plasma heating during the gyration .for this reason , it is still quite limited to deal with plasma phenomena such as heating and acceleration on the basis of the electromagnetic vlasov simulation . 
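For reference, the splitting strategy discussed above reduces the one-dimensional electrostatic Vlasov-Poisson system to alternating advections in x and v with a field solve in between. The sketch below is a standard weak Landau damping setup in normalized units, not the configuration used in the paper, and it again relies on simple linear interpolation.

```python
import numpy as np

# 1D-1V electrostatic Vlasov-Poisson in normalized units: a standard weak
# Landau damping setup (illustrative, not the configuration of the paper)
nx, nv = 64, 256
k, alpha, dt = 0.5, 0.01, 0.05
Lx = 2 * np.pi / k
x = np.linspace(0.0, Lx, nx, endpoint=False)
v = np.linspace(-6.0, 6.0, nv)
dx, dv = x[1] - x[0], v[1] - v[0]
X, V = np.meshgrid(x, v, indexing='ij')
f = (1.0 + alpha * np.cos(k * X)) * np.exp(-0.5 * V**2) / np.sqrt(2 * np.pi)
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

def efield(f):
    rho = 1.0 - np.sum(f, axis=1) * dv              # fixed neutralizing ions
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * kx[1:])         # dE/dx = rho, zero-mean E
    return np.real(np.fft.ifft(E_hat))

def advect_x(f, dt):                                 # shift each v-row by v*dt
    for j in range(nv):
        f[:, j] = np.interp(x - v[j] * dt, x, f[:, j], period=Lx)
    return f

def advect_v(f, E, dt):                              # electrons: dv/dt = -E
    for i in range(nx):
        f[i, :] = np.interp(v + E[i] * dt, v, f[i, :], left=0.0, right=0.0)
    return f

energy = []
for _ in range(400):                                 # Strang splitting, t = 20
    f = advect_x(f, 0.5 * dt)
    E = efield(f)
    f = advect_v(f, E, dt)
    f = advect_x(f, 0.5 * dt)
    energy.append(0.5 * np.sum(E**2) * dx)
print(energy[0], energy[-1])
```

The electric-field energy printed at the end should have decayed by a few orders of magnitude over the run, in rough agreement with the linear Landau damping rate for k = 0.5.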
in this paper, we propose a new numerical scheme for the advection equation , specifically designed to solve the vlasov equation in magnetized plasmas .we first argue in section [ sec : concept - mma - scheme ] that it is important to preserve high order moments of the distribution function to reduce numerical diffusion , on the basis of the concept of the conservation of the information entropy .we develop an advection scheme , in which not only point values of a profile but also its zeroth to second order piecewise moments are treated as dependent variables .details of the scheme are described in sections [ sec : mma - one - dimension ] and [ sec : mma - two - dimension ] for one and two dimensions , respectively .benchmark tests of the scheme and its application to electrostatic and electromagnetic vlasov simulations are presented in sections [ sec : numer - tests - mma1d ] and [ sec : numer - tests - mma2d ] .finally , we summarize the paper in section [ sec : conclusion ] .we consider that the conservation of the information entropy is essential to develop a dissipationless scheme for the advection equation .when a profile follows the advection equation , , its entropy function , , also follows the equation , where is the self - information function of , and is regarded as the probability function that rapidly decreases toward .the conventional information entropy function is . defined more general function of the form , .the shape of a profile will be preserved during the advection with constant velocity when its information entropy is conserved in a numerical simulation , where is the position in physical space . however , it is difficult to develop an advection scheme that exactly guarantees the entropy conservation , because is nonlinear in general .then , we approximate it by expanding in the taylor series with respect to , .\nonumber\end{aligned}\ ] ] this equation tells that the entropy can be described as a linear combination of the zeroth to -th order moments , .we then consider that better conservation of the entropy may be achieved by preserving information of high order moments .for example , the entropy of a gaussian profile , /\sqrt{2 \pi}\sigma_x m x x x x x m g g g ] in the velocity space with 32 grid points , and ] in the velocity space with 32 , 64 , or 128 grid points , and ] with 50 grid points in both the and directions .the position is normalized by .the grid size in the configuration space is equal to , and the spatial length is .the time step is .figure [ fig:10](a ) shows the fourier spectrum of obtained from the electromagnetic vlasov simulation integrated until . for comparison, we also perform the electromagnetic pic simulation with the same parameters , which is shown in figure [ fig:10](b ) .the number of particles is 5,000 in each cell so that the total memory usage is comparable between the two simulations .we can clearly identify the electron cyclotron ( bernstein ) modes .we also identify the high - frequency x - mode and z - mode at very low wavenumbers and frequencies close to the r- and l - mode cutoff ( and ) , respectively .as expected , the vlasov simulation provides a result with less noise , because it is free from the statistical noise . at very low wavenumber and frequency in figure [ fig:10](a ) ,the contribution of protons is slightly seen .then , we continue the same simulation for a long time until .the ion bernstein modes and the lower - hybrid waves can be seen in figure [ fig:11 ] . 
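Stepping back to the moment variables carried by the scheme, their role can be illustrated independently of the interpolation details (which are not reproduced here): the zeroth, first, and second moments over velocity cells encode the density, bulk velocity, and pressure, i.e. exactly the quantities whose conservation is emphasized above. The sketch below evaluates these cell moments for a drifting Maxwellian with assumed parameters; how the moments enter the interpolant in the actual scheme is not shown.

```python
import numpy as np

# drifting Maxwellian with assumed parameters; 32 velocity cells, each integrated
# with a midpoint rule over 64 sub-points
n_true, u_true, vth = 2.0, 0.7, 1.3
edges = np.linspace(-8.0, 8.0, 33)
M0, M1, M2 = np.zeros(32), np.zeros(32), np.zeros(32)

for c in range(32):
    dvc = (edges[c + 1] - edges[c]) / 64
    vc = edges[c] + (np.arange(64) + 0.5) * dvc
    fc = n_true / (np.sqrt(2 * np.pi) * vth) * np.exp(-0.5 * ((vc - u_true) / vth) ** 2)
    M0[c] = np.sum(fc) * dvc            # piecewise mass
    M1[c] = np.sum(vc * fc) * dvc       # piecewise momentum
    M2[c] = np.sum(vc**2 * fc) * dvc    # piecewise second moment

n = M0.sum()
u = M1.sum() / n
p = M2.sum() - n * u**2                 # scalar pressure from the second moment
print(n, u, np.sqrt(p / n))             # recovers ~2.0, ~0.7, ~1.3
```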
even at this moment ( electrons gyrate more than a hundred times ) , the total energy is conserved very well within an error of , due to the fact that the mma scheme can solve the solid body rotation problem with little numerical diffusion .we next test the one - dimensional harris current sheet configuration , in which a dense plasma is confined at the center of the anti - parallel magnetic field configuration .the initial conditions of the particle distribution function and the magnetic field are given as a double harris current sheet , for utilizing the periodic condition in the configuration space , \nonumber \\& + & \frac{n_s^{+}(x)}{\pi v_{s;th}^2 } \exp\left[-\frac{v_x^2+\left(v_y - u_{s}^{+}\right)^2}{v_{s;th}^2}\right],\;\;\;\left(s = p , e\right),\label{eq:98}\\ n_{s}^{\pm}(x ) & = & n_{0 } { \rm sech}^2 \left(\frac{x \mp l/4}{\lambda}\right),\label{eq:99}\\ u_{s}^{\pm } & = & \pm { { \rm sgn}}\left(q_s\right ) \frac{v_{s;th}^2}{\omega_{gs } \lambda},\label{eq:100}\\ b_{z}(x ) & = & b_0 \left[\tanh \left(\frac{x+l/4}{\lambda}\right)-\tanh \left(\frac{x - l/4}{\lambda}\right)-1\right],\label{eq:101}\end{aligned}\ ] ] where is the diamagnetic drift velocity , is the gyro frequency outside the sheet , is the half thickness of the current sheet , and is the spatial length . the one - dimensional harris current sheet is a steady - state solution of the vlasov - maxwell system , in which the plasma and magnetic pressures balance with each other .therefore , it is a suitable benchmark test for electromagnetic vlasov simulations : an accurate scheme is required for solving the advection and rotation in the presence of spatially - inhomogeneous plasma and magnetic field distributions and thereby keeping the equilibrium .simulation parameters are , , , ( corresponding to a temperature ratio of unity ) , ( proton inertia length ) , and . the thermal velocity is determined so as to satisfy the pressure balance .the position is normalized by .the simulation domain is ] in the configuration space with 1024 or 256 grid points .the time step is .figure [ fig:13](a ) shows the magnetic field and density distributions at ( corresponding to 100 gyrations of electrons outside the sheet ) , obtained from the simulation with . the simulation keeps the equilibrium of the current sheet with small numerical dissipation .the amplitude of numerically - produced electric fields is smaller than .we check the pressure balance along the direction between the magnetic field and plasma , where and are the -diagonal component of the pressure and temperature tensors . in the simulation with the mma scheme, diagonal components of the pressure tensor can be easily estimated from dependent variables , ,\nonumber\end{aligned}\ ] ] where is the bulk velocity in the direction , also estimated from them , figure [ fig:13](b ) shows the pressure distribution .the pressure balance is preserved very well within an error of 0.1 % .the plasma pressure is kept correct within an error of 2 % . 
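The pressure balance used as a diagnostic here can be checked directly from the initial profiles. The sketch below builds a double Harris sheet with illustrative normalized parameters (unit permeability and arbitrary field strength, density, sheet thickness, and box length, not the values used in the simulations) and verifies that the sum of magnetic and thermal pressure is uniform across the sheets once the total temperature is chosen to balance the asymptotic field pressure.

```python
import numpy as np

# normalized double Harris sheet (illustrative normalization: mu_0 = 1)
B0, n0, lam, L = 1.0, 1.0, 0.5, 16.0
T_total = B0**2 / (2.0 * n0)            # T_p + T_e chosen to balance the field pressure

x = np.linspace(-L / 2, L / 2, 1001)
n = n0 * (np.cosh((x - L / 4) / lam) ** -2 + np.cosh((x + L / 4) / lam) ** -2)
Bz = B0 * (np.tanh((x + L / 4) / lam) - np.tanh((x - L / 4) / lam) - 1.0)

total_pressure = 0.5 * Bz**2 + n * T_total
print(total_pressure.min(), total_pressure.max())   # nearly constant across the sheets
```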
for comparison, we also perform the simulation in which the cip - csl2 scheme is applied to the advections in the velocity space as well as the configuration space .the number of grid points in the velocity space is for this simulation so that the total memory usage is comparable to the simulation with the mma scheme .the result is shown in figure [ fig : harris_csl ] .compared to the mma scheme , the cip - csl2 scheme causes the dissipation of the current sheet ( increase of the sheet thickness ) , through the numerical heating of the plasma .the proton and electron pressures increase by 10 % and 40 % around the edge of the sheet , respectively .the total pressure increases by 2 % .figure [ fig:14 ] shows the result from the same simulation with the mma scheme , but with a larger grid size of .the simulation again keeps the equilibrium of the current sheet .the order of a numerical error is comparable to the result with .the result confirms that our electromagnetic vlasov simulation code is stable even with the grid size larger than the debye length .this is an advantage over an explicit pic simulation , as suggested by , e.g. , .we have presented a new numerical scheme for solving the advection equation and the vlasov equation .the present scheme solves not only point values of a profile but also its zeroth to second order piecewise moments as dependent variables , for better conservation of the information entropy .we have developed one- and two - dimensional schemes , and have shown their high capabilities .the present scheme provides quite accurate solutions even with smaller numbers of grid points , although it requires a higher memory cost than other existing schemes .we have shown that , however , the total memory usage of the present scheme is smaller than the others for the same accuracy of solutions applications of the one - dimensional scheme to the electrostatic vlasov simulations ( linear landau damping and two stream instability ) , and the two - dimensional scheme to the electromagnetic vlasov simulations ( perpendicular wave propagation and harris current sheet equilibrium ) have been presented .the two - dimensional scheme allows us to solve the gyro motion and the drift motion for a long time with little numerical heating .since the present scheme treats the zeroth to second order moments and advances them on the basis of their governing equations , the particle momentum and energy as well as mass are conserved very well .this is important for studying plasma phenomena such as convection , heating , and acceleration .although the present scheme correctly solves the moments up to the second order , the entropy is numerically increased in the two stream instability ( section [ sec : two - stre - inst ] ) , by the dissipation of fine structures in velocity space .this is understood that the perturbation of lower order moments decreases and higher order moments contribute to the entropy when the so - called filamentation phenomena proceed ( e.g. , * ? ? ?since the filamentation is inevitable as long as discretizing velocity space , it is essentially impossible to exactly conserve the entropy with finite information .the present scheme is designed specifically to solve the advection and rotation in velocity space in the vlasov equation . 
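A simple way to reproduce the kind of comparison made here is the bare solid-body rotation test: rotate a drifting Maxwellian in the (v_x, v_y) plane for many gyroperiods with a low-order semi-Lagrangian step and monitor the growth of the temperature. The sketch below uses plain bilinear interpolation as an illustrative baseline, not any of the schemes from the paper; the printed relative heating is the figure of merit that a less diffusive scheme is designed to keep small.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# solid-body rotation of a drifting Maxwellian in the (vx, vy) plane, advanced
# with a plain semi-Lagrangian step and linear interpolation (baseline only)
nv, vmax = 64, 6.0
v = np.linspace(-vmax, vmax, nv)
dv = v[1] - v[0]
VX, VY = np.meshgrid(v, v, indexing='ij')
f = np.exp(-0.5 * ((VX - 2.0) ** 2 + VY ** 2))          # Maxwellian drifting at (2, 0)

def rotate(f, dtheta):
    c, s = np.cos(dtheta), np.sin(dtheta)
    # backward characteristics: sample the old profile at R(-dtheta) v
    vx_dep = c * VX + s * VY
    vy_dep = -s * VX + c * VY
    coords = [(vx_dep + vmax) / dv, (vy_dep + vmax) / dv]  # physical -> index space
    return map_coordinates(f, coords, order=1, mode='constant', cval=0.0)

def temperature(f):
    n = f.sum()
    return ((VX**2 + VY**2) * f).sum() / n \
           - ((VX * f).sum() / n) ** 2 - ((VY * f).sum() / n) ** 2

T0 = temperature(f)
steps_per_gyration = 64
for _ in range(100 * steps_per_gyration):                # one hundred gyrations
    f = rotate(f, 2 * np.pi / steps_per_gyration)
print("relative numerical heating:", temperature(f) / T0 - 1.0)
```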
in our vlasov simulation code, the cip-csl2 scheme has been employed to solve the advection in configuration space. we note, however, that another scheme can be applied to the advection in configuration space and combined with the present scheme. several advection schemes proposed for vlasov simulations are designed to preserve positivity and a non-oscillatory profile, in order to suppress a non-physical growth of plasma waves caused by a numerically produced positive gradient in velocity space. it has been argued that positivity preservation and the non-oscillatory property are important for reliable vlasov simulations. on the other hand, they may cause a non-physical global evolution driven by a plasma pressure gradient that is increased by the numerical diffusion in velocity space, even when the plasma should be in an equilibrium state. although the present scheme is neither positivity-preserving nor non-oscillatory, we have shown that the obtained results are better than those of the others. we thus consider that preserving the high order moments is another important property for vlasov simulations. magnetized plasma phenomena that allow one to assume two-dimensionality in velocity space are limited, e.g., strictly perpendicular shocks and the two-dimensional kelvin-helmholtz instability. even with one dimension in configuration space, there are many phenomena that require the full three-dimensional velocity space. we are now developing a three-dimensional scheme and its application to the full electromagnetic vlasov simulation, which will enable us to study a wide variety of collisionless plasma phenomena.

we would like to thank t. umeda, t. miyoshi, t. sugiyama, and k. kusano for insightful comments on our manuscript, and anonymous referees for carefully reviewing the manuscript. t. m. is supported by a grant-in-aid for young scientists (b) #21740135.

the remaining coefficients $c_{13;i,j}$ through $c_{33;i,j}$ of the two-dimensional interpolation function are given by

\begin{aligned}
c_{13;i,j} &=& \frac{1}{\delta x^2}\left[f_{i,j}+f_{iup,j}-\frac{6\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x\,\delta y^3}\,g(y_{j},y_{jup},m^{m}_{y;icell,jcell})\right],\label{eq:47}\\
c_{21;i,j} &=& \frac{-1}{\delta y}\left[2f_{i,j}+f_{i,jup}-\frac{9\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x^3\,\delta y}\,g(x_{i},x_{iup},m^{m}_{x;icell,jcell})\right],\label{eq:48}\\
c_{22;i,j} &=& \frac{1}{\delta x\,\delta y}\left[4f_{i,j}+2\left(f_{iup,j}+f_{i,jup}\right)+f_{iup,jup}-\frac{9\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x\,\delta y}\times\left\{\left(30\left\{\frac{x_{i}x_{iup}}{\delta x^2}+\frac{y_{j}y_{jup}}{\delta y^2}\right\}+2\left\{\frac{x_{i}}{\delta x}+\frac{y_{j}}{\delta y}\right\}+13\right)m^{0}_{icell,jcell}-4\left(\left\{8x_{iup}+7x_{i}\right\}\frac{m^{1}_{x;icell,jcell}}{\delta x^2}+\left\{8y_{jup}+7y_{j}\right\}\frac{m^{1}_{y;icell,jcell}}{\delta y^2}\right)+60\left(\frac{m^{2}_{x;icell,jcell}}{\delta x^2}+\frac{m^{2}_{y;icell,jcell}}{\delta y^2}\right)\right\}\right],\label{eq:49}\\
c_{23;i,j} &=& \frac{-1}{\delta x^2\,\delta y}\left[2\left(f_{i,j}+f_{iup,j}\right)+f_{i,jup}+f_{iup,jup}-\frac{12\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x\,\delta y}\times\left\{\left(\frac{15x_{i}x_{iup}}{\delta x^2}+\frac{6y_{jup}^2+4y_{j}y_{jup}+5y_{j}^2}{\delta y^2}\right)m^{0}_{icell,jcell}-\left(15\left\{x_{iup}+x_{i}\right\}\frac{m^{1}_{x;icell,jcell}}{\delta x^2}+2\left\{8y_{jup}+7y_{j}\right\}\frac{m^{1}_{y;icell,jcell}}{\delta y^2}\right)+30\left(\frac{m^{2}_{x;icell,jcell}}{\delta x^2}+\frac{m^{2}_{y;icell,jcell}}{\delta y^2}\right)\right\}\right],\label{eq:50}\\
c_{31;i,j} &=& \frac{1}{\delta y^2}\left[f_{i,j}+f_{i,jup}-\frac{6\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x^3\,\delta y}\,g(x_{i},x_{iup},m^{m}_{x;icell,jcell})\right],\label{eq:51}\\
c_{32;i,j} &=& \frac{-1}{\delta x\,\delta y^2}\left[2\left(f_{i,j}+f_{i,jup}\right)+f_{iup,j}+f_{iup,jup}-\frac{12\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x\,\delta y}\times\left\{\left(\frac{6x_{iup}^2+4x_{i}x_{iup}+5x_{i}^2}{\delta x^2}+\frac{15y_{j}y_{jup}}{\delta y^2}\right)m^{0}_{icell,jcell}-\left(2\left\{8x_{iup}+7x_{i}\right\}\frac{m^{1}_{x;icell,jcell}}{\delta x^2}+15\left\{y_{jup}+y_{j}\right\}\frac{m^{1}_{y;icell,jcell}}{\delta y^2}\right)+30\left(\frac{m^{2}_{x;icell,jcell}}{\delta x^2}+\frac{m^{2}_{y;icell,jcell}}{\delta y^2}\right)\right\}\right],\label{eq:52}\\
c_{33;i,j} &=& \frac{1}{\delta x^2\,\delta y^2}\left[f_{i,j}+f_{iup,j}+f_{i,jup}+f_{iup,jup}-\frac{4\,\mathrm{sgn}(\zeta_{i,j})\,\mathrm{sgn}(\eta_{i,j})}{\delta x\,\delta y}\times\left\{\left(30\left\{\frac{x_{i}x_{iup}}{\delta x^2}+\frac{y_{j}y_{jup}}{\delta y^2}\right\}+11\right)m^{0}_{icell,jcell}-30\left(\left\{x_{iup}+x_{i}\right\}\frac{m^{1}_{x;icell,jcell}}{\delta x^2}+\left\{y_{jup}+y_{j}\right\}\frac{m^{1}_{y;icell,jcell}}{\delta y^2}\right)+60\left(\frac{m^{2}_{x;icell,jcell}}{\delta x^2}+\frac{m^{2}_{y;icell,jcell}}{\delta y^2}\right)\right\}\right].\label{eq:53}
\end{aligned}

figure captions: phase-space distributions in the two-stream instability simulation with the mma and cip-csl2 schemes (64 and 128 grid points in the and directions; time, position, and velocity normalized by the inverse electron plasma frequency, debye length, and thermal velocity); time histories of and in the two-stream instability for the mma scheme with 64 velocity grid points and the cip-csl2 scheme with 128 and 64 points; error as a function of the number of velocity-space grid points for the mma and cip-csl2 schemes at equal memory usage; dispersion diagram of perpendicularly propagating waves on electron scales, with the r-mode cutoff, upper hybrid, l-mode cutoff, and lower hybrid frequencies and the vacuum light mode indicated; dispersion diagram on proton scales, with the alfvén wave dispersion relation indicated.
we present a new numerical scheme for solving the advection equation and its application to vlasov simulations . the scheme treats not only point values of a profile but also its zeroth to second order piecewise moments as dependent variables , for better conservation of the information entropy . we have developed one- and two - dimensional schemes and show that they provide quite accurate solutions within reasonable usage of computational resources compared to other existing schemes . the two - dimensional scheme can accurately solve the solid body rotation problem of a gaussian profile for more than hundred rotation periods with little numerical diffusion . this is crucially important for vlasov simulations of magnetized plasmas . applications of the one- and two - dimensional schemes to electrostatic and electromagnetic vlasov simulations are presented with some benchmark tests . advection equation , conservative form , multi - moment , information entropy , vlasov simulations , magnetized plasmas
the nodal domains of a ( real ) wavefunction are regions of equal sign , and are bounded by the nodal lines where the wavefunction vanishes .even a superficial look at the nodal domains of a quantum wavefunction reveals the separable or chaotic nature of the quantum system . in separable systems, one observes a grid of intersecting nodal lines , and consequently a checkerboard - like nodal domain pattern . in ( quantum ) chaotic systems on the other hand , the nodal domains form a highly disordered structure , resembling the geometry found in critical percolation .et al _ argued that also the statistics of the number of nodal domains reflects the fundamental difference between separable and chaotic quantum systems .bogomolny and schmit conjectured that the nodal domain statistics of chaotic wavefunctions in two dimensions can be deduced from the theory of critical percolation .they built a percolation model for the nodal domains which allowed them to calculate exactly the distribution of numbers of domains .its predictions have been confirmed numerically as far as nodal counting and the area distribution of nodal domains are concerned . while quantum wavefunctions display long - range correlations , the critical percolation model assumes that such correlations can be neglected on distances of the order of a wave length .one may thus expect that _ some _ nodal properties in real wavefunctions are not well described by critical percolation .the main motivation of the present study was to investigate the limits of applicability of the short - range percolation model .the object we will address is related to the distribution of shapes of nodal lines in the random wave ensemble . to be precise, we will calculate approximately the probability , that a nodal line matches a given reference line up to a given precision .we will examine the statistics of nodal lines within the monochromatic random wave model , which is a good description of the eigenfunctions of a quantum billiard in the semiclassical limit .the monochromatic random wave ensemble consists of solutions of the helmholtz wave equation for a fixed energy furthermore the random function is picked up from a gaussian distribution , i.e. higher order correlations of can be expressed through the two - point correlation function by virtue of wick s theorem . a convenient representation of is given by the superposition of cylindrical waves with gaussian distributed amplitudes where are the bessel functions of the first kind , and is the position in polar coordinates .the gaussian random variables obey to render real , and have correlations .using the addition theorem for the bessel functions , one finds for the two - point correlation function it displays in fact long - range correlations , which decay with a power law . 
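as a concrete illustration of the ensemble just defined, the sketch below draws realizations of an isotropic monochromatic random wave as a finite superposition of plane waves with random directions and phases (an equivalent construction to the cylindrical-wave expansion above, up to truncation), and checks numerically that the two-point correlation function approaches the bessel function j_0(k|r-r'|). the number of waves and the sample sizes are arbitrary choices of this sketch.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)

def random_wave(points, k=1.0, n_waves=256):
    """One realization of an isotropic monochromatic random wave at the given points.

    points: array of shape (m, 2); returns an array of shape (m,).
    Normalized so that <u^2> -> 1 in the limit of many waves.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)      # propagation directions
    phi = rng.uniform(0.0, 2.0 * np.pi, n_waves)        # random phases
    kvec = k * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    phase = points @ kvec.T + phi                        # shape (m, n_waves)
    return np.sqrt(2.0 / n_waves) * np.cos(phase).sum(axis=1)

# Monte Carlo estimate of the two-point correlation along a ray from the origin
r = np.linspace(0.0, 15.0, 60)
pts = np.stack([r, np.zeros_like(r)], axis=1)
samples = np.array([random_wave(pts) for _ in range(2000)])
corr = (samples * samples[:, :1]).mean(axis=0)           # <u(0) u(r)>
print(np.max(np.abs(corr - j0(r))))                      # should be small (a few percent)
```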
in order to assess the relevance of the long-range correlations, we compare the monochromatic random wave ensemble with another gaussian ensemble of random functions which does not have long-range correlations and is characterized by the correlation function . for the latter ensemble the applicability of the critical (short-range) percolation picture is evident, since the sign of the random function is not significantly correlated for distances . figure [corrfu] shows the two spatial correlation functions, (solid) and (dotted), as a function of . we now briefly introduce the central object of this article; a detailed derivation will be given in section [section:diff]. consider a smooth, closed reference curve in the plane, which is parametrized by its arclength . the integral of the square of the amplitude of a random function along this curve is itself a random variable. it samples the function not only at a discrete set of points, but along a one-dimensional subset of the plane. it should therefore be well suited to detect the long-range correlations of the random field. now assume that has a nodal line very close to the given reference line. then will be small, in a sense which will be explained later. thus, by calculating the distribution of , its cumulants or moments, one obtains the relative importance of the given reference line. we will perform these computations for a circular reference line, both for the random wave ensemble and for the short-range ensemble defined above. we will study in particular the scaling properties of the cumulants of as functions of the radius (typical size) of the reference curve. we shall show that they obey a scaling law which distinguishes clearly between the short-range ensemble and the monochromatic random wave ensemble. to understand the significance of these cumulants, we shall consider an _approximate_ expression for the probability that a nodal line is found inside a strip of width about the reference line. we shall show that this function, when expanded in powers of , generates the cumulants. its scaling properties with the size parameters, however, are less sensitive to the correlations assumed for the underlying random functions model.
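the random variable just introduced, the line integral of the squared amplitude along the reference curve, is easy to estimate numerically. the sketch below discretizes a circular reference curve, accumulates the integral with a riemann sum for many independent realizations of the random wave, and prints the first few empirical cumulants; the radius, the discretization, the sample size, and the use of scipy's k-statistics are arbitrary choices of this sketch.

```python
import numpy as np
from scipy.stats import kstat   # unbiased cumulant estimators k_1, k_2, k_3

rng = np.random.default_rng(2)

def random_wave(points, k=1.0, n_waves=256):
    """One realization of an isotropic monochromatic random wave (same construction as above)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    kvec = k * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return np.sqrt(2.0 / n_waves) * np.cos(points @ kvec.T + phi).sum(axis=1)

def circle_points(radius, n_points=400):
    s = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    pts = radius * np.stack([np.cos(s), np.sin(s)], axis=1)
    return pts, 2.0 * np.pi * radius / n_points

radius = 8.0                         # in units of the inverse wavenumber 1/k
pts, ds = circle_points(radius)
f_values = np.array([(random_wave(pts) ** 2).sum() * ds for _ in range(3000)])
for order in (1, 2, 3):              # cumulants of the line integral; their growth with radius is studied below
    print(order, kstat(f_values, order))
```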
strictly speaking, the function we compute is a measure of the intensity of fluctuations of the field along the line , which are certainly small when a nodal line approximates the reference line , but can also be small if different nodal lines which just avoid crossing , are within from the reference line .since near avoided crossings have low probability ( see ) we believe that the function we compute is closely related to the true probability .the rest of the paper is organized in the following way .the next section describes in detail the new concept which we introduce to the morphological study of nodal lines , that is , the density of line shapes .once this is done , a formal expression for the density , expressed in terms of is provided , and computed explicitly for particular shapes circles ( section [ section : circles ] ) within a reasonable and calculable approximation .these densities are evaluated for random waves , and for the short - range ensemble .we consider two - dimensional , gaussian random fields , and a prescribed ( closed ) reference line .we shall propose a proper definition of the density of nodal lines which match the reference line in a random field ( or equivalently , the probability that a nodal line with a prescribed form shows up in a guassian random field ) . compared to problems , where the density of ( critical , nodal ) points of a gaussian field is calculated ,we enter here a new dimension and consider the density of one - dimensional strings instead of zero - dimensional , point - like objects . in order to obtain a well - defined and finite theory, we have to regularize the theory by dilating the reference curve to a thin tube with constant thickness and compute the probability , that a nodal line is completely inside this tube see figure ( [ almostcircle ] ) .assume now , that a function has a nodal line close to a reference curve , where denotes the arclength .the normal distance of the nodal line from the reference curve can be obtained via linearization yielding the unit vector is normal to the curve . denotes the corresponding normal derivative .the probability , that a nodal line lies in a _tube is , although well defined , not accessible by analytical means . at this pointwe must resort to further approximations , which will eventually lead us to a tractable model , at the cost of losing the rigour of the original object defined above . as a first step we replace the box shaped cross section by a smooth gaussian and consider instead the expectation value where is the line integral along the reference line , and ..,scaledwidth=70.0% ] however , even the computation of this quantity poses unsurmountable difficulties . is a ratio of two ( in general non - independent ) gaussian variables , which is itself non - gaussian . 
in order to obtain a tractable expression we approximate the integral by a mean - field type expression where the latter step requires isotropy of the distribution of the random field .the final approximation for the shape probability now reads or where is an integral operator with ( symmetric ) kernel and are the corresponding eigenvalues .the operator is the correlation function of the field , restricted to the given curve .it is positive semi - definite and has a _finite _ trace , thus its eigenvalues have an accumulation point at zero .the final expression for the logarithm of probability ( [ finall ] ) is the starting point of our investigation .it should reflect the relevant features of the inaccessible hard - tube probability , and is an interesting object in its own right .it takes into consideration the random field along the whole reference curve .we remark here again that the final approximation for the probability only tests whether is small along the given curve it is not able to resolve nearly avoided intersections which are placed next to the reference curve . is the generating function for the cumulants of the random variable where the expansion parameter is .it is _ also _ the generating function of the traces of powers of the operator . in fact , expanding in terms of , i.e. for large , one finds comparing the two expansions , we see that as mentioned in the introduction our goal is to compare two different gaussian random fields in two dimensions with correlation functions , namely is the correlation function of the monochromatic random wave ensemble with a sharply defined energy . consequently , it displays long - range correlations . is a typical short - range ensemble .note that the are normalized such that this implies an equal nodal line density for the long , and short - range ensemble .we consider now the approximate probability ( [ final ] ) for circles with radius .the kernel of the operator reads for the monochromatic random wave ensemble with correlation function where are angles describing positions on the circle .owing to the rotational invariance of the problem , the eigenfunctions of are .the eigenvalues of the integral operator are therefore the eigenvalues for the short - range ensemble read figure [ spectrum1 ] shows the eigenvalues of for a circle with radius .the spectrum for the random waves has strong fluctuations , whereas the spectrum for the short - range ensemble is a smooth ( almost ) gaussian .it was mentioned before , that the trace obeys for both the short- and the long - range ensemble . as a function of the order for .shown is the random wave case ( symbol ) , and the short - range case ( points are connected to a dotted line).,scaledwidth=90.0% ]we calculate now and its large- expansion for large radii . in the region for large the bessel functionsare well approximated by ( see ) by setting , we obtain in the transition region , we approximate the bessel function in terms of an airy function ai ( see ) we can combine both asymptotic expansions into a scaling law with a universal scaling function shows , that the scaling functions collapse well for three different values of . note that for negative arguments the scaling function is strongly fluctuating . in the subsequent applications , and its powers will be integrated over , and for this purpose , for can be considered as a stochastic function . vanishes exponentially for , , and for . for ( red ) , ( green ) , and for ( blue ) .] 
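the spectrum of the integral operator discussed above can also be obtained without the bessel-function asymptotics: discretize the circle, evaluate the two-point correlation function between all pairs of points to build the symmetric kernel matrix, and diagonalize it. the sketch below does this for the monochromatic correlation j_0(k|r-r'|) and, as a stand-in for the short-range ensemble whose exact correlation function is elided in the text, a gaussian kernel chosen only for illustration; traces of powers of the operator, which generate the cumulants, follow directly from the eigenvalues.

```python
import numpy as np
from scipy.special import j0

def circle_kernel(radius, corr, n_points=512):
    """Discretized correlation operator restricted to a circle of the given radius.

    corr: two-point correlation as a function of euclidean distance.
    The quadrature weight ds turns the kernel into a matrix whose eigenvalues
    approximate those of the integral operator.
    """
    s = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    pts = radius * np.stack([np.cos(s), np.sin(s)], axis=1)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    ds = 2.0 * np.pi * radius / n_points
    return corr(dist) * ds

radius = 10.0                                            # in units of 1/k
long_range = circle_kernel(radius, j0)                   # monochromatic random waves
short_range = circle_kernel(radius, lambda d: np.exp(-d ** 2))   # assumed short-range stand-in

for name, kernel in (("random waves", long_range), ("short range", short_range)):
    eigvals = np.linalg.eigvalsh(kernel)
    traces = [np.sum(eigvals ** n) for n in (1, 2, 3)]   # proportional to the first cumulants
    print(name, ["%.3f" % t for t in traces])
```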
the eigenvalues of the operator scale according to the leading behaviour of the cumulant of which is proportional to the trace of the power of , as a function of the radius reads we find to leading order in this scaling behaviour is compared with the corresponding quantity for the short - range correlations , where for all .some remarks are in order .the cumulants show a typical critical behaviour for the random wave case . below the critical power , the large -scaling does not differ from the short - range case . at the critical power ,logarithmic deviations show up , and above , the cumulant displays an anomalous scaling in , different from the non - critical , short - range ensemble .now we return to the shape probability where is the dimensionless width of the tube around the ( here circular ) reference curve .the limit of _ small _ corresponds to the cumulant ( [ moment ] ) for as far as the scaling behaviour is concerned .therefore , can not be considered as a ` good ' quantity to distinguish between the long - range and the short - range cases for both ensembles , . there might be anomalous higher order corrections in the random wave case , which are not considered here .on the other hand , a large- expansion ( which means arbitrarily wide tubes ) yields the sequence of cumulants ( [ moment ] ) for integer which in fact have characteristic scaling properties for .figure [ cumulant ] shows a log log plot of the third cumulant ( ) as a function of the radius for .the slope is 1.34063 ( standard error ) , i.e. confirms the predicted exponent . as a function of the radius for the random wave case . ]in this paper we compared monochromatic gaussian random waves and a short - range ensemble of random fields by investigating the statistics of along a given reference curve .the order cumulants of this random variable obey non - trivial scaling laws with respect to the linear size of the reference curve ( here circles of radius ) in case of the long - range random waves .the second - order cumulant shows logarithmic deviations from the corresponding scaling behaviour of the short - range ensemble .the cumulants of order three and higher have non - trivial exponents .namely , these cumulants scale like , whereas the cumulants for the short - range ensemble scale .the probability that a nodal line lies in a circular tube of given thickness , however , turned out to be a less useful candidate to probe the long - range properties of the random functions .the logarithm of the shape probability for the short- and the long - range case scale in exactly the same manner , which might explain the success of the ad hoc model .we would like to thank n sondergaard , m dennis , j hannay and b gutkin for many useful discussions .this work was supported by the einstein center and the minerva center for non - linear physics at the weizmann institute , the israel science foundation , and the eu research training network ` mathematical aspects of quantum chaos ' .
in this paper we investigate the properties of nodal structures in random wave fields , and in particular we scrutinize their recently proposed connection with short - range percolation models . we propose a measure which shows the difference between monochromatic random waves , which are characterized by long - range correlations , and gaussian fields with short - range correlations , which are naturally assumed to be better modelled by percolation theory . we also study the relevance of the quantities which we compute to the probability that nodal lines are in the vicinity of a given reference line .
in general , the understanding of a given phenomenon relies on our ability to construct a model that describes the relevant data and their corresponding uncertainties . one way to summarize what the data tell us abouta model is to find a probability density function for its parameters .for such a task , standard fitting techniques such as minimization are commonly used to determine this probability density .typically , this process is iterative : once new data are available , a new fit is performed combining the old and new data. we shall refer to this procedure as a _ global fit_. in some cases , the complexity of the model is such that its numerical evaluation makes the fitting procedure time consuming . for practical reasons, it would be desirable to update the probability density by incorporating the information from new data without having to perform a full global fit .such updating can be achieved by a statistical inference procedure , based on bayes theorem , known as the reweighting technique .a particular example , where the reweighting technique is useful , is in the context of global fits for the determination of parton distribution functions ( pdfs ) .modeling and fitting these functions has been the central task of several collaborations , e.g. , cteq , cj , mstw , and nnpdf , among others .but still , there are kinematic regions where the pdfs are relatively unconstrained . given the complexity of the calculations , it is desirable to use the reweighting technique to update our knowledge of the pdfs or to quantify the potential impact of anticipated data sets on the pdfsthe idea of reweighting pdfs was originally proposed in and later discussed by the nnpdf collaboration in . however , there is disagreement about the reweighting procedure , which has led to methods that differ mathematically .the purpose of this paper is to discuss the differences between the reweighting methods . in particular , we investigate the degree to which the reweighting procedures yield results that are consistent with those from global fits .we shall argue that this is the case for the method proposed in .the paper is organized as follows . in sec .[ sec : the reweighting method ] , we describe the basics of the reweighting technique . in sec .[ sec : the nnpdf paradox ] , we will discuss subtleties in the nnpdf arguments . in sec .[ sec : numerical example ] , we will present a simple numerical example to display the differences between the reweighting methods .our conclusions are given in sec .[ sec : conclusions ] .the reweighting of probability densities in order to incorporate the information from new data is merely the recursive application of bayes theorem .suppose a probability density function ( pdf ) of the parameters in a model is known .( to avoid confusion , we shall take pdf " to mean parton distribution function , and pdf " to mean probability density function . ) given new data , bayes theorem states that where , known as _ posterior _ density , is the updated pdf from the _ prior _ density ( or prior for short ) , which can serve as the prior in a subsequent analysis . the quantity called the _ likelihood _ function , represents the conditional probability for a data set given the parameters of the model .the quantity ensures the normalization of the posterior density . 
with the new data, the expectation value of an observable can be written as

\begin{aligned}
\mathrm{E}[\mathcal{O}] &= \int d^n\alpha\, \mathcal{P}(\vec{\alpha}|D)\, \mathcal{O}(\vec{\alpha}) \\
&= \int d^n\alpha\, \frac{\mathcal{P}(D|\vec{\alpha})}{\mathcal{P}(D)}\, \mathcal{P}(\vec{\alpha})\, \mathcal{O}(\vec{\alpha}) \\
&= \frac{1}{N} \sum_k w_k\, \mathcal{O}(\vec{\alpha}_k). \label{eq:e}
\end{aligned}

in the last line, we have used a monte carlo approximation of the integral in which the parameters are distributed according to the prior. similarly, the variance is given by

\begin{aligned}
\mathrm{Var}[\mathcal{O}] &= \frac{1}{N} \sum_k w_k \left(\mathcal{O}(\vec{\alpha}_k) - \mathrm{E}[\mathcal{O}]\right)^2. \label{eq:var}
\end{aligned}

the quantities $w_k$ are _weights_ that are proportional to the likelihood $\mathcal{P}(D|\vec{\alpha}_k)$; their normalization is fixed by demanding $\frac{1}{N}\sum_k w_k = 1$.

for each set we compute the expectation value and variance using eqs. ([eq:e]) and ([eq:var]) with the weights from eq. ([eq:bayes1]) or eq. ([eq:bayes2]). the results are shown in fig. [fig:example5a], where a clear disagreement between the two reweighting methods is exhibited. the variances obtained by using the likelihood are greater than the variances obtained from the likelihood , and the convergence of the expectation values is much faster for the latter case. this is consistent with the discussion in section [sec:the reweighting method], where we argued that the posterior contains less information than . more importantly, reweighting with the likelihood yields a result that is more compatible with that obtained from the global fits than is that obtained using the likelihood . this is illustrated by the dotted and dashed curves being nearly identical, while the solid and dashed curves show significant differences. in the light of the above, it is important to discuss why the nnpdf collaboration has obtained reweighting results compatible with global fits in even when they have used the likelihood instead of . in their case, their prior corresponds to pdfs fitted using deep inelastic scattering (dis) data and lepton pair production (lpp) data. by performing the reweighting and comparing it with a new global fit using the w-lepton asymmetry data, they have proven the consistency of their reweighting method. however, it is also known that pdfs are already reasonably well constrained by the dis and lpp data. this means that the information provided by the w-lepton data is sub-dominant with respect to the dis and lpp data. we have performed a similar exercise as before, but this time using the monte carlo sample as the _prior_ and the data set as the new evidence. this setup aims to mimic the conditions under which nnpdf studied the reweighting technique: the data set contains more data than , and therefore the effects of including the latter must be sub-dominant for a global fit as well as for the reweighting. the results are shown in fig. [fig:example5b]. it is clear that in this situation both reweighting methods yield similar results compatible with global fits. one way to quantify the information about the parameters provided by the likelihood , where is either or , is to calculate the kullback-leibler (kl) divergence (see appendix [sec:kullbakc]). table [tab:kl] shows the kl divergences for the reweighting results performed above. the values in the table confirm the loss of information when using as the likelihood instead of in the reweighting procedure.
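the reweighting procedure above is easy to prototype. the sketch below assumes a prior monte carlo sample of parameters together with the chi-squared of each sample point against the new data, and forms the two weight definitions discussed in this context: weights proportional to exp(-chi^2/2) and weights proportional to (chi^2)^((n-1)/2) exp(-chi^2/2), with n the number of new data points; which of these corresponds to eq. ([eq:bayes1]) and which to eq. ([eq:bayes2]) is left open here, and the toy model and numbers are our own. it also reports the effective number of replicas in the form commonly used in the literature, exp[(1/N) sum_k w_k ln(N/w_k)], which is an assumption since the exact formula is elided in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def reweight(chi2, ndata, mode="exp"):
    """Reweighting weights from chi^2 values of a prior Monte Carlo sample.

    mode="exp"   : w_k proportional to exp(-chi2/2)
    mode="chi2n" : w_k proportional to chi2^((ndata-1)/2) * exp(-chi2/2)
    Weights are normalized so that mean(w) = 1, as demanded in the text.
    """
    log_w = -0.5 * chi2
    if mode == "chi2n":
        log_w += 0.5 * (ndata - 1) * np.log(chi2)
    log_w -= log_w.max()                 # guard against overflow
    w = np.exp(log_w)
    return w * len(w) / w.sum()

def weighted_moments(w, obs):
    mean = np.mean(w * obs)
    var = np.mean(w * (obs - mean) ** 2)
    return mean, var

def n_effective(w):
    """Effective number of replicas, exp[(1/N) sum w_k ln(N / w_k)] (assumed form)."""
    n = len(w)
    mask = w > 0
    return np.exp(np.sum(w[mask] * np.log(n / w[mask])) / n)

# toy example: prior sample of a single parameter, gaussian pseudo-data for the new set
alpha = rng.normal(0.0, 1.0, size=20000)          # prior Monte Carlo sample
ndata = 10
data = rng.normal(0.5, 1.0, size=ndata)           # "new" data generated around 0.5
chi2 = ((data[None, :] - alpha[:, None]) ** 2).sum(axis=1)

for mode in ("exp", "chi2n"):
    w = reweight(chi2, ndata, mode)
    mean, var = weighted_moments(w, alpha)
    print(mode, "E=%.3f" % mean, "Var=%.4f" % var, "N_eff=%.0f" % n_effective(w))
```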
[ cols="^,^,^,^,^,^,^,^",options="header " , ] ( black ) from which the prior distribution is obtained and the new evidence ( colored ) that is used for reweighting or appended to to perform a global fit .the data is normalized respect to the `` true '' model .columns 2,3,4 shows expectation values and variances from global fits and reweighting . dashed lines are the results from global fits .black dashed uses only the data while the colored dashed line includes the new evidences .solid and dotted lines are reweighting results of data set using the evidences of data .dotted uses while solid uses ., scaledwidth=85.0% ] . in this case ,set is used to obtain the prior distribution and set is used for reweighting or appended to for a global fit ., scaledwidth=85.0% ]the technique of statistical inference is a useful tool to constrain probability density functions in the presence of new evidence .it is an alternative method to obtain updated distributions without having to perform a global fit by appending the old data and the new data .the nnpdf collaboration has argued that the method proposed in is not adequate and they proposed their own method . in the light of the results presented in this paper ,we conclude that both methods are statistically equivalent in the limit when the prior densities are well constrained by the data and the new evidence do not provide significant information .we have shown using a numerical example that , if the uncertainties in the prior distribution are larger compared to the uncertainties obtained by the inclusion of new data , the method proposed by nnpdf collaboration is less efficient than the method proposed by and the latter yields results that are significantly closer to those obtained from global fits .we thank seth quackenbush for helpful discussions on the subject .this work was supported by doe contract no .de - sc0010102 .the distribution can be obtained by integrating subjected to .mathematically this is simply \,p(y|\vec{\alpha})\,d^ny\notag\\ = & \frac{1}{2\pi i } \int_{-\infty}^\infty d ( i\omega ) \,e^{i\omega \chi^2 } \int e^{-i \omega \chi^2(\vec{y},\vec{t } ) } \,p(y|\vec{\alpha } ) \,d^n y,\notag\\ = & \frac{1}{2\pi i } \int_{-\infty}^\infty d ( i\omega ) \ , e^{i\omega \chi^2 } \int \frac{1}{(2\pi)^{n/2}|\sigma|^{1/2 } } e^{-\frac{1}{2 } ( 2 i \omega + 1 ) \chi^2(y,\vec{\alpha } ) } \,d^n y,\nonumber\\ = & \frac{1}{2^{n/2 } } \frac{1}{2\pi i } \int_{-\infty}^\infty d ( i\omega ) \ , e^{i\omega \chi^2 } \frac{1}{(i \omega + 1/2)^{n/2 } } , \nonumber\\ = & \frac{1}{2^{n/2 } \ , \gamma(n/2 ) } ( \chi^2)^{\frac{1}{2}(n - 2 ) } e^{-\frac{1}{2}\chi^2}.\end{aligned}\ ] ] then we obtain \,\mathcal{p}(\tilde{\chi}^2|\vec{\alpha})\notag\\ = & \frac{1}{2^{n/2 - 1 } \ , \gamma(n/2 ) } ( \chi^2)^{\frac{1}{2}(n - 1 ) } e^{-\frac{1}{2}\chi^2}.\end{aligned}\ ] ]for completeness in this appendix we present the standard hessian method for error propagation .suppose the model parameters that minimizes the is found .the method consists of expanding the around the minima as a function of the parameters : where is the hessian matrix given by that is evaluated at .next we diagonalize the matrix which gives eigenvectors with eigenvalues .the displacements in eq . 
([ eq : hessian ] ) can be written in terms of rescaled vectors replacing eq .( [ eq : disp ] ) in eq .( [ eq : hessian ] ) gives notice that each displacements ( ) corresponds in eq .( [ eq : chi2_z ] ) a change of unit .the interval defined by these displacements is known as the one - sigma confidence interval .the kullback - leibler ( kl ) divergence of the posterior density from the prior is given by where the weights are defined as in sec .[ sec : the reweighting method ]. the larger the kl divergence , the greater the difference between and and , therefore , the more informative are the data about the pdf parameters , relative to what was known about them prior to inclusion of these data .a similar quantity called _ effective _ number of replicas was defined in the references : here is the number of monte carlo sample ( _ replicas _ ) taken from the prior distribution . clearly the kl divergence is related to via
two different techniques for adding additional data sets to existing global fits using bayesian reweighting have been proposed in the literature . the derivation of each reweighting formalism is critically reviewed . a simple example is constructed that conclusively favors one of the two formalisms . the effects of this choice for global fits is discussed .
partitioning an image into superpixels can be used as a preprocessing step for complex computer vision tasks, such as segmentation, visual tracking, stereo matching, edge detection, etc. sophisticated algorithms benefit from working with superpixels, instead of just pixels, because superpixels reduce the number of input entries and enable feature computation on more meaningful regions. like many terminologies in computer vision, there is no rigorous mathematical definition for superpixel. the commonly accepted description of a superpixel is "a group of connected, perceptually homogeneous pixels which does not overlap any other superpixel." for superpixel segmentation, the following properties are generally desirable.

prop. 1 (accuracy). superpixels should adhere well to object boundaries. superpixels crossing object boundaries arbitrarily may lead to a bad or even catastrophic result for subsequent algorithms.

prop. 2 (regularity). the shape of superpixels should be regular. superpixels with regular shape make it easier to construct a graph for subsequent algorithms; moreover, such superpixels are visually pleasant, which is helpful for algorithm designers' analysis.

prop. 3 (similar size). superpixels should have a similar size. this property enables subsequent algorithms to deal with each superpixel without bias. as pixels have the same "size" and the term "superpixel" originates from "pixel", this property is also intuitively reasonable. it is a key property distinguishing superpixels from other over-segmented regions.

prop. 4 (efficiency). a superpixel algorithm should have low complexity. extracting superpixels efficiently is critical for real-time applications.

under the constraint of prop. 3, the requirements on accuracy and regularity are to a certain extent oppositional. intuitively, if a superpixel with a limited size needs to adhere well to object boundaries, the superpixel has to adjust its shape to that object, which may be irregular. to our best knowledge, state-of-the-art superpixel algorithms fail to find a compromise between regularity and accuracy. as the four typical algorithms shown in figs. [fig:vc5]-[fig:vc5] illustrate, the shape of superpixels generated by nc (fig. [fig:vc5]) and lrw (fig. [fig:vc5]) is more regular than that of superpixels extracted by seeds (fig. [fig:vc5]) and ers (fig. [fig:vc5]); nonetheless, the superpixels generated by seeds and ers adhere to object boundaries better than those of nc and lrw.
in this work , a gaussian mixture model ( gmm ) and an algorithm derived from the expectation - maximization ( em ) are built .it is shown that the proposed method can strike a balance between regularity and accuracy .an example is displayed in fig .[ fig : vc5 ] , the compromise is that superpixels at regions with complex textures have an irregular shape to adhere object boundaries , while at homogeneous regions , the superpixels are regular .computational efficiency is a matter of both algorithmic complexity and implementation .our algorithm has a linear complexity with respect to the number of pixels .as an algorithm has to read all pixels , linear time theoretically is the best time complexity for superpixel problem .algorithms can be categorized into two major groups : parallel algorithms that are able to be implemented with parallel techniques and scale for the number of parallel processing units , and serial algorithms whose implementations are usually executed sequentially and only part of the system resources can be used on a parallel computer .modern computer architectures are parallel and applications can benefit from parallel algorithms because parallel implementations generally run faster than serial implementations for the same algorithm .the proposed algorithm is inherently parallel and our serial implementation can easily achieve speedups by adding few simple openmp directives .our method is constructed by modelling each pixel with a gaussian mixture model ; associating each superpixel to one of the gaussian densities ; and further solving the proposed model with the expectation - maximization algorithm .differing from the commonly used assumption that data points are independent and identically distributed ( i.i.d . ) in clustering applications , pixels are assumed to be independent but non - identically distributed in our model .the proposed approach was tested on the berkeley segmentation data set and benchmarks 500 ( bsds500 ) . to the best of our knowledge ,the proposed method outperforms state - of - the - art methods in accuracy and presents a competitive performance in computational efficiency .the rest of this paper is organized as follows .section [ sec : related ] presents an overview of related works on superpixel segmentation .section [ sec : method ] introduces the model , solution , algorithm , parallel potential , parameters , and complexity of the proposed method .experiments are discussed in section [ sec : exp ] . finally , the paper is concluded in section [ sec : cons ] .+ + +the concept of superpixel was first introduced by xiaofeng ren and jitendra malik in 2003 . during the last decades , the superpixel problem has been well studied .exsiting superpixel algorithms extract superpixels either by optimizing superpixel boundaries , such as finding paths and evolving curves , or by grouping pixels , e.g. 
the most well - known slic .algorithms exctact superpixels not by labelling pixels directly but by marking superpixel boundaries , or by only updating the label of pixels on superpixel boundary is in this category .rohkohl et al .present a superpixel method that iteratively assigns superpixel boundaries to their most similar neighbouring superpixel .a superpixel is represented with a group of pixels that are randomly selected from that superpixel .the similarity between a pixel and a super - pixel is defined as the average similarities from the pixel to all the selected representatives .aiming to extract lattice - like superpixels , or `` superpixel lattices '' , partitions an image into superpixels by gradually adding horizontal and vertical paths in strips of a pre - computed boundary map .the paths are formed by two different methods : s - t min - cut and dynamic programming .the former finds paths by graph cuts and the latter constructs paths directly .the paths have been designed to avoid parallel paths crossing and guarantee perpendicular paths cross only once .the idea of modelling superpixel boundaries as paths ( or seam carving ) and the use of dynamic programming were borrowed by later variations or improvements . in turbopixels , levinshtein et al .model the boundary of each superpixel as a closed curve .so , the connectivity is naturally guaranteed .based on level - set evolution , the curves gradually sweep over the unlabelled pixels to form superpixels under the constraints of two velocities .although this method can produce superpixels with homogeneous size and shape , its accuracy is relative poor . in vcells , a superpixel is represented as a mean vector of colour of pixels in that superpixel . with the designed distance ,vcells iteratively updates superpixel boundaries to their nearest neighbouring superpixel .the iteration stops when there are no more pixels need to be updated .seeds exchanges superpixel boundaries using a hierarchical structure . at the first iteration ,the biggest blocks on superpixel boundary are updated for a better energy .the size of pixel blocks becomes smaller and smaller as the number of iterations increases .the iteration stops after the update of boundary exchanges in pixel level .improved from slic , present more complex energy . to minimize their corresponding energy , update boundary pixels instead of assigning a label for all pixels in each iteration .based on , adds the connectivity and superpixel size into their energy . for the pixel updating, uses a hierarchical structure like seeds , while exchanges labels only in pixel level .zhu et al .propose a speedup of slic by only moving unstable boundary pixels , the label of which changed in the previous iteration . besides , based on pre - computed line segments or edge maps of the input image , align superpixel boundaries to the lines or the edges to form superpixels with very regular shape .superpixels algorithms that assign labels for all pixels in each iteration is in this category . with an affinity matrix constructed based on boundary cue ,the algorithm developed in , which is usually abbreviated as nc , uses normalized cut to extract superpixels .this method produces very regular superpixels , while its time complexity is approximately , which is expensive as a preprocessing step , where is the number of pixels . in quick shift ( qs ) , the pixel density is estimated on a parzen window with a gaussian kernel . 
a pixel is assigned to the same group with its parent which is the nearest pixel with a greater density and within a specified distance .qs does not guarantee connectivity , or in other words , pixels with the same label may not be connected .veksler et al .propose an approach that distributes a number of overlapping square patches on the input image and extracts superpixels by finding a label for each pixel from patches that cover the present pixel .the expansion algorithm in is gradually adapted to modify pixel label within local regions with a fixed size in each iteration .this method can generate superpixels with regular shape and its run - time is proportional to the number of overlapping patches .a similar solution in is to formulate the superpixel problem as a two - label problem and build an algorithm through grouping pixels into vertical and horizontal bands . by doing this , pixels in the same vertical and horizontal group form a superpixel .starting from an empty graph edge set , ers sequentially adds edges to the set until the desired number of superpixels is reached . at each adding ,ers takes the edge that results in the greatest increase of an objective function .the number of generated superpixels is exactly equal to the desired number .this method adheres object boundary well and its performance in accuracy was not surpassed until our method is proposed .slic is the most well - known superpixel algorithm due to its efficiency and simplicity . in slic , a pixel corresponds to a five dimensional vector including colour and spatial location , and -means is employed to cluster those vectors locally , i.e. each pixel only compares with superpixels that fall into a specified spatial distance and is assigned to the nearest superpixel .many variations follow the idea of slic in order to either decrease its run - time or improve its accuracy .lsc also uses a -means method to refine superpixels . instead of directly using the 5d vector used in slic , lsc maps them to a feature space and a weighted -means is adopted to extract superpixels .it is the most recent algorithm that achieves equally well accuracy with ers .based on marker - based watershed transform , incorporate spatial constraints to an image gradient in order to produce superpixels with regular shape and similar size .generally , those methods run relatively faster , but adhere ground - truth boundaries badly .lrw groups pixels using an improved random walk algorithm . by using texture features to optimize an initial superpixel map, this method can produce regular superpixels in regions with complex texture .however , this method suffers from a very slow speed .although fh , mean shift and watersheds , have been refered to as `` superpixel '' alogrithms in the literature , they are not covered in this paper as the size of the regions produced by them varies enormously .this is mainly because these algorithms do not offer direct control to the size of the segmented regions .structure - sensitive or content - sensitive superpixels in are also not considered to be superpixels , as they do not aim to extract regions with similar size ( see prop . 3 in section [ sec : intro ] ) .a large number of superpixel algorithms have been proposed , however , few works present novel models and most of the exsiting energy functions are variation of the objective function of -means . in our work, we propose a novel model to tackle the superpixel problem . 
with a comprehensively designed algorithm , the underlying segmentation from the modelis well revealed .the proposed method can be described by two steps : the first one is to introduce the proposed new model , in which pixel and superpixel are associated with each other ; after that , an algorithm is constructed to solve this model in the second step .the complexity of the proposed algorithm is presented at the end of this section . in the proposed model , supposed to be the pixel index of an input image with its width and height in pixels .the total number of pixels in can be denoted as . for each pixel , which belongs to one of the integers in the image pixel set , represents its position on the image plane , where , , and . is used to represent its intensity or colour .if colour image is used , is a vector , otherwise , is a scalar . to better represent pixel , a random variable along with its observed value is used .note that here the random variables , for all , are independent but non - identically distributed as discussed below .the width and height of each superpixel should be specified by user .if the desired number of superpixels is preferred , we obtain and using equation . is encouraged to use the same value for and , or they should not have a big difference as we wish the generated superpixels are with square shape .once and are obtained , the numbers of superpixels and respectively along the width and the height of are defined using equation . for simplicity of discussion , we assume that and . therefore , the initial number of superpixels becomes . for each individual pixel , there are two initial superpixel numbers , and , which are defined in equation . based on equation and , it can be inferred that , and . is used to denote the random latent variable for pixel .the possible values of are in a set expressed in equation . where , , and and are positive integers , such as .obviously , is a subset of .we assume that a pixel is generated by first randomly choosing one of the gaussian densities with the same probability , and then being sampled on the selected gaussian distribution . with the definitions and notations above , pixel described by a mixture of gaussians as defined in equation . where is manually set as , .although this setting results in a fact that may not equal , its effect will be removed in our algorithm due to the same value for the prior distribution of . for a given , is a gaussian density function parametrized by a mean vector , , and a covariance matrix , as shown in equation . where and is the number of components in .given an image , our model is defined as maximizing equation , which is extended from logarithmic likelihood function used in many statistic estimation problems . in the above equation , is used to denote the parameters in the gaussian densities , where .the label of pixel is determined by the posterior distribution of as shown below . the posterior probability of can be expressed as therefore , once we find a solution to maximize , can be easily obtained .as is constant , we will use to represent it in the following text . according to jensen s inequality, is greater than or equal to as shown below . where , , and .we use the expectation - maximization ( em ) algorithm to iteratively maximize to approach the maximum of with two steps : the expectation step ( e - step ) and the maximization step ( m - step ) ._ e - step _ : once a guess of is given , is expected to be tightly attached to . 
to this end, is required to ensure .equation is a sufficient condition for jensen s inequality to hold the equality . where is a constant number .since , can be eliminated and can be updated by equation . notice that equation is exactly the same with equation .therefore , equation can be rewrote as _ m - step _ : in this step , is derived by maximizing with a given . to do this ,we first get the derivatives of with respect to and , and set the derivatives to zero , as seen in equations -. then the parameters are obtained by solving equation . where is a subset of .the update of will monotonically improve : . in this section, we will discuss the choice of covariance matrices and tricks to make the algorithm running well in practice .it can be noted that although the solution in section [ sec : solution ] supports full covariance matrices , i.e. , a covariance matrix with all its elements as shown in equation , only block diagonal matrices are used in this paper ( see equation ) .this is done for three reasons .first , computing block diagonal matrices is more efficient than full matrices .second , generally there is no strong relation between the spatial coordinates and the intensity or the colour .so it is reasonable to consider them separately .third , full matrices will not bring better performance but give bad results for colour images . for different colour space ,it is encouraged to split components that do not have strong relation into different covariance matrices .for example , if cielab is adopted , it is better to put colour - opponent dimensions and into a 2 by 2 covariance matrix . in this case , will become .however , we will keep using to discuss our algorithm for simplicity . where and respectively represent the spatial covariance matrix and the colour covariance matrix .the covariance matrices are updated according to equations and which are derived by replacing in equation with the block diagonal matrices , and by further solving . where and are the spatial components of and , and and are , for grayscale image , the intensity component , or , for colour image , the colour component of and .since and are positive semi - definite in practice , they may be not invertible sometimes . to avoid this trouble , we first compute the eigendecomposition of the two covariance matrices as shown in equations and , then eigenvalues on the major diagonal of and modified using equations and , and finally and are reconstructed with the equations and . where and are diagonal matrices with eigenvalues on their respective major diagonal . and for colour image are used to denote the respective eigenvalues , for and . and are orthogonal matrices . if the input image is grayscale , , and are scalars , and will be reduced to 0 . where and are two constant numbers . where and are diagonal matrices with and on their respective major diagonal .after initializing , equations , , , and are iterated until convergence .once the iteration stops , the superpixel label can be obtained using equation . 
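the e-step and m-step derived above translate into a fairly short program. the sketch below is only an illustrative stand-in for the authors' algorithm: it builds a (y, x, l, a, b) feature vector per pixel, restricts each pixel to the gaussians of spatially nearby superpixels, uses a hard assignment to the most probable nearby gaussian instead of the full soft responsibilities, and re-estimates block-diagonal means and covariances from the assigned pixels. the neighbourhood radius, the covariance floor, and all names are assumptions of this sketch.

```python
import numpy as np

def init_superpixels(h, w, step):
    """Seed one gaussian per superpixel on a regular grid; returns centers (K, 2)."""
    ys = np.arange(step // 2, h, step)
    xs = np.arange(step // 2, w, step)
    return np.array([(y, x) for y in ys for x in xs], dtype=float)

def em_superpixels(image_lab, step=20, n_iter=10, radius_factor=2.0, eig_floor=1e-2):
    """Illustrative hard-assignment EM for GMM-based superpixels (not the paper's exact algorithm)."""
    h, w, _ = image_lab.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.concatenate([yy[..., None], xx[..., None], image_lab], axis=2).reshape(-1, 5)
    centers = init_superpixels(h, w, step)
    K = len(centers)
    means = np.zeros((K, 5))
    means[:, :2] = centers
    for k in range(K):
        cy, cx = centers[k].astype(int)
        means[k, 2:] = image_lab[min(cy, h - 1), min(cx, w - 1)]
    covs = np.tile(np.diag([step ** 2, step ** 2, 100.0, 100.0, 100.0]), (K, 1, 1))

    radius = radius_factor * step
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(n_iter):
        # e-step (hard version): assign each pixel to its most probable nearby gaussian
        best_logp = np.full(len(feats), -np.inf)
        for k in range(K):
            d2 = ((feats[:, :2] - means[k, :2]) ** 2).sum(axis=1)
            local = d2 < radius ** 2
            if not np.any(local):
                continue
            diff = feats[local] - means[k]
            inv = np.linalg.inv(covs[k])
            logp = -0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff) \
                   - 0.5 * np.log(np.linalg.det(covs[k]))
            take = logp > best_logp[local]
            idx = np.flatnonzero(local)[take]
            best_logp[idx] = logp[take]
            labels[idx] = k
        # m-step: re-estimate each gaussian from the pixels currently assigned to it
        for k in range(K):
            pts = feats[labels == k]
            if len(pts) < 5:
                continue
            means[k] = pts.mean(axis=0)
            c = np.cov(pts.T) + eig_floor * np.eye(5)   # crude regularization
            c[:2, 2:] = 0.0                             # keep the block-diagonal form
            c[2:, :2] = 0.0                             # (spatial block and colour block)
            covs[k] = c
    return labels.reshape(h, w)

if __name__ == "__main__":
    img = np.random.rand(60, 80, 3)      # stand-in for a CIELAB image
    seg = em_superpixels(img, step=20, n_iter=3)
    print(seg.shape, len(np.unique(seg)))
```

in the authors' formulation the overlap of neighbouring superpixels and the update formulas are controlled more carefully; the sketch only conveys the overall structure of the iteration.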
as the connectivity of superpixelscan not be guaranteed , a postprocessing step is required .this is done by sorting the isolated superpixels in ascending order according to their sizes , and sequentially merging small isolated superpixels , which are less than one fourth of the desired superpixel size , to their nearest neighbouring superpixels , with only intensity or colour being taken into account .once an isolated superpixel ( source ) is merged to another superpixel ( destination ) , the size of the source superpixel is cleared to zero , and the size of the destination superpixel will be updated by adding the size of the source superpixel .this size updating trick will prevent the size of the produced superpixels from significantly varying . as a preprocessing step, superpixel algorithm should run as fast as possible .since in slic and lsc , iterating a certain number of times is sufficient for most images without checking convergence , we borrow this trick to our algorithm and set the number of iterations as a parameter .the proposed algorithm can be summarized in algorithm 1 . and , or , initialize , , using seed pixels over the input image uniformly at fixed horizontal and vertical intervals and .initialize and .compute and using equation .calculate using equation , set .compute using equations .compute and using equations and .update using equation . . is determined by equation .merge small superpixels to their nearest neighbour . as the frequency of a single processor is difficult to improve ,modern processors are designed using parallel architecture .if an algorithm is able to be implemented with parallel techniques and scales for the number of parallel processing units , its computational efficiency will be significantly improved .fortunately , the most expensive part of our algorithm , namely the computing and , can be parallelly executed .each is computed independently , and so do and . in our experiments, we will show that our implementation is easy to get speedup on multi - core cpus .in addition to the parameters ( i.e. and , or ) left to users , , , , , , and the initialization of should be assigned before starting the proposed algorithm . and control the size of overlapping region of neighbouring superpixels .we set them to for all the results in this paper .if we use a large or , the run - time will increase a lot but the results will not present a satisfactory improvement in accuracy . in general , larger will give better performance but , again , it will sacrifice the efficiency .we have found that is enough for most images . in most state - of - the - art algorithms , the size of overlapping regionis not provided as parameters .we make them free to users so that they can customize their own algorithm . unlike , and , different , , and initialization of will not change the run - time but give a different performance in accuracy .although and are originally used to prevent the covariance matrices from being singular , they also can weigh the relative importance between spatial proximity and colour similarity .for instance , a larger produces more regular superpixels , and the opposite is true for a smaller .as and are opposite to each other , we set and leave for detailed description in section [ sec : exp ] . as we hope superpixels being local or regularly positioned on the image plane , are initialized regularly as already presented in algorithm 1 . 
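the postprocessing step of algorithm 1 described above (merging isolated superpixels smaller than a quarter of the desired size into their nearest neighbour in colour, while accumulating sizes) can be sketched as follows; the connected-component labelling from scipy and the 4-connectivity choice are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import label as cc_label

def merge_small_components(seg, image_lab, desired_size, min_fraction=0.25):
    """Merge small isolated superpixels into their most similar neighbour in colour."""
    # split every superpixel label into its connected components
    comp = np.zeros_like(seg)
    n_comp = 0
    for s in np.unique(seg):
        lab, n = cc_label(seg == s)
        comp[lab > 0] = lab[lab > 0] + n_comp
        n_comp += n
    sizes = np.bincount(comp.ravel(), minlength=n_comp + 1)
    mean_col = np.zeros((n_comp + 1, image_lab.shape[2]))
    for c in range(1, n_comp + 1):
        mean_col[c] = image_lab[comp == c].mean(axis=0)

    # process isolated components in ascending order of size
    for c in np.argsort(sizes):
        if c == 0 or sizes[c] == 0 or sizes[c] >= min_fraction * desired_size:
            continue
        mask = comp == c
        grown = np.zeros_like(mask)                      # 4-connected neighbourhood
        grown[:-1] |= mask[1:]; grown[1:] |= mask[:-1]
        grown[:, :-1] |= mask[:, 1:]; grown[:, 1:] |= mask[:, :-1]
        neighbours = np.setdiff1d(np.unique(comp[grown & ~mask]), [0, c])
        if len(neighbours) == 0:
            continue
        dist = np.linalg.norm(mean_col[neighbours] - mean_col[c], axis=1)
        target = neighbours[np.argmin(dist)]
        comp[mask] = target
        sizes[target] += sizes[c]        # size bookkeeping as described in the text
        sizes[c] = 0
    return comp

if __name__ == "__main__":
    seg = np.random.randint(0, 4, size=(40, 40))
    img = np.random.rand(40, 40, 3)
    print(len(np.unique(merge_small_components(seg, img, desired_size=400))))
```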
for , we set their main diagonal to , and others to zero , so that neighbouring superpixels can be well overlapped at the beginning .the initialization of is not very straightforward , the basic idea is to set their main diagonal with a small colour distance with which two pixels are perceptually uniform .the effect of different initialization of will be discussed in section [ sec : exp ] .the updating of has a complexity of , for .according to equation , , in which , , and are constant numbers in our algorithm .therefore , the complexity of updating is .based on equations , , and , the complexity of updating is , for .according to equations and , we have . as a result , , which means .therefore , the updating of gaussian parameters has a complexity of . in the worst case ,the sorting procedure in the postprocessing step requires operations , where is the number of isolated superpixels .the merging step needs operations , where is the number of small isolated superpixels and represents the average number of their adjacent neighbours . in practice , , the operations required for the postprocessing step can be ignored .therefore , the proposed superpixel algorithm is of a linear complexity .in this section , algorithms are evaluated in terms of accuracy , computational efficiency , and visual effects . like many state - of - the - art superpixel algorithms, we also use cielab colour space for our experiments because it is perceptually uniform for small colour distance ._ accuracy _ : three commonly used metrics are adopted : boundary recall ( br ) , under - segmentation error ( ue ) , and achievable segmentation accuracy ( asa ) . to assess the performance of the selected algorithms , experiments are conducted on the berkeley segmentation data set and benchmarks 500 ( bsds500 ) which is an extension of bsds300 .these two data sets have been wildly used in superpixel algorithms .bsds500 contains 500 images , and each one of them has the size of 481 or 321 with at least four ground - truth human annotations . 1 .br measures the percentage of ground - truth boundaries correctly recovered by the superpixel boundary pixels .a true boundary pixel is considered to be correctly recovered if it falls within two pixels from at least one superpixel boundary .a high br indicates that very few true boundaries are missed .2 . a superpixel should not cross ground - truth boundary , or , in other words , it should not cover more than one object . to quantify this notion ,ue calculates the percentage of superpixels that have pixels `` leak '' from their covered object as shown in equation . where and are pixel sets of superpixel and ground - truth segment . is generally accepted .3 . if we assign every superpixel with the label of a ground - truth segment into which the most pixels of the superpixel fall , how much segmentation accuracy can we achieve , or how many pixels are correctly segmented ?asa is designed to answer this question .its formula is defined in equation in which is the set of ground - truth segments . _ computational efficiency _ :execution time is used to quantify this property . as we have mentioned in section [ sec : para ] , the effect of and the initialization of is discussed in this section . are initialized to diagonal matrix with the same on their major diagonal . as shown in fig .[ fig : difflambda ] , there is no obvious regularity . 
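the three benchmark quantities defined above, boundary recall, under-segmentation error, and achievable segmentation accuracy, can be computed from a superpixel label map and a ground-truth segment map as sketched below. the 2-pixel tolerance for boundary recall follows the text; the under-segmentation variant implemented here (counting the pixels of each superpixel that leak outside its dominant ground-truth segment) is one common choice and not necessarily the exact formula whose equation is elided above.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundaries(labels):
    """Boolean map of label boundaries (pixels differing from a right or down neighbour)."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(sp, gt, tol=2):
    gt_b = boundaries(gt)
    sp_b_dil = binary_dilation(boundaries(sp), iterations=tol)
    return (gt_b & sp_b_dil).sum() / max(gt_b.sum(), 1)

def undersegmentation_error(sp, gt):
    leak = 0
    for s in np.unique(sp):
        _, counts = np.unique(gt[sp == s], return_counts=True)
        leak += counts.sum() - counts.max()      # pixels outside the dominant segment
    return leak / sp.size

def achievable_segmentation_accuracy(sp, gt):
    correct = 0
    for s in np.unique(sp):
        _, counts = np.unique(gt[sp == s], return_counts=True)
        correct += counts.max()                  # label each superpixel by majority vote
    return correct / sp.size

if __name__ == "__main__":
    gt = np.zeros((100, 100), dtype=int); gt[:, 50:] = 1
    sp = (np.arange(100)[:, None] // 10) * 10 + (np.arange(100)[None, :] // 10)
    print(boundary_recall(sp, gt), undersegmentation_error(sp, gt),
          achievable_segmentation_accuracy(sp, gt))
```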
as mentioned in section [sec:para], the effect of the regularization constant and of the initialization of the colour covariance matrices is discussed in this section. the colour covariance matrices are initialized to diagonal matrices with the same value on their main diagonal. as shown in fig. [fig:difflambda], there is no obvious regularity: the maximum difference between two curves is around 0.001 to 0.006, which is very small. although it seems that a small value leads to a better br result, this is not true for ue and asa; for instance, in the enlarged region of fig. [fig:lue], one setting is slightly better than the other. visual results with different values are plotted in fig. [fig:vlambda]; it is hard for a human observer to distinguish the five results. the other constant can be used to control the regularity of the generated superpixels in each iteration. as shown in fig. [fig:diffec], small changes of this constant do not produce obvious variations in ue and asa, but they do affect the br results. in general, a larger value leads to more regular superpixels; conversely, the shape of superpixels generated with a smaller value is relatively irregular (see fig. [fig:vdiffec]). because superpixels with irregular shapes produce more boundary pixels, the br obtained with a small value is better than that obtained with a greater one. we use one fixed setting of these two constants in the following experiments. although this setting does not give the best performance in accuracy, the shape of the superpixels it produces is regular and visually pleasant (see fig. [fig:vdiffec]); moreover, it is enough to outperform state-of-the-art algorithms, as shown in fig. [fig:metrics].

in order to evaluate scalability with respect to the number of processors, we test our implementation on a machine equipped with an intel(r) xeon(r) cpu e5-2620 v3 @ 2.40ghz and 8 gb ram. the source code is not optimized for any specific architecture; only two openmp directives are added, for the updates that can be computed independently (see section [sec:para]). as listed in table [tab:para], for a given image, using multiple cores gives better performance.

table [tab:para]: run-time for different image resolutions and numbers of cores.
resolution | 1 core | 2 cores | 4 cores | 6 cores
240 | 393.646 | 303.821 | 227.078 | 200.708
320 | 776.586 | 589.785 | 400.073 | 321.548
480 | 1569.74 | 1011.62 | 743.629 | 624.561
640 | 3186.71 | 2244.12 | 1353.72 | 1069.79
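the parallelisation mentioned above needs nothing more elaborate than a directive over independent iterations. the runnable toy below only illustrates the pattern (per-superpixel statistics computed independently); the array contents are synthetic and do not come from the actual implementation.

```cpp
#include <cstdio>
#include <omp.h>
#include <vector>

int main() {
    const int K = 1024, N = 500;                       // superpixels, pixels per window
    std::vector<std::vector<double>> window(K, std::vector<double>(N));
    for (int k = 0; k < K; ++k)
        for (int n = 0; n < N; ++n) window[k][n] = 0.001 * (k + n);

    std::vector<double> mean(K, 0.0);
    #pragma omp parallel for schedule(static)
    for (int k = 0; k < K; ++k) {                      // each k touches only mean[k]
        double s = 0.0;
        for (int n = 0; n < N; ++n) s += window[k][n];
        mean[k] = s / N;
    }
    std::printf("mean[0] = %f, threads available = %d\n",
                mean[0], omp_get_max_threads());
    return 0;
}
```

compiled with -fopenmp, the loop distributes the superpixel index over the available cores, which is essentially the structure that produces the speedups reported in table [tab:para].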
we compare the proposed algorithm with eight state-of-the-art superpixel segmentation algorithms: lsc, slic, seeds, ers, turbopixels, lrw, vcells, and waterpixels. the results of the eight algorithms are all generated from the implementations provided by the authors on their respective websites, with their default parameters except for the desired number of superpixels, which is generally decided by users. as shown in fig. [fig:metrics], our method outperforms the selected state-of-the-art algorithms, especially for ue and asa. it is not easy to distinguish between our result and lsc in fig. [fig:metrics]; however, with a different parameter setting our result clearly outperforms lsc, as displayed in fig. [fig:lscbr]. to compare the run-time of the selected algorithms, we test them on a desktop machine equipped with an intel(r) core(tm) i5-4590 cpu @ 3.30ghz and 8 gb ram. the results are plotted in fig. [fig:time]. according to fig. [fig:time], as the size of the input image increases, the run-time of our algorithm grows linearly, which confirms experimentally that our algorithm is of linear complexity. a visual comparison is displayed in fig. [fig:visual]. according to the zoomed regions, only our algorithm correctly reveals the segmentations; our superpixel boundaries adhere to object boundaries very well. lsc gives a really competitive result, however parts of the objects are still under-segmented. the superpixels extracted by seeds and ers are very irregular and their sizes vary tremendously. the remaining five algorithms generate regular superpixels, but they adhere to object boundaries poorly.

this paper presents an efficient superpixel segmentation algorithm built on a novel gaussian mixture model. with each superpixel associated with a gaussian density, each pixel is assumed to be independently distributed according to a mixture of the gaussian densities. aiming to extract superpixels of similar size, the gaussian densities are assumed to occur with equal probability. we formulate a log-likelihood function to describe the probability of an image. based on jensen's inequality and the expectation-maximization algorithm, an iterative solution is constructed that approaches a maximum of the log-likelihood by improving its lower bound. the label of each pixel is determined as the one with maximum posterior probability. with a carefully designed algorithm, we also gain the ability to control the shape of the superpixels. according to our experiments, different initializations of our method produce results whose differences are small enough to be ignored. the proposed algorithm is of linear complexity, which has been demonstrated by both theoretical analysis and experimental results. moreover, it can be implemented using parallel techniques, and its run-time scales with the number of processors. the comparison with state-of-the-art algorithms shows that our algorithm outperforms the selected methods in accuracy and presents competitive performance in computational efficiency. as a contribution to the open-source community, we will make our test code publicly available at https://github.com/ahban .
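as a compact illustration of the labelling rule restated above: with equal mixing weights, the maximum-posterior label reduces to the gaussian with the largest density. the sketch uses diagonal covariances and a 5-dimensional (x, y, l, a, b) feature purely for brevity; these simplifications are assumptions of the example, not of the paper.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// log of a gaussian density with diagonal covariance
double logGaussianDiag(const std::array<double, 5>& x, const std::array<double, 5>& mean,
                       const std::array<double, 5>& var) {
    double lp = 0.0;
    for (int d = 0; d < 5; ++d) {
        double diff = x[d] - mean[d];
        lp -= 0.5 * (diff * diff / var[d] + std::log(2.0 * PI * var[d]));
    }
    return lp;
}

// equal priors: the argmax of the posterior equals the argmax of the density
int assignLabel(const std::array<double, 5>& x,
                const std::vector<std::array<double, 5>>& means,
                const std::vector<std::array<double, 5>>& vars) {
    int best = 0;
    double bestLp = -1e300;
    for (size_t k = 0; k < means.size(); ++k) {
        double lp = logGaussianDiag(x, means[k], vars[k]);
        if (lp > bestLp) { bestLp = lp; best = static_cast<int>(k); }
    }
    return best;
}

int main() {
    std::vector<std::array<double, 5>> means = {{10, 10, 50, 0, 0}, {30, 10, 80, 5, 5}};
    std::vector<std::array<double, 5>> vars(2, {25, 25, 100, 25, 25});
    std::array<double, 5> pixel = {12, 11, 55, 1, 0};
    std::printf("pixel assigned to superpixel %d\n", assignLabel(pixel, means, vars));
    return 0;
}
```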
superpixel segmentation is used to partition an image into perceptually coherent atomic regions. as a preprocessing step of computer vision applications, it can enormously reduce the number of primitives that subsequent algorithms have to process. with each superpixel associated with a gaussian distribution, we assume that a pixel is generated by first randomly choosing one of the superpixels and then drawing the pixel from the corresponding gaussian density. unlike most applications of the gaussian mixture model in clustering, the data points in our model are assumed to be non-identically distributed. given an image, a log-likelihood function is constructed and maximized. based on a solution derived from the expectation-maximization method, a carefully designed algorithm is proposed. our method is of linear complexity with respect to the number of pixels, and it can be implemented using parallel techniques. to the best of our knowledge, our algorithm outperforms the state of the art in accuracy and presents competitive performance in computational efficiency. keywords: expectation-maximization, gaussian mixture model, parallel algorithm, superpixel segmentation
from the point of view of an insurance company issuing or selling a _ variable annuity with a guaranteed lifetime withdrawal benefit _ ( from here : glwb ) , natural questions are the pricing , hedging , and risk management of this product . from the point of view of the purchaser , a different set of questions are important optimal product allocation to glwb s , optimal asset allocation within glwb s , optimal management of multiple glwb accounts ( eg . where and when should new deposits be made ) , and optimal initiation of withdrawals .these are all mostly open and ongoing research issues , and the latter is the question considered here .a glwb purchaser decides when to initiate withdrawals .typically , the vendor rewards the purchaser for delaying initiation by providing a bonus ( a.k.a .roll - up ) to the guarantee base of the product .the guarantee base may also rise because of resets , which are more likely if initiation is delayed .once initiated , payments are made at a rate which is a percentage of the guarantee base .this percentage may vary depending on the age at initiation .insurance fees for the glwb are charged on the guarantee base , regardless of whether withdrawals have been initiated . at the time of initiationthe bonus ( roll - up ) ceases but the guaranteed withdrawals do not trigger surrender charges . we do not model withdrawals that exceed the guaranteed amount , since in that case we would anticipate lapsation of the entire glwb .that would be a reasonable alternative if we were studying whether or not the glwb is itself a suitable investment .instead we have in mind a situation in which a firm decision has been made to retain the glwb ( perhaps due to adverse tax implications ) and the question of interest is purely one of initiating versus delaying withdrawals .we anticipate that this is a scenario faced by many individuls who purchased variable annuities with glwbs in the last few years ; currently ` under - water ' ( ie .the account value is under the guaranteed withdrawal base . ) in the industry s language , should the investor stop the accumulation process and begin withdrawals ? or , should they continue accumulating ?we note that the typical buyer of a glwb ( in the u.s . )is in his or her mid 50s , or early 60s , and that there are no additional tax penalties imposed on withdrawals after the age of 60 .one way of posing the question of the optimal initiation time is to ask what scenario the issuer must plan for in order to be fully hedged .in other words , what is the most costly scenario from the point of view of the issuer ? naturally this is also the optimal initiation time , from the point of view of a hedge fund that has purchased multiple va contracts from the original clients ( enough to be diversified from the point of view of mortality risk ) .we call this the _ risk - neutral initiation problem_. a nice feature of this formulation is that there is a unique and easily understood answer to the question , based purely on a complete - market economic analysis .auxiliary assets ( eg . 
holdings in a non - guaranteed account ) are irrelevant .we present this analysis in this paper .a different point of view is what we might call the _ utility maximizing initiation problem _ , namely that of a purchaser who genuinely wishes to hold the longevity protection offered by a va .there are a variety of possible motivations for purchasing the va , and optimal behavior may in principal differ depending on the actual goal of the purchase .put another way , purchasers may have a diversity of utility functions , that typically blend some combination of lifetime consumption and preservation of capital or bequest .this approach might lead to results that are quite different from a no arbitrage analaysis .nevertheless , preliminary results ( not presented here ) are entirely consistent with the conclusion from the risk - neutral version of the problem : given current product features , typically it is optimal to initiate immediately , exceptions being for particularly young individuals , individuals within a short time of a rise in withdrawal rates , or individuals holding products with extreme return characteristics ( eg very high volatility or bonus rates ) .the remainder of this paper is organized as follows . in section # 2we briefly describe the existing literature on variable annuity ( va ) guarantees to help position our contribution within the literature .then , in section # 3 we describe the risk - neutral model in which we operate .section # 4 provides the numerical examples and illustrations and section # 5 concludes the paper .all figures and tables that are referenced appear after the bibliography .scholarly research into the area of variable annuities ( vas ) , and specifically guaranteed minimum benefits ( gmxbs ) options has experienced much growth in the last decade or so , and a sizeable portion of this work has been published in the ime .one of earliest papers analyzing options inside vas and their canadian counter - part called segregated mutual funds , was windcliff , forsyth and vetzal ( 2001 ) .they analyzed the ` shout option ' while milevsky and posner ( 2001 ) examined the ` titanic option . ' to our knowledge , the first formal analysis of the guaranteed minimum withdrawal benefit ( gmwb ) was milevsky and salisbury ( 2006 ) , which focused on the cost of hedging a promise to provide a fixed - term annuity .the main conclusion of milevsky and salisbury ( 2006 ) was that glwbs appeared to be underpriced , relative to what was being charged for them in the ( us ) market .this result was echoed by coleman , li and patron ( 2006 ) as well as chen , vetzal and forsyth ( 2008 ) , who also obtained option values .although they employed a different pde - based methodology to derive hedging costs under a variety of parameter values , they arrived at similarly high costs .the work of dai , kwok and zong ( 2008 ) further re - enforced that fact that u.s .insurance companies were not charging enough for guaranteed living benefit riders on variable annuities .in fact , some might argue that this result was foreshadowed by boyle and hardy ( 2003 ) , who examined the valuation of ( uk - based ) guaranteed annuity options ( goa ) , which are somewhat different from glwbs or gmwbs , but arrived at similar conclusions .the insurance industry was under - pricing , under - reserving and mis - hedging these complex options . during and after the global financial crises of 2007 and 2008 , most insurance companies in the u.s . 
and canada finally realized this and pulled - back on their ( aggressive ) offerings .it is very difficult to find vas with benefits and features such as the ones described in the above - cited research papers .but , those investors fortunate enough to have purchased these product prior to their withdrawal from the market must now decide how to optimize their own withdrawals from the product .as far as more recent post crises research is concerned , the earlier pricing and valuation results have been extended to include more complex models for asset returns , mortality risk as well as policyholder behavior .of course , none of these papers have invalidated the result that these options are quite complicated and rather valuable to the consumer . if anything , they prove that the options are even more complicated than previously thought .for example , ng and li ( 2011 ) use a multivariate valuation framework and so - called regime switching esscher transforms to value va guarantees .feng and volkmer ( 2012 ) propose alternative analytical methods to calculate risk measures , which obviously reduce computational time and is prized by practitioners . in a sweeping paper , bacinello , millossovich olivieri and pitacco ( 2012 ) offer a unifying valuation framework for all guaranteed living and death benefits within variable annuities .the above papers and other extensions by the same authors in different venues all focus on the financial market uncertainty vis a vis the optionality , more so than mortality and longevity uncertainty .the paper by ngai and sherris ( 2011 ) , for example , focuses on longevity risk and the effectiveness of hedging using longevity bonds , but does nt discuss the optionality and timing problem within variable annuities . in conclusion , most of the above articles are concerned primarily with risk management from the issuers point of view , and are nt concerned with the normative or prescriptive implications for individuals who seek to maximize the embedded option value .an exception to this is the recent paper by gao and ulm ( 2012 ) who offer suggestions on the optimal asset allocation within a va with a guaranteed death benefit .their conclusion is that the ` merton ratio ' asset allocation percentage does nt necessarily hold within a va , because of the guaranteed death benefit .our paper is in the same individual - focused vein , which we believe is a very important , but often a neglected aspect of the research on risk management and valuation . in this paperwe focus exclusively on the problem from the perspective of the individual ( retiree ) who seeks guidance on when to initiate or begin withdrawals from the guaranteed living withdrawal benefit ( glwb ) .in particular we are interested in the optimal timing of annuitization , similar in spirit to the work by milevsky and young ( 2008 ) or stabile ( 2006 ) , which solves an optimal timing problem by formulating and solving the relevant hjb equation .this is nt quite annuitization in the irreversible sense , given the liquidity of the account .but the idea is the same .as discussed in the introduction , by adopting a no arbitrage perspective we are assuming the individual is trying to maximize the cost of the guarantee to the insurance company offering the glwb .the optimal policy is the one that is the most costly to the issuer .preliminary results ( not reported ) support the conclusion that this is optimal even under a utility - based analysis .something which is properly left for further research . 
in either case ,the practical consequences ( for the individual ) are that given the age ( above late 50s ) of typical investors holding substantial glwb s , current ( low 3% ) interest rates , existing ( low 5% ) bonus rates , forced ( low 15% ) volatility allocation , low guaranteed payout rates ( 4% to 5% ) , it is optimal to initiate the income immediately .the next section describes the set - up .we start by giving the optimal initiation time , in the sense that it is the most costly for the insurer , or the most rewarding for a ( diversified ) hedge - fund investor who has bought the va contracts .we will ignore lapsation , will assume that all contracts purchased have identical returns and initial deposits of , that all are sold to clients of the same age and health , that they initiate at the same time , and will take the number of contracts so large that mortality is completely diversified . and ,while stochastic mortality ( as well as stochastic interest rates and stochastic volatility ) are all fertile areas of research , we start with a simple model to gain clear intuition about the parameters and factors that drive the initiation decision . our models are developed in a continuous - time ( stochastic calculus ) framework in which step - ups , roll - ups , interest rates and investment returns accrue continuously .this is obviously done for analytic convenience ( and academic tradition ) . however , in a later section we conduct and report on monte carlo ( mc ) simulations that assume annual accruals and compare results to the pde - based approach , in table # 1 .we find that numerical results and specifically the initiation regions , as well as the required bonus rates to justify the delay of initiation , are quite similar .we define the following quantities : we use risk - neutral gbm dynamics , with continuous stepups , bonuses , and fees . so priorto ruin we have which implies that prior to ruin ( ) , this is a skorokhod - type equation , in which the term keeps , ie .ruin is unlikely prior to initiation , but is theoretically possible because of the fee structure .once the account ruins , clients are obliged to initiate withdrawals ( and there certainly is no incentive to delay any further . ) now , because mortality is diversified , we ignore non - hedging fees , so fees collected stay in the hedge , and cash flows derive from withdrawals and refunding account balances ( albeit with no gmdb ) at death .other than those cash - flows , the hedging portfolio is self - financing , therefore where is a martingale .our goal is to choose to maximize the required to ensure that for all . by scaling , and the fact that the dynamics do not depend on , we may write where denotes the hedge value required per dollar of remaining guarantee . there are , of course , several versions of this , depending on whether ruin or initiation has occurred . in the following ,we make our variables correspond to and , if initiation has already occurred , also . at ruin time clients initiate whether they have done so before or not .therefore we are dealing with an annuity , though the payout rate depends on the age of the client at the time of initiation .in other words , where is the price of an annuity paying for life , starting at time . 
will appear as a boundary condition below .alternatively , we can state this in differential form , to make clear the relation with and .since no longer changes , we may apply ito s lemma to and match terms in , or simply differentiate with respect to .this leaves us with the ode with boundary condition because we assume the hazard rate increases without bound .the martingale term vanishes .apply ito s lemma to and match terms in , we get \\ = cm_tn_t\big[rv^1\,dt - g_\tau\,dt-\lambda_ty_t\,dt\big]+d\text{mar}_t.\end{gathered}\ ] ] for fixed , this yields the following pde in the two variables and : ^1+\frac12 \sigma^2 y^2v_{yy}^1-(r+\lambda_t ) v^1 = -g_s-\lambda_ty.\ ] ] the boundary conditions are ( from the terms ) , that , and ( since the remaining cohort dies immediately ) . in the continuation region , the same analysis as above gives that \\ = cm_tn_t\big[rv^0\,dt-\lambda_ty_t\,dt\big]+d\text{mar}_t.\end{gathered}\ ] ] this yields the following pde in the two variables and : ^0+\frac12 \sigma^2 y^2v_{yy}^0-(r+\lambda_t -\beta)v^0 = -\lambda_ty.\ ] ] the boundary conditions are ( from the terms ) , ( since the remaining cohort dies immediately ) , and on the free boundary we have and ( smooth pasting ) .because initiation is forced upon ruin , this in principle could include the condition .it would naturally be of interest to evaluate the actual initial hedging cost , and to determine if it is or . in case ,the issuer is not able to fully hedge .we do not focus on this , however , since the question of interest to us is the initiation time ( ie the shape of the initiation boundary ) , not the hedgeability of the product itself .in other words , we do not opine on whether is large enough to cover the hedging cost .we solve numerically , by discretizing , , and . in full mathematical generality , could vary continuously with , in which case we would proceed as follows .fix successive grid points .assume that we have calculated and for all grid points and . to obtain the new values we solve the entire pde in for , and .we then calculate a delay ( continuation ) value from one step of the pde , and set . clearlythe time - consuming portion of this calculation is the need to re - solve the pde at every step . in practice , this issue does not arise , as real payout schedules do not step up continuously .typically they step up at a small number ( 3 - 4 ) of specified age bands .this greatly simplifies the computation , as we may carry out a version of the above procedure with just these values for , and then varying only and on a finer grid .note that numerical computation of the pde solution is not the only possible approach to this problem . 
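before turning to that alternative, a brief aside: the annuity factor that enters the boundary conditions above can be computed by direct quadrature of the discounted gompertz survival curve. the parameter values, step size and horizon in this sketch are illustrative assumptions, not the paper's calibration.

```cpp
#include <cmath>
#include <cstdio>

// gompertz survival probability from age x to age x + s (modal value m, dispersion b)
double gompertzSurvival(double x, double s, double m, double b) {
    return std::exp(std::exp((x - m) / b) * (1.0 - std::exp(s / b)));
}

// value of $1 per year paid continuously for life, discounted at rate r
double annuityFactor(double age, double r, double m, double b,
                     double maxYears = 60.0, double ds = 1.0 / 252.0) {
    double a = 0.0;
    for (double s = 0.5 * ds; s < maxYears; s += ds)        // midpoint rule
        a += std::exp(-r * s) * gompertzSurvival(age, s, m, b) * ds;
    return a;
}

int main() {
    double a65 = annuityFactor(65.0, 0.03, 88.0, 10.0);
    std::printf("annuity factor at 65: %.4f\n", a65);       // a payout of g per year is then worth g * a65
    return 0;
}
```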
recall that .so by , we may write as the risk neutral expectation of discounted cash flows from the hedge : .\ ] ] as alluded to earlier , monte carlo ( mc ) methods could in principal be used to calculate , except that the optimization over is hard to implement using simulation .however , the approach does give us a simple way of comparing our results with specific initiation strategies , to see how close they are to being optimal .each such strategy naturally provides a lower bound for .we can easily gauge other effects using mc , for example , the impact of using annual stepups , bonuses , or fees , instead of the continuous versions implemented in the pde approach .we will make such comparisons in the following section , where we compare two strategies : initiate now , versus initiate in 5 years .we will compute the `` break - even '' bonus rate , ie the at which one is indifferent between the two strategies .we do so using the mc approach ( stepup / bonus / fee adjustments made once per year ) , and compare the results obtained using the pde approach ( continuous stepups / bonuses / fees ) that gives the corresponding bonus rate based on the optimal strategy with a five - year waiting period .when the guaranteed payout rate ( gpr ) is constant and independent of age , there is typically a single contiguous initiation region , as shown in figure [ gcont ] , for example .there the parameter values are chosen to be , , gompertz mortality parameters , , , , , and .the high upper - age horizon of 120 is chosen as the maximum for life .the red - colour area denotes the region in which the accumulation should be terminated and the glwb should be initiated .the green - coloured region denotes the age and money - ness in which accumulation should continue and withdrawals should not be initiated .recall that it is only during the accumulation period that a bonus ( roll - up ) will be credited to the guaranteed base .* figure # 1 to # 5 here * the effects of varying ( asset allocation restrictions ) and ( bonus and roll - up rates ) are shown in figures [ gcont1]-[gcont4 ] . roughly speaking we find that the delay region expands as or increases .given that these products are intended as retirement savings vehicles , and that one will typically no longer contribute to a glwb once withdrawals are initiated , the initiation question is typically asked by individuals in their late 50s and up . even at a relatively high ( for this type of investment ) volatility of , as in figure [ gcont ], such individuals should delay only if the money - ness of the account approaches 1 . at a more typical ( and lower ) volatility level of , as in figure [ gcont3 ], there is no money - ness level at which delay is optimal for someone age 60 .situations at which delay is optimal for individuals at realistic ages require us to impose quite extreme parameter values , such as ( figure [ gcont4 ] ) or a bonus rate of ( figure [ gcont1 ] ) or ( figure [ gcont2 ] ) .all figures [ gcont1]-[gcont4 ] assume the withdrawal rate is constant , regardless of whether the income is initiated at age 55 or age 85 , for example .regardless of the exact parameter values , there are a number of qualitative insights that are immediately obvious after glancing at these figures .first , at older ages and lower values of money - ness ( ) it optimal to stop accumulation and initiate the glwb . 
this is the red region .the much smaller green region , which indicates that it is optimal to wait ( and not initiate ) only occurs in areas where the money - ness is very high and the age is quite young .* table # 1 here * viewing the problem from a different angle , we can invert the model and compute the values needed for an individual to be indifferent between initiating immediately , versus initiating in five years for example , for several different ages and values . these values are reported in table # 1 .notice how the threshold values are relatively higher than the ( currently offered ) 5% or 4% bonus for waiting .this is yet another indication of the optimality of initiation .* figure # 6 here * with a more realistic step function for which is dependent on age as in figure [ beta4gt ] ( bottom ) and using the same other parameter values as in figure [ gcont ] , we see optimal initiation well below age 50 , when the account is close to ruin .but , the boundary between the initiation and non - initiation regions becomes more complicated .although figure [ beta4gt ] is based on a very specific payout rate function ( g ) , the fragmented and non - contiguous region depicted is typical of most glwbs and gets to the economic essence of the decision to initiate .namely , there is a small sliver of age during which it is worth waiting to reach the next age band , and thus gain the higher lifetime income by immediately turning the glwb on .the region in which the optimal policy changes from accumulate ( green ) to stop and turn - on ( red ) is driven entirely by the ages at which the guaranteed payout rate jumps to the next band .the thickness of the region depends on the money - ness of the account as well as the exogenous parameters such as the bonus ( roll - up ) rate , the interest rate and the volatility of the subaccounts .finally , table # 2 provides a summary of the comparative statics on the optimality of delay .the following parameter changes are all associated with greater propensity to initiate : ( i. ) higher ages and poor health , ( ii ) a bonus ( roll - up ) rate that is smaller , ( iii . )a lifetime payout rate that is lower , ( iv . ) a volatility that is lower , or equivalently , an asset allocation within the variable annuity that contains less equity exposure , ( v. ) a risk - free valuation rate that is higher , ( vi . ) an insurance fee that is higher .finally ( vii . ) the response to money - ness is more complicated . 
while it typically appears that lower values of money - ness are associated with initiation , there are some cases in which this is not the case .one should therefore be careful about concluding that all variable annuities that are ( deeply ) underwater should be initiated .it really depends on the bonus rate that is being offered to wait , as well as the lifetime payout rate itself .* table # 2 here * there are a number of ( subtle ) modeling assumptions that we have made or have assumed in our analysis that are important to emphasize again , before these results are applied in practice .first , we have essentially ignored the guaranteed minimum death benefit ( gmdb ) that is attached to most variable annuities ( va ) .technically we assumed that at death the beneficiary receives the account value , only .historically , most vas offered a range of ( lucrative ) gmdb options in which the maximum account value could be protected at death , possibly with a guaranteed bonus ( or roll - up ) as well .these options ( obviously ) cost more and , more importantly , initiating withdrawals might reduce the gmdb in a disproportionate and undesirable manner .that said , one might question why a rational investor would select ( and pay for ) both the enhanced glwb and the gmdb , since they protect against two opposing risks .either way , our results are immediately applicable to individuals who have only elected the glwb , which is a non - trivial portion of the variable annuity market .second , we have assumed that once withdrawals are initiated there are no further step - ups into higher income bands , in the lifetime payout amounts .for example , if the ( deep in the money ) glwb was initiated at age 60 , at a guaranteed base value of 1 trillion worth of these va policies held by individuals .this tactical question is currently on the mind of many financial advisors with a large number of clients holding these products , in which the aggregate value of the embedded optionality is quite substantial . from a financial economics perspective, this question is complicated by the fact that the optimal policy for a consumer seeking to maximize ( and smooth ) lifetime utility of consumption is not necessarily a policy that induces the maximum liability to the issuer . in other words ,the hedging strategy ( for the manufacturer ) may not be the symmetric opposite of the dynamic utilization strategy ( for the buyer ) .our results also indicate that it rarely makes sense to contribute additional funds to an existing va + glwb policy that is in - the - money . by contributing funds to an under - water policythe policyholder is watering - down the value of the guarantee .quite perversely , consumers are unnecessarily doing the hedging for the insurance company . 
much more importantly and quite relevant , we find that it is optimal to initiate the policy and begin withdrawals as soon as possible , with a few exceptions .when the payout rate is about to increase ( very soon ) , the optimal policy may be to wait until the new payout rate kicks - in , and then initiate immediately .likewise , if the investor is unusually young , and able to benefit from a long lifetime of withdrawals at an elevated rate , it may be optimal to delay in order to accrue bonuses or stepups .investors lucky enough to hold a glwb with unusually generous terms ( a very high bonus rate , or the ability to allocate significant funds to high - volatility assets ) may find it optimal to delay .but for most investors , there is little reason to delay . even if an investor is still employed and/or is not interested in consuming the income from the glwb , they are still better - off withdrawing the funds and re - investing in an equivalent ( lower cost ) mutual fund or tax - sheltered variable annuity .( this can be done via a partial 1035 exchange , in the language of the u.s .tax code . ) and , if this ends - up ruining the account , so be it .they will have traded a glwb for a non - guaranteed account plus a fixed life annuity .here is some intuition .the glwb is worth something , only because of the probability the investment account will be ( i ) depleted by withdrawals at some date , and ( ii . )the individual annuitant will live well beyond that date .the insurance is paid - for by ongoing charges to the account , which come to an abrupt end if - and - when the account hits zero .thus , the sooner that account can be `` ruined '' and these insurance fees can be stopped , the worse it is for the insurance company and the better it is for the annuitant .even the allure of a higher guaranteed base if you wait longer ca nt really offset the power that comes from depleting the account as soon as possible .as soon as the account is ruined , the annuitant and beneficiary are living off the insurance company s dime . and , while it might seem odd that trying to increase the hedging cost to the insurance company is in the best interest of the policyholder , our utility - based analysis indicates a similar phenomenon .we caution that our research is ongoing , and these results are dependent on modeling assumptions in which market volatility and long - term interest rates remain at their current level . if , for example , the vix index jumps to elevated levels for an extended period of time and remains elevated , and/or long - term bond yields return to their historical levels , some of these high - level results may no longer be valid .chen , z. and p.a .forsyth ( 2008 ) , a numerical scheme for the impulse control formulation for pricing variable annuities with a guaranteed minimum withdrawal benefit ( gmwb ) , _ numerische mathematik _ ,535 - 569 kling , a. , f. ruez and j. russ ( 2011 ) , the impact of stochastic volatility on pricing , hedging , and hedge efficiency of withdrawal benefit guarantees in variable annuities , _ astin bulletin _ ,41 ( 2 ) , pg .511 - 545 marshall , c. , m. hardy , d. saunders ( 2012 ) , measuring the effectiveness of static hedging strategies for a guaranteed minimum income benefit , _ north american actuarial journal _16(2 ) , pg .143 - 182 milevsky , m.a . and s.e .posner ( 2001 ) , the titanic option : valuation of guaranteed minimum death benefit in variable annuities and mutual funds , _ journal of risk and insurance _ , vol .29(3 ) , pg . 
299 - 318.
ngai, a. and m. sherris (2011), longevity risk management for life and variable annuities: the effectiveness of static hedging using longevity bonds and derivatives, _insurance: mathematics and economics_, vol. 49(1), pg. 100 - 114.
piscopo, g. and haberman, s. (2011), the valuation of guaranteed lifelong withdrawal benefit options in variable annuity contracts and the impact of mortality risk, _north american actuarial journal_, vol. 15(1), pg. 59 - 76.
stabile, g. (2006), optimal timing of the annuity purchase: combined stochastic control and optimal stopping problem, _international journal of theoretical and applied finance_, vol. 9(2), pg. 151 - 170.

figure 1 caption: the model assumptions list the interest rate, bonus rate, gompertz modal value, gompertz dispersion value, account volatility, glwb insurance fee and guaranteed payout rate of the base case. the red region denotes the combination of money-ness and age in which the accumulation phase should be terminated and the glwb should be initiated. the green region is where the option to wait is valuable.

additional figure captions: the initiation boundary as a function of initiation age, under the model assumptions stated in the text.

table 1: the bonus (roll-up) rate required to make an individual indifferent between initiating immediately and initiating in five years, computed with the pde approach and with monte carlo, for current ages 55, 65 and 75 at three moneyness levels each.

table 2: comparative statics of the initiation decision.
age or gender or health: older = initiate
waiting bonus = roll-up rate: smaller = initiate
lifetime income payout rate: reduced = initiate
stock allocation: restricted = initiate
risk-free treasury rate: increase = initiate
insurance fee as % of base: more expensive = initiate
account value guarantee: underwater* = indeterminate
_optimal initiation of a glwb in a variable annuity: no arbitrage approach_ this paper offers a financial economic perspective on the optimal time (and age) at which the owner of a variable annuity (va) policy with a guaranteed living withdrawal benefit (glwb) rider should initiate guaranteed lifetime income payments. we abstract from utility, bequest and consumption preference issues by treating the va as liquid and tradable. this allows us to use an american option pricing framework to derive a so-called optimal initiation region. our main practical finding is that, given current design parameters in which volatility is capped by restrictions on the asset allocation while guaranteed payout rates (gpr) as well as bonus (roll-up) rates are less than 5%, glwbs that are in-the-money should be turned on by the late 50s and certainly by the early 60s. the exception to the rule is when a non-constant gpr is about to increase (soon) to a higher age band, in which case the optimal policy is to wait until the new gpr is reached and then initiate immediately. also, to offer a different perspective, we invert the model and solve for the bonus (roll-up) rate that is required to justify delaying initiation at any age. we find that the required bonus is quite high and exceeds what is currently promised by existing products. our methodology and results should be of interest to researchers as well as to the individuals who collectively have over usd $1 trillion invested in these products. we conclude by suggesting that much of the non-initiation at older ages is irrational (which obviously benefits the insurance industry).
the gas-kinetic scheme developed by xu is a unified computational fluid dynamics (cfd) solver for both incompressible and compressible flows. based on the bhatnagar-gross-krook (bgk) model, the gas-kinetic scheme describes macroscopic fluid flows through microscopic distribution functions. many published results have demonstrated the accuracy and efficiency of the gas-kinetic scheme in simulations of laminar and turbulent flows. the computation of unsteady flows is becoming more and more important in the field of engineering. an explicit scheme can be seen as the best choice for the simulation of unsteady flows with sufficient accuracy. in some cases, however, the physical time scales may be much larger than the explicit time step determined by the cfl condition, which leads to very expensive computations. combined with the rather costly flux evaluation of the gas-kinetic scheme, it is therefore highly desirable to develop a fast algorithm for the gas-kinetic scheme to simulate unsteady flows. the dual time-stepping strategy is one of the popular and widely used methods, and it has proved to be effective for unsteady flows without impairing the accuracy. to accelerate the computation of unsteady flows, inner iterations are employed within each physical time step to converge to a pseudo steady state. the pseudo steady solutions can be obtained by local time-stepping, multigrid or implicit schemes. in our paper, an implicit gas-kinetic scheme and local time-stepping are used to approach the pseudo steady state. o. chit proposed an implicit gas-kinetic method based on the approximate factorization-alternating direction implicit (af-adi) scheme, and the results agree well with the compared data. k. xu and m. mao developed an implicit scheme based on the euler fluxes and the lu-sgs method, and the method has been applied to simulate hypersonic laminar viscous flows. j. jiang and y. qian compared the implicit gks and the multigrid gks in 3d simulations. w. li proposed an unstructured implicit gks based on the lu-sgs method. in our paper, we adopt the generalized minimal residual (gmres) method in the gas-kinetic scheme to solve the linear systems of the flux jacobian matrix, and the linear system is constructed not only from the euler flux jacobian but also from the viscous flux jacobian, which has not been considered in previous implicit gas-kinetic schemes. in our study, we set up three test cases to validate the dual time-stepping strategy for the gas-kinetic scheme, covering several different free-stream flow conditions. the case of incompressible flow around a circular cylinder focuses on the simulation of unsteady flows at low reynolds numbers, which has also been studied by yuan using a gas-kinetic scheme with an immersed boundary method. the effect of the dual time-stepping method on incompressible turbulent flow is demonstrated in the second test case, where the vortex shedding frequency and surface loads of this flow are obtained. the last case concerns the transonic buffet on the naca0012 airfoil, for which many articles can be consulted, such as mcdevitt and okuno, j. xiong, m. iovnovich, and c.q. gao.
for the approach of turbulent flows , turbulence models coupled with gas - kinetic scheme .all the tests set up in our study obtain a good accordance with the experiments and other numerical methods ( since , there are only very few literatures of gas - kinetic schemes focused on the simulation of unsteady flows , most of the methods are based on the navier - stokes equations ) .the rest of our paper is organized as follows . in the second section , the gas - kinetic scheme , the dual time - stepping strategy , the flux jacobian and the gmres methodare introduced briefly . in the third section ,three numerical test cases ( incompressible laminar flow over the stationary circular cylinder , incompressible turbulent flow around a square cylinder , and the transonic buffet on the naca0012 airfoil surface with high reynolds number ) are conducted for different purposes .finally , a short conclusion is summarized in the final section .in this section , the procedure of gas - kinetic scheme proposed by xu is introduced briefly . similar to other finite volume methods , gas - kinetic scheme in finite volume method can be expressed as where is the index of the finite volumes , means the index of interface belonged to the cell , is the total number of the cell interfaces around the finite volume , denotes the measure of the finite volume , represents the flux across the cell interface , and is the measure of the cell interface .the macroscopic variable appeared in the eq .[ gks - fvm ] reads as and the flux at the cell interface is where is the distribution function and , represents the dimension , denotes the total degree of freedom of internal variables , means the internal variables , is the density , is the macroscopic velocity and is the energy of gas in the finite volume , is the particle velocity , and represents the normal vector pointing outside of the finite volume respectively . for a finite volume method , flux across the cell interface is based on initial reconstruction in which interpolation techniques and limiters are used . for unstructured grids , it is proved that the venkatakrishnan limiter works , which has been used in our paper .the conservative variable in the finite volume can be expressed as where denotes the limiter in the finite volume , is coordinate of the cell center , means the position of a point located in the finite volume , represents the average conservative variable of the finite volume , and denotes the spatial gradient of conservative variable in the finite volume .the venkatakrishnan limiter employed in our study reads as where represents the index of cell interface surrounding the finite volume , denotes the average cell size of the grid in the computational domain , is a constant number , and are the maximum and minimum values of macroscopic conservative variables in the neighbors of the cell respectively .the value of has a great effect on the accuracy and convergence of the numerical algorithm . because of determining a proper value of is a confused and difficult problem in practice , it is hard to get a suitable value for . in our work, we follow the ideas in ref . which modified eq. [ epsilonv ] as since and are the maximum and minimum values of macroscopic conservative variables in the whole computational domain , which do not rely on the local value and provide a threshold value for the smooth region . in our tests ,the value of is given as following the suggestion in ref . .after the reconstruction stage , can be obtained using eq .[ flux ] . 
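for concreteness, the venkatakrishnan limiter used in this reconstruction can be written as a short routine. the sketch below implements the standard form with the threshold built from (k * dh)^3; the paper's modification, which uses global extrema of the conservative variables, would enter through the same threshold term. the data structures and sample values are illustrative only.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// limiter value for one cell: wbar is the cell average, grad its gradient,
// faceOffsets the vectors from the cell centre to the face centres, and
// wmax / wmin the bounding values that define the admissible reconstruction.
double venkatakrishnanLimiter(double wbar, const std::array<double, 2>& grad,
                              const std::vector<std::array<double, 2>>& faceOffsets,
                              double wmax, double wmin, double K, double dh) {
    const double eps2 = std::pow(K * dh, 3.0);
    double phi = 1.0;
    for (const auto& dx : faceOffsets) {
        double d2 = grad[0] * dx[0] + grad[1] * dx[1];     // unlimited face variation
        if (std::abs(d2) < 1e-14) continue;                // locally smooth, no limiting
        double dm = (d2 > 0.0) ? (wmax - wbar) : (wmin - wbar);
        double num = (dm * dm + eps2) * d2 + 2.0 * d2 * d2 * dm;
        double den = dm * dm + 2.0 * d2 * d2 + dm * d2 + eps2;
        phi = std::min(phi, num / (den * d2));
    }
    return phi;                                            // multiplies the gradient in the reconstruction
}

int main() {
    std::vector<std::array<double, 2>> offsets = {{0.5, 0.0}, {-0.5, 0.0}, {0.0, 0.5}, {0.0, -0.5}};
    double phi = venkatakrishnanLimiter(1.0, {0.4, 0.0}, offsets, 1.1, 0.9, 5.0, 1.0);
    std::printf("phi = %.4f\n", phi);
    return 0;
}
```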
up to now, the only issue is to calculate the distribution function at the cell interface . in this paper , we take a special case , in which the interface is normal to x - axis , to demonstrate the computing procedure of the flux at the cell interface . in practice ,the cell interface is rarely normal to the x - axis , especially for grids with triangles and tetrahedrons .so , the transformation of coordinate system must be applied . can be written as for notational convenience we define where is the heaviside function where , and , obtained after the initial reconstruction of macroscopic conservative variables , are the maxwellian distribution function at and both sides of cell interface respectively . , , and are the spatial gradients of , and . , , and are the time derivatives . the kernel of gas - kinetic scheme is to compute the distribution function at the cell interface , and the detailed determinations of , , , , , and can be seen in ref . .the collision time appeared in eq .[ finterface ] is defined as where is the pressure , is the dynamic viscosity coefficient and satisfying sutherland s law where , , , and is the temperature corresponding with . the second term on the right hand side of eq .[ tau ] represents the artificial numerical viscosity . is the explicit time step , which can be calculated by where reads as in eq .[ varlambdac ] represents the number of cell interfaces around the finite volume , denotes the macroscopic ( averaged ) velocity in the finite volume , means the measure of cell interface , and represents the sound speed in the finite volume . for the prediction of turbulent flows ,. [ tau ] can be rewritten as where is the turbulent eddy viscosity , and it comes from the allied turbulence model .there are other techniques , chen et al. and succi et al. , used for modifying the collision time , and we use eq . [ modified - tau ] in our paper for simplicity .the simulation of unsteady flows phenomena is of more and more importance in many disciplines of engineering .explicit scheme is considered as the best choice for the simulation of unsteady flows with great accuracy .but , in some cases , such as the unsteady turbulent flows , the physical time scales might be very large in comparison to the explicit time steps which are determined by cfl numbers .since predicting such flows using explicit scheme spends so long times , computational costs are very expensive .it is necessary to develop less expensive methods without impairing the accuracy of the prediction . in this section , dual time - stepping strategy , which is very popular for unsteady flows ,is introduced .the explicit and implicit schemes can be expressed as one basic non - linear schemes .it reads as where , represents the local time step , and the parameters and appeared in eq .[ basicscheme ] are used to determine the type ( explicit or implicit ) and also the temporal accuracy .dual time - stepping strategy is based on the eq .[ basicscheme ] .we set and .hence , we obtain where denotes the global physical time step and eq .[ dual1 ] is a second order time accurate version of eq .[ basicscheme ] . the left side of eq .[ dual1 ] is a three - point backward - difference approximation of the time derivation .thus , eq . 
[ dual1 ] can be treated as a modified steady state problem to be solved using a pseudo - time step where is the approximation to .the unsteady residual can be expressed as where represents the source term , the steady state solution of eq .[ dual2 ] , which is solved using gmres method in our paper , approximates the macroscopic flow variables at the time step level , i.e. , . to apply an implicit scheme for the steady solution in pseudo time , the first stage is to formulate eq .[ dual2 ] as an nonlinear implicit scheme as follow where is the new time level of pseudo - time .then , the right side of eq .[ dualimplicit1 ] can be linearised as where substituting eq .[ linearunsteadyres ] and eq .[ dualjac ] into eq .[ dualimplicit1 ] , we get the following implicit scheme \delta \bm{w}_i^ * = -(\bm{r}^*)^l.\ ] ] let ,\ ] ] eq . [ dualimplicit2 ] can be rewritten as for solving the linear system of eq .[ axb ] , we employ the gmres method in our paper . in the subsection [ secdualtimestepping ] , a linear system eq .[ axb ] is constructed for the implicit gas - kinetic scheme , and both the implements of implicit gas - kinetic scheme in structured grids and unstructured grids have been developed by other researchers . in this section ,we only focus on the determination of flux jacobian at the cell interface . in order to employ the implicit gas - kinetic scheme ,the time averaged flux is needed . for a gas - kinetic scheme , the time averaged flux function reads as where means the explicit time step determined by eq .[ exdeltat ] . in the right side of eq .[ dual1 ] can be written as and then , thus , where .although the expression of flux jacobian has been given in eq .[ fluxjacobian1 ] , it is still difficult to be computed based on the bgk model . in our study , we construct the flux jacobian based on the euler equations and navier - stokes equations .the partial derivative in the right and left side of eq .[ fluxjacobian1 ] can be decomposed as and respectively . where and are corresponding to the convective part . and are corresponding to the viscous part .for the convective part , we employ the flux jacobian due to roe scheme as follows and fig .[ figfluxjac1 ] plots the finite volumes at both sides of the interface .the viscous part is very important for the simulation of viscous flows , but it is not yet mentioned in the previous implicit gas - kinetic schemes . in our study , it can be written as the details of , , eq .[ fluxjacroe1 ] and eq .[ fluxjacroe2 ] can be seen in the literature , and up to now , the computation of flux jacobian is completed .implicit schemes used to accelerate the convergence behaviors are always resulted in the solving of linear systems like eq .the gmres method , originally suggested by saad and schulz , is one of the popular method used widely .defined as an real matrix and and are two dimensional subspace of .a projection technique onto the subspace and orthogonal to is a process which finds an approximate solution to eq .[ axb ] by imposing the conditions that belong to and that the new residual vector be orthogonal to , in a gmres method , the dimensional subspace is the dimension krylov subspace formed as where and is the initial guess of solution .the subspace is defined as since the obvious basis , . , , , of is not very attractive from a numerical point of view , the kernel of gmres method is to construct a group of orthogonal basis , , , , for the subspace .let then the approximation can be expressed as , where minimizes the function , i.e. 
, where , , and is the hessenberg matrix .thus , the general procedure of the gmres method for solving eq . [ axb ] can be summarized as follows 1 .guessing an initial solution for eq .[ axb ] ; 2 .constructing a group of orthogonal basis,, ... , , for the subspace , and the modified gram - schmidt method is always employed in this stage ; 3 . minimizing the function , and finding ; 4 . obtaining the approximate solution whether can satisfy the eq .[ axb ] , if eq .[ axb ] is satisfied , the solution is obtained ; if not , let , then go to stage 1 .the details of gmres method can be referred to .the gas - kinetic scheme proposed by xu is a unified methods which can be used for both incompressible and compressible flows . in our study, we develop a dual time - stepping strategy for gas - kinetic scheme , which is proved to be successful in the numerical methods based on navier - stokes equations .three test cases are set up in this section , and they are used to demonstrate that the dual time - stepping method is not only useful for both incompressible and compressible flows , but also for laminar and turbulent flows . the source code based on our proposed algorithmis deployed on the stanford university unstructured ( su2 ) open - source platform .we appreciate the development team of su2 for their great work .the laminar flow past a single stationary circular cylinder , which has been studied using many experimental and numerical methods , is a benchmark of unsteady flows . in our paper , the aim of this test case is to validate the time - stepping strategy in the prediction of unsteady incompressible laminar flows . in this case , the free - stream mach number is , the reynolds number are , and the definition of reynolds number is read as where , represents the diameter of the circular cylinder , , , and denote the density , velocity and the laminar viscosity of free stream flow respectively . the computational domain shown in fig .[ figcylindergrid ] is divided into an o - type grid , which has 400 points on the cylinder surface and 200 points on the radial direction .characteristic information ( riemann invariants ) based far - field boundary condition is applied on the outer of computational domain , and the no - slip and adiabatic wall condition is enforced on the surface of cylinder .the nearest distance of mesh points from the wall is , and the y - plus is about .table [ tablecircularcylindercdcl ] shows the comparison of drag coefficients and lift coefficients at different reynolds numbers . denotes the time - averaged total drag coefficient , represents the fluctuations of drag coefficients away from , and is the amplitude of the fluctuations of lift coefficients .the compared data come from other numerical methods and experiments , and the results demonstrate a good agreement with the referenced data . table [ tablecircularcylindercdcl ] shows that both the fluctuations of and are evident , and it is clear that the fluctuations of lift coefficient are much bigger than drag coefficient . 
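as an aside before continuing with the results, the gmres procedure summarised at the beginning of this section can be written compactly. the sketch below uses a dense matrix, modified gram-schmidt arnoldi and givens rotations; it only illustrates the structure of the method and is not the solver, jacobian assembly or matrix-free product actually used in the paper.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec matvec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t j = 0; j < x.size(); ++j) y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec gmres(const Mat& A, const Vec& b, int maxIt = 50, double tol = 1e-10) {
    const size_t n = b.size();
    Vec x(n, 0.0), r = b;                          // x0 = 0, so r0 = b
    double beta = std::sqrt(dot(r, r));
    std::vector<Vec> V(1, Vec(n));
    for (size_t i = 0; i < n; ++i) V[0][i] = r[i] / beta;
    std::vector<Vec> H;                            // H[j] holds hessenberg column j
    Vec cs, sn, g(1, beta);                        // givens rotations, least-squares rhs
    int k = 0;
    for (; k < maxIt; ++k) {
        Vec w = matvec(A, V[k]);
        H.push_back(Vec(k + 2, 0.0));
        for (int i = 0; i <= k; ++i) {             // modified gram-schmidt
            H[k][i] = dot(w, V[i]);
            for (size_t j = 0; j < n; ++j) w[j] -= H[k][i] * V[i][j];
        }
        H[k][k + 1] = std::sqrt(dot(w, w));
        V.push_back(Vec(n, 0.0));
        if (H[k][k + 1] > 1e-14)
            for (size_t j = 0; j < n; ++j) V[k + 1][j] = w[j] / H[k][k + 1];
        for (int i = 0; i < k; ++i) {              // apply previous rotations to column k
            double t = cs[i] * H[k][i] + sn[i] * H[k][i + 1];
            H[k][i + 1] = -sn[i] * H[k][i] + cs[i] * H[k][i + 1];
            H[k][i] = t;
        }
        double d = std::hypot(H[k][k], H[k][k + 1]);
        cs.push_back(H[k][k] / d);
        sn.push_back(H[k][k + 1] / d);
        H[k][k] = d;
        H[k][k + 1] = 0.0;
        g.push_back(-sn[k] * g[k]);                // new residual norm of the least-squares problem
        g[k] = cs[k] * g[k];
        if (std::abs(g[k + 1]) < tol * beta) { ++k; break; }
    }
    Vec y(k, 0.0);                                 // back substitution on the triangular system
    for (int i = k - 1; i >= 0; --i) {
        y[i] = g[i];
        for (int j = i + 1; j < k; ++j) y[i] -= H[j][i] * y[j];
        y[i] /= H[i][i];
    }
    for (int j = 0; j < k; ++j)                    // x = x0 + V_k y
        for (size_t i = 0; i < n; ++i) x[i] += y[j] * V[j][i];
    return x;
}

int main() {
    Mat A = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
    Vec b = {1, 2, 3};
    Vec x = gmres(A, b);
    std::printf("x = %.6f %.6f %.6f\n", x[0], x[1], x[2]);  // expected 0.2222 0.1111 1.4444
    return 0;
}
```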
as increases , the amplitude of fluctuations of total drag coefficient and lift coefficient increase , but the total drag coefficient decreases .< p40pt < p40pt < p60pt < p60pt < p40pt < p40pt < p40pt < & & yuan & tritton & + + & & & & & & & & + & & & & & & & & + & & & & & & & & + & & & & & & & & + in dimensional analysis , the strouhal number is a non - dimensional number which describes the vortex shedding frequency of unsteady flows , and it is defined in our paper as where , denotes the vortex shedding frequency . c. williamson gives an approximative formula for strouhal number versus reynolds number for circular cylinders , which can be expressed as the strouhal numbers investigated in our paper are compared with data from other researchers .table [ tablecircularcylinderst ] gives the details of strouhal number in our study , and the results shows a good accordance with compared data .< p120pt < p80pt < & present & williamson & silva + & & & - + & & & + & & & + the length of the recirculation bubble is defined as the distance between two stagnation points downstream of the cylinder . for an unsteady flow , the determination of defined in fig .[ figcircularcylinderlwh ] is based on the mean flow field in a long time interval . in our study, we use the horizontal velocity on the line to calculate the length of recirculation bubble , and fig .[ figcircularcylinderu ] plots the mean horizontal velocity at different reynolds numbers . fig .[ figcircularcylinderlw ] shows the comparison of with data by other numerical methods and experiments .+ the pressure coefficients of mean flow field at different reynolds numbers on the cylinder surface are shown in fig .[ figcircularcylindercp ] , where and correspond to the stagnation and base points respectively .the plots demonstrate a good accordance with the compared data by park .a qualitative picture of flow streamlines , , laid over a mach number contour plots is presented in fig .[ figcircularcylindert ] .as expected , the periodic vortex shedding can be seen clearly in the wake of circular cylinder .it is obvious that the vortices are shed alternative from each side of the circular cylinder , and then converted down stream in the wake of the cylinder .p90pt p40pt < p80pt < p70pt < scheme & & inner iteration & pseudo steady resudial + explicit & & & + dual time - stepping & & & + table [ tablecircularcylinderworks ] shows the time step of explicit scheme and dual time - stepping strategy respectively ( ) .the time step of explicit scheme is determined by eq .[ exdeltat ] and the time step of dual time - stepping method is the physical time step . to predict the flow field at time , the explicit scheme needs steps , and the dual time - stepping method needs only steps .it is obvious that the dual time - stepping strategy of gas - kinetic scheme can save a lot of computational works in the approach of unsteady incompressible flows , and the residual of pseudo steady solution is sufficient to guarantee the accuracy of the dual time - stepping method for flow simulation of unsteady flows .the incompressible turbulent flow around a square cylinder is investigated in this section .the case is studied by many numerical methods and experiments . in our paper, we explore it using gas - kinetic scheme coupled with menter s shear stress transport ( sst ) turbulence model , and a gas - kinetic scheme coupled with sst turbulence model has been introduced in the ref . 
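to make the outer physical step and inner pseudo-time iteration behind these savings concrete, the following runnable toy applies the same dual time-stepping structure to the scalar model problem dw/dt = -k w. the three-point backward difference plays the role of the physical time discretization, and an explicit pseudo-time update drives the unsteady residual to zero; all numbers are illustrative and no flow-solver detail is represented.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double k = 1.0, dt = 0.1, dtau = 0.05;     // decay rate, physical and pseudo time steps
    double Wn = 1.0, Wnm1 = std::exp(k * dt);        // exact values at t = 0 and t = -dt
    for (int n = 0; n < 20; ++n) {                   // outer loop: physical time steps
        double Wstar = Wn;
        for (int m = 0; m < 200; ++m) {              // inner loop: pseudo-time iterations
            double Rstar = (3.0 * Wstar - 4.0 * Wn + Wnm1) / (2.0 * dt) + k * Wstar;
            Wstar -= dtau * Rstar;                   // drive the unsteady residual to zero
            if (std::abs(Rstar) < 1e-12) break;      // pseudo steady state reached
        }
        Wnm1 = Wn;                                   // shift the time levels
        Wn = Wstar;
        std::printf("t = %.2f  w = %.6f  exact = %.6f\n",
                    (n + 1) * dt, Wn, std::exp(-k * (n + 1) * dt));
    }
    return 0;
}
```

in the actual scheme the explicit pseudo-time update is replaced by the implicit solve of the linear system discussed in the method section, which is what allows physical time steps far above the explicit cfl limit.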
.the aim of this test case is to examine the behavior of dual time - stepping method on the incompressible turbulent flow . at the beginning of the simulation , an incompressible free stream flow with and initiated in the computational domain . with the time evolution ,the unsteady phenomena appear inside the flow field .the reynolds number is defined as and represents the side length of a square cylinder . for a turbulent flow , a small , , turbulence intensity is imposed in the inlet , and the ration of eddy viscosity and laminar viscosity equals in the far field . the computational domain is a rectangle .the square cylinder is located at .the boundary conditions used in the approach are adopted from the study of franke .[ figsqcylinderdomain ] shows the details of computational domain and boundary conditions for the flow simulation .[ figsqcylindergrid ] shows the hybrid grids used for the prediction of incompressible flow around a square cylinder .the grid is made up of rectangles and triangles , and the total number of cells in the domain is . the rectangular part distributed around the cylinderis used to guarantee the simulation accuracy inside viscous boundary layer , and the rectangular part in the wake of cylinder is used to obtain the accurate vortex frequency . the nearest distance from cylinder wallis , and the y - plus is .+ + the incompressible turbulent flow ( ) around a square cylinder which is investigated in our paper presents coherent vortex shedding with a periodically oscillating wake .a summary of data from present simulation , several numerical methods and experiments , are reported in table [ tablesqcylinder ] . denotes the time averaged drag coefficient , and the strouhal number is defined as where is the frequency of vortex shedding . and are the root mean square of drag and lift coefficients respectively . the vortex shedding frequency represented by strouhal number is in a good agreement with experimental and computational results found in the literature .one of important features that has to be analyzed is the length of recirculation region just downstream of a square cylinder .the recirculation region , which is formed due to the separation , is characterized by , and the definition of is shown in fig .[ figsqcylinderxr ] . to determine the value of , meanflow field must be obtained in a long time interval .the value of in our study is in a good accordance with the data from experiments and other numerical methods .the surface loads are also of great importance .it can be seen in table [ tablesqcylinder ] that the time - averaged drag coefficient in our simulation is acceptable compared with the other data . and represent the fluctuations of drag and lift coefficient respectively , and both of them are in good accordance with the compared data .p50pt < p70pt < p20pt< p20pt < p45pt < p45pt < p10pt < p40pt contribution & model & & & & & + present & sst model & & & & & + lyn & experiments & & & & & + lee & experiments & & & & & + vickery & experiments & & & & & + iaccarino & unsteady & & & & & 0.141 + rodi & tl model & & & & & 0.143 + bosch & tl model & & & & & 0.122 + tl model represents the two layer model . the horizontal velocity distributed on the centerline is plotted in fig . 
[ figsqcylinderuc ] , andthe information of time averaged separation region behind the cylinder can be seen in the velocity profiles along the centerline .it shows a fairly well agreement in comparison to experimental and numerical approach data .[ figsqcylinderu4 ] displays the streamwise velocity profiles at four positions behind a square cylinder .very good agreement is obtained between present simulations and data extracted from literatures .a qualitative picture of the vortex shedding behind the square cylinder is presented in fig .[ figsqcylindestreamline4 ] .the streamlines are laid over on the mach number contour plots .as expected , the alternative vortex shedding from upper and bottom side of the cylinder is shown clearly in the picture , and the vortexes are converted downstream in the wake of the cylinder .< p80pt < p70pt < scheme & & inner iteration & pseudo steady resudial + explicit & & & + dual time - stepping & & & + for the approach of incompressible turbulent flow around a square cylinder , the explicit time step can be obtained by the eq .[ exdeltat ] .the details of explicit time step and the physical time step of dual time - stepping strategy are shown in table [ tablesqcylinderworks ] .it is evident that to predict the flow state at certain time , the dual time - stepping method only costs about one - tenth of the computational works of the explicit scheme .the accuracy of approach is also guaranteed by using inner iterations in a single physical time step . for the transonic flow around an airfoil with certain combined conditions such as mach number , reynolds number , the airfoil profile and the angle of attack , a strong shock wave oscillations , which is termed as buffet , may be aroused and self - sustained even in the absence of any airfoil motion .such a case studied in our paper is a transonic turbulent flow over a naca0012 airfoil .the mach number of the free stream flow is .the reynolds number , which is defined as , equals to , where represents the chord length of naca0012 airfoil .the angle of attack is .the aim of the case in this section is to validate the dual time - stepping method of gas - kinetic scheme in the simulation of unsteady transonic turbulent flow . for turbulent flow simulations , the spalart - allmaras ( sa ) turbulence model is combined with the gas - kinetic scheme in present method .actually the gas - kinetic scheme has been coupled with different types of turbulence models by some researchers .the sa turbulence model is one of the popular models which is very suitable for simulation of separated flow .the details of the coupled methods and the turbulence model are not the focus of attentions in this study , which will not be described in detail here .+ in fig . 
[ fig0012grid ] , the computational domain and the hybrid grids used in this approach are displayed .the total number of cells in the domain is .the rectangular meshes are used to maintain enough accuracy of numerical simulations of the flows within the boundary layer near the airfoil .the nearest distance of mesh points to the wall of airfoil is , and the y - plus is .the rectangle region is extruded layers from the wall of airfoil , and there are points located on the airfoil .the outer domain is about times chord length of the airfoil .p65pt p25pt <p100pt < p100pt < set & & & + & & & + & & & + & & & + & & & + the reynolds number of free stream flow in the experiments is about .the experiment of naca0012 transonic buffet was carried out by mcdevitt and okuno at the nasa ames research center s high - reynolds number facility .four conditions sets which mcdevitt and okuno chose to obtain the stable self - sustained transonic buffet in the experiments are listed in table [ tablenaca0012expset ] . denotes the reduced frequency , which is defined as in our study , the conditions of set 6 is chose for the test of transonic buffet on naca0012 airfoil . to validate the current computational setup ,the results of computed transonic buffet in our paper are compared with experiments and other numerical methods .table [ tablenaca0012clf ] lists the details of comparisons using the conditions of set 6 in the reference . represents the amplitude of the lift coefficient , and denotes the distance of shock - buffet traveling on the airfoil surface .the results demonstrate a very good accordance with the references .p65pt p25pt < p100pt < p100pt < p100pt & present & mcdevitt & iovnovich + & & & + & & & + && & + the evolution of the pressure coefficient in a period is plotted in fig . [ fignaca0012cp ] . as expected , the shuttle of shock - buffet is shown in this figure .[ fignaca0012sbli ] displays the captured shock - wave boundary layer interaction .the lambda - shock structure can be seen in the plot .since the resolution of mesh is insufficient for the flow at high reynolds number , the lambda region is not well resolved and the lambda structure is not very clear . to study the effect of physical time step in the simulation on transonic buffet responses , simulations were performed using the physical time step ranging from to .[ fignaca0012cl ] shows the time histories of lift coefficient at different time steps .the convergence is evident with decreasing physical time step , and the time step is chose in our tests .p90pt p40pt < p80pt< p70pt < scheme & & inner iteration & pseudo steady resudial + explicit & & & + dual time - stepping & & & + the computational works of explicit scheme and dual time - stepping method are also compared in table [ tablenaca0012works ] . from the table[ tablenaca0012works ] , we can easily conclude that the dual time - stepping method can not only reduce the computational costs greatly , but also predict the transonic buffet with sufficient accuracy .in present work , the dual time - stepping strategy of gas - kinetic scheme is proposed for the prediction of unsteady flows .the test cases not only cover viscous flows throughout the mach number range from incompressible through transonic flows , but also cover the flows throughout the reynolds number range from laminar to turbulent flows .all the three tests obtain a good agreement with the referred data and meet the goals which are designed for the validation . 
to accelerate the convergence to the pseudo-steady state, an implicit gas-kinetic scheme is employed in the inner iteration. both the inviscid and the viscous flux jacobians are considered in the construction of the linear system, and the gmres method is adopted to solve it. the present study shows clearly that the dual time-stepping method saves a large amount of computational work compared with the explicit scheme. the good results demonstrate that the dual time-stepping strategy for the gas-kinetic scheme can simulate unsteady flows accurately and efficiently, and the present work is of particular use for unsteady flow predictions in engineering. this project was financially supported by the national natural science foundation of china (grant no. 11472219), the natural science basic research plan in shaanxi province of china (program no. 2015jm1002), and the national pre-research foundation of china.
a dual time-stepping strategy for the gas-kinetic scheme is introduced in this work for the simulation of unsteady flows. dual time-stepping is widely used in unsteady flow computations, and its ability to accelerate the computation within an acceptable error tolerance is well established. in our paper, we adopt the dual time-stepping technique, which is popular in numerical methods based on the navier-stokes equations, for an implicit gas-kinetic scheme to simulate unsteady flows. this is carried out by (a) solving the gas-kinetic scheme in a finite volume framework; (b) obtaining the inviscid flux jacobian with the roe scheme; (c) including the viscous flux jacobian, which was not considered in previous implicit gas-kinetic schemes; (d) solving the linear system of the pseudo-steady state with the generalized minimal residual algorithm (gmres). the explicit gas-kinetic scheme has been shown to be an accurate approach for both steady and unsteady flows, and implicit gas-kinetic schemes have also been developed to accelerate convergence to steady state. the dual time-stepping method proposed in our study is of great importance for the computation of unsteady flows. several numerical cases are performed to evaluate the behavior of the dual time-stepping strategy for the gas-kinetic scheme. the incompressible flow around blunt bodies (a stationary circular cylinder and a square cylinder) and the transonic buffet on the naca0012 airfoil are simulated to demonstrate the overall performance of the proposed method, which is applicable to fluid flows from laminar to turbulent and from incompressible to compressible.
after the classical work of hodgkin and huxley , it is widely recognized that the conformational changes in the sodium and potassium channels account for the generation of nerve spike . in this specific case ,the time constants of the corresponding temporal processes are rather small ( on the order of a few milliseconds ) .it is known that in some other cases ( such as the ligand - gated channels ) the time constants associated with conformational changes in protein molecules can have much larger values .the growing body of evidence suggests that such slower conformational changes have direct behavioral implications .that is , the dynamical computations performed by ensembles of protein molecules at the level of individual cells play important role in complex neuro - computing processes .an attempt to formally connect some effects of cellular dynamics with statistical dynamics of conformations of membrane proteins was made in .the present paper discusses a generalization of this formalism .the approach is based on an abstract computational model referred to as _ protein molecule machine ( pmm)_. the name expresses the hypothesis that such microscopic machines are implemented in biological neural networks as protein molecules .a pmm is a continuous - time first - order markov system with real input and output vectors , a finite set of discrete states , and the input - dependent conditional probability densities of state transitions .the output of a pmm is a function of its input and state .the components of input vector , called _ generalized potentials _ , can be interpreted as membrane potential , and concentrations of neurotransmitters .the components of output vector , called _ generalized currents _ , can be viewed as ion currents , and the flows of second messengers .an _ ensemble of pmms ( epmm ) _ is a set of independent identical pmms with the same input vector , and the output vector equal to the sum of output vectors of individual pmms .the paper explains how interacting epmms can work as robust statistical analog computers performing a variety of complex computations at the level of a single cell .the epmm formalism suggests that much more computational resources are available at the level of a single neuron than is postulated in traditional computational theories of neural networks . it was previously shown such cellular computational resources are needed for the implementation of context - sensitive associative memories ( csam ) capable of producing various effects of working memory and temporal context .a computer program employing the discussed formalism was developed .the program , called channels , allows the user to simulate the dynamics of a cell with up to ten different voltage - gated channels , each channel having up to eighteen states .two simulation modes are supported : the monte - carlo mode ( for the number of molecules from 1 to 10000 ) , and the continuous mode ( for the infinite number of molecules ) .new program capable of handling more complex types of pmms is under development .( visit the web site www.brain0.com for more information . )the rest of the paper consists of the following sections : 1 .abstract model of protein molecule machine ( pmm ) 2 .example : voltage - gated ion channel as a pmm 3 . abstract model of ensemble of protein molecule machines ( epmm ) 4 .epmm as a robust analog computer 5 . replacing connections with probabilities 6 .examples of computer simulation 7 .epmm as a distributed state machine ( dsm ) 8. 
why does the brain need statistical molecular computations ? 9 . summarya _ protein molecule machine _ ( pmm ) is an abstract probabilistic computing system , where * * x * and * y * are the sets of real input and output vectors , respectively * * * s** is a finite set of states * is a function describing the input - dependent conditional probability densities of state transitions , where is the conditional probability of transfer from state to state during time interval , where is the value of input , and is the set of non - negative real numbers .the components of are called _generalized potentials_. they can be interpreted as membrane potential , and concentrations of different neurotransmitters .* is a function describing output .the components of are called _ generalized currents_. they can be interpreted as ion currents , and the flows of second messengers .let , , be , respectively , the values of input , output , and state at time t , and let be the probability that .the work of a pmm is described as follows : summing the right and the left parts of ( [ eq1 ] ) over yields so the condition ( [ eq2 ] ) holds for any t. the internal structure of a pmm is shown in figure 1 , where is the probability of transition from state to state during time interval .the gray circle indicates the current state . the output is a function of input and the current state .for the probability of transition from state to state we have it follows from ( [ eq1 ] ) that channels are studied by many different disciplines : biophysics , protein chemistry , molecular genetics , cell biology and others ( see extensive bibliography in ) .this paper is concerned with the information processing ( computational ) possibilities of ion channels .i postulate that , at the information processing level , ion channels ( as well as some other membrane proteins ) can be treated as pmms .that is , at this level , the exact biophysical and biochemical mechanisms are not important .what is important are the properties of ion channels as abstract machines .this situation can be meaningfully compared with the general relationship between statistical physics and thermodynamics .only some properties of molecules of a gas ( e.g. , the number of degrees of freedom ) are important at the level of thermodynamics .similarly , only some properties of protein molecules are important at the level of statistical computations implemented by the ensembles of such molecules .the general structure of a voltage - gated ion channel is shown schematically in figure [ fi2]a .figures [ fi2]b and [ fi2]c show how this channel can be represented as a pmm . in this examplethe pmm has five states , a single input ( the membrane potential ) and a single output ( the ion current ) . using the goldman - hodgkin - katz ( ghk ) current equation we have the following expression for the output function . where * is the ion current in state with input * ] is the faraday constant * is the ratio of membrane potential to the thermodynamic potential , where ] is the gas constant * and $ ] are the cytoplasmic and extracellular concentrations of the ion , respectively one can make different assumptions about the function , describing the conditional probability densities of state transitions .it is convenient to represent this function as a matrix of voltage dependent coefficients . where .note that the diagonal elements of this matrix are not used in equation ( [ eq1 ] ) . 
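a small simulation makes the definition concrete. the sketch below implements a pmm as a continuous-time markov chain integrated with a small time step: at each step the probability of jumping from the current state i to state j is p_ij(x) dt, and the output is a current that depends on the input and the state. the three-state scheme, the sigmoidal voltage dependence of the rates, and the ohmic (rather than ghk) output are illustrative assumptions only, not the five-state channel of figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v, v_half, k):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def rate_matrix(v):
    """voltage-dependent transition rates nu[i, j] (1/ms) for a hypothetical
    3-state pmm: closed (0) -> open (1) -> inactivated (2); numbers are
    illustrative only."""
    nu = np.zeros((3, 3))
    nu[0, 1] = 2.0 * sigmoid(v, -30.0, 6.0)   # closed -> open, voltage gated
    nu[1, 0] = 0.5                            # open -> closed
    nu[1, 2] = 0.3                            # open -> inactivated
    nu[2, 0] = 0.05                           # recovery from inactivation
    return nu

def simulate_pmm(v_of_t, dt=0.01, g_open=1.0, e_rev=60.0):
    """small-time-step monte-carlo simulation of one pmm driven by the
    generalized potential v(t); the output is a generalized current."""
    s = 0                                     # start in the closed state
    current = np.zeros(v_of_t.size)
    for n, v in enumerate(v_of_t):
        p = rate_matrix(v)[s] * dt            # jump probabilities within dt
        jumps = np.nonzero(rng.random() < np.cumsum(p))[0]
        if jumps.size:
            s = jumps[0]
        if s == 1:                            # only the open state conducts
            current[n] = g_open * (v - e_rev) # ohmic stand-in for the ghk flux
    return current

# usage: a depolarizing step of the generalized potential
v = np.concatenate([np.full(2000, -70.0), np.full(6000, -10.0),
                    np.full(2000, -70.0)])
i = simulate_pmm(v)
```

driving the model with a step of depolarizing potential produces random rectangular current pulses of the kind seen in single-channel patch-clamp recordings; this is essentially the monte-carlo mode of the channels program restricted to one molecule.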
in the model of spike generation discussed in both sodium , , and potassium , channels were treated as pmms with five states shown in figure [ fi2 ] .coefficients , , where assumed to be sigmoid functions of membrane potential , and coefficients and - constant . in the case of the sodium channel, was used as a high permeability state , and was used as inactive state . in the case of potassium channel , and assumed to be high permeability states . *note*. as the experiments with the program channels ( mentioned in section [ sec1 ] ) show , in a model with two voltage - gated channels ( and ) , the spike can be generated with many different assumptions about functions and .an _ ensemble of protein molecule machines _ ( epmm ) is a set of identical independent pmms with the same input vector , and the output vector equal to the sum of output vectors of individual pmms .the structure of an epmm is shown in figure [ fi3 ] , where is the total number of pmms , is the output vector of the k - th pmm , and is the output vector of the epmm .we have let denote the number of pmms in state ( the occupation number of state ) . instead of ( [ eq10 ] )we can write are random variables with the binomial probability distributions has the mean and the variance .let us define the relative number of pmms in state ( the relative occupation number of state ) as the behavior of the average is described by the equations similar to ( [ eq1 ] ) and ( [ eq2 ] ) . the average output is equal to the sum of average outputs for all states . the standard deviation for is equal to it is convenient to think of the relative occupation numbers as the states of analog memory of an epmm . in states of such dynamical cellular short - term memory ( stm ) were called _ e - states_. figure [ fi4 ] illustrates the implementation of e - states as relative occupation numbers of the microscopic states of a pmm .the number of independent e - state variables is equal to .the number is reduced by one because of the additional equation ( [ eq15 ] ) .an epmm can serve as a robust analog computer with the input controlled coefficient matrix shown in figure [ fi5 ] .since all the characteristics of the statistical implementation of this computer are determined by the properties of the underlying pmm , this statistical molecular implementation is very robust .the implementation using integrating operational amplifiers shown in figure [ fi5 ] is not very reliable .the integrators based on operational amplifiers with negative capacitive feedback are not precise , so condition ( [ eq15 ] ) will be gradually violated .( a better implementation should use any equations from ( [ eq14 ] ) combined with equation . ) in the case of the discussed statistical implementation condition is guaranteed because the number of pmms , , is constant .the most remarkable property of the statistical implementation of the analog computer shown in figure [ fi5 ] is that the matrix of input - dependent macroscopic connections is implemented as the matrix of input - dependent microscopic probabilities . for a sufficiently large number of states ( say , ) , it would be practically impossible to implement the corresponding analog computers ( with required biological dimensions ) relying on traditional electronic operational amplifiers with negative capacitive feedbacks that would have to be connected via difficult to make matrices of input - dependent coefficients. 
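the two simulation modes of the channels program mentioned in section [sec1] correspond to the two routines sketched below: a monte-carlo step that updates the occupation numbers of an ensemble of n independent pmms, and the mean-field (infinite-n) equation for the relative occupation numbers. the 3-state rate matrix is an illustrative assumption and is held constant here for brevity; in the voltage-gated case it would be re-evaluated from the input at every step.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_field_rhs(rho, nu):
    """d rho / dt for the relative occupation numbers (continuous mode):
    inflow from the other states minus outflow from each state."""
    return rho @ nu - rho * nu.sum(axis=1)

def ensemble_step(counts, nu, dt):
    """one monte-carlo step (discrete mode) for the occupation numbers of an
    ensemble of independent pmms; the total number of molecules is conserved."""
    delta = np.zeros_like(counts)
    for j, n_j in enumerate(counts):
        p = nu[j] * dt                                       # jump probabilities
        moved = rng.multinomial(n_j, np.append(p, max(0.0, 1.0 - p.sum())))
        delta[j] -= n_j - moved[-1]                          # molecules leaving j
        delta += moved[:-1]                                  # arrivals elsewhere
    return counts + delta

# illustrative 3-state rate matrix (1/ms); diagonal entries are never used
nu = np.array([[0.00, 2.00, 0.00],
               [0.50, 0.00, 0.30],
               [0.05, 0.00, 0.00]])

dt, n_steps, n_molecules = 0.01, 2000, 1000
counts = np.array([n_molecules, 0, 0])
rho = np.array([1.0, 0.0, 0.0])
for _ in range(n_steps):
    counts = ensemble_step(counts, nu, dt)
    rho = rho + dt * mean_field_rhs(rho, nu)                 # euler step

print(counts / n_molecules)   # fluctuates around rho with std ~ 1/sqrt(n)
print(rho)
```

the relative occupation numbers of the monte-carlo run track the mean-field solution, and their fluctuations shrink like 1/sqrt(n), which is the content of the expression for the standard deviation given above.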
a single neuron can have many different epmms interacting via electrical messages ( membrane potential ) and chemical messages ( different kinds of neurotransmitters ) . as mentioned in section [ sec3 ] ,the hodgkin - huxley model can be naturally expressed in terms of two epmms ( corresponding to the sodium and potassium channels ) interacting via common membrane potential ( see figure [ fi6]a ) .figure [ fi6]b shows two epmms interacting via a second messenger . in this example , epmm1 is the primary transmitter receptor and epmm2 is the second messenger receptor .figure [ fi7 ] presents examples of computer simulation done by program channels mentioned in section [ sec1 ] .lines 2 - 4 in figure [ fi7]a display random pulses of sodium current produced by 1 , 2 , and 3 pmms , respectively , representing sodium channel , in response to the pulse of membrane potential shown in line 1 .line 4 shows a response of 100 pmms .( a description of the corresponding patch - clamp experiments can be found in ) .figure [ fi7]b depicts the spike of membrane potential produced by two interacting epmms representing ensembles of sodium and potassium channels ( ) . in this simulation , the sodium and potassium channels were represented as five - state pmms mentioned in section [ sec3 ] .the specific values of parameters are not important for the purpose of this illustration .[ fi8 ] let the number of pmms go to infinity ( ) . in this caseepmm is a deterministic system described by the set of differential equations [ eq14 ] and [ eq15 ] . in some cases of highly nonlinear input - dependent coefficients , it is convenient to think about this dynamical system as a _ distributed state machine _ ( dsm ) .such machine simultaneously occupies all its discrete states , with the levels of occupation described by the _ occupation vector _ .we replaced by , since .this interpretation offers a convenient language for representing dynamical processes whose outcome depends on the sequence of input events . in the same way as a traditional state machine is used as a logic sequencer , a dsm can be used as an analog sequencer . the example shown in figure [ fi8 ] illustrates this interesting possibility . if the sequence of input events is the dsm ends up `` almost completely '' in state 2 ( lines 1 - 3 ) .the ba sequence leads to state 4 ( lines 4 - 6 ) .many different implementations of a dsm producing this sequencing effect can be found .here is an example of an epmm implementation : let , , and let be described as follows : if input satisfies condition ( event a ) then for transitions ; if input satisfies condition ( event b ) then for transitions . in all other cases . this example can be interpreted as follows .if input exceeds its threshold level before input exceeds its threshold level , the epmm ends up `` mostly '' in state 2 .if these events occur in the reverse direction , the epmm ends up `` mostly '' in state 4 .starting with the classical work of mcculloch and pitts it is well known that any computable function can be implemented as a network of rather simple artificial neurons . though the original concept of the mccullough - pitts logic neuron is now replaced by a more sophisticated model of a leaky integrate - and - fire ( lif ) neuron ,the latter model is still very simple as compared to the epmm formalism discussed in the present paper ._ why does the brain need statistical molecular computations ? _ _ why is it not sufficient to do collective statistical computations at the level of neural networks ? 
_the answer to this question is straightforward .there is not enough neurons in the brain to implement the required computations such as those associated with different effects of neuromodulation , working memory and temporal context in the networks built from the traditional artificial neurons .( visit _ www.brain0.com _ to find a discussion of this critically important issue . )a class of statistical analog computers built from large numbers of microscopic probabilistic machines is introduced .the class is based on the abstract computational model called _ protein molecule machine ( pmm)_. the discussed statistical computers are represented as _ ensembles of pmms ( epmms)_. ( sections [ sec2 ] and [ sec4 ] . ) 2 .it is postulated that at the level of neural computations some protein molecules ( e.g. , ion channels ) can be treated as pmms .that is , at this level , specific biophysical and biochemical mechanisms are important only as tools for the physical implementation of pmms with required abstract computational properties .( section [ sec3 ] . ) 3 .the macroscopic states of analog memory of the discussed statistical computers are represented by the average relative occupation numbers of the microscopic states of pmms .it was proposed that such states of cellular analog memory are responsible for the psychological phenomena of working memory and temporal context ( mental set ) .( section [ sec4 ] . ) 4 . in some cases , it is useful to think of an epmm as a distributed state machine ( dsm ) that simultaneously occupies all its discrete states with different levels of occupation .this approach offers a convenient language for representing dynamical processes whose outcome depends on the sequence of input events .( section [ sec8 ] . ) 5 .a computer program employing the discussed formalism was developed .the program , called channels , allows the user to simulate the dynamics of a cell with up to ten different voltage - gated channels , each channel having up to eighteen states .two simulation modes are supported : the monte - carlo mode ( for the number of molecules from 1 to 10000 ) , and the continuous mode ( for the infinite number of molecules ) .new software capable of handling more complex types of pmms is under development .( visit the web site www.brain0.com for more information . )i express my gratitude to prof .b. widrow , prof .l. stark , prof .y. eliashberg , prof .m. gromov , dr .i. sobel , and dr .p. rovner for stimulating discussions .i am especially thankful to my wife a. eliashberg for constant support and technical help . 12 changeux , f. ( 1993 ) .chemical signaling in the brain . _ scientific american , november _ , 58 - 62 .eliashberg , v. ( 1989 ) .context - sensitive associative memory : `` residual excitation '' in neural networks as the mechanism of stm and mental set . _ proceedings of ijcnn-89 , june 18 - 22 , 1989 , washington , d.c .i _ , 67 - 75 .eliashberg , v. ( 1989 ) .eliashberg , v. ( 1990 ) .universal learning neurocomputers ._ proceeding of the fourth annual parallel processing symposium .california state university , fullerton .april 4 - 6 , 1990 ._ 181 - 191 .eliashberg , v. ( 1990 ) .molecular dynamics of short - term memory . _ mathematical and computer modeling in science and technology . vol .14 _ , 295 - 299 .hille , b. ( 2001 ) .ion channels of excitable membranes . _ sinauer associates .sunderland , ma _hodgkin , a.l . ,huxley , a.f .1952 . 
a quantitative description of membrane current and its application to conduction and excitation in nerve . _ journal of physiology , 117 _ , 500 - 544 .kandel , e.r . , and spencer , w.a .( 1968 ) . cellular neurophysiological approaches in the study of learning. physiological rev .48 , 65 - 134 .kandel , e. , jessel , t ., schwartz , j. ( 2000 ) . principles of neural science . _ mcgraw - hill_. marder , e. , thirumalai , v. ( 2002 ) .cellular , synaptic and network effects of neuromodulation . _ neural networks 15 , 479 - 493 _ .mcculloch , w. s. and pitts , w. h. ( 1943 ) . a logical calculus of the ideas immanent in nervous activity ._ bulletin of mathematical biophysics , 5:115 - 133_. nichols , j.g . , martin , a.r . ,wallace b.g . , ( 1992 ) from neuron to brain , _ third edition , sinauer associates_. spiking neurons in neuroscience and technology ._ 2001 special issue , neural networks vol .
a class of analog computers built from large numbers of microscopic probabilistic machines is discussed . it is postulated that such computers are implemented in biological systems as ensembles of protein molecules . the formalism is based on an abstract computational model referred to as _ protein molecule machine ( pmm)_. a pmm is a continuous - time first - order markov system with real input and output vectors , a finite set of discrete states , and the input - dependent conditional probability densities of state transitions . the output of a pmm is a function of its input and state . the components of input vector , called _ generalized potentials _ , can be interpreted as membrane potential , and concentrations of neurotransmitters . the components of output vector , called _ generalized currents _ , can represent ion currents , and the flows of second messengers . an _ ensemble of pmms ( epmm ) _ is a set of independent identical pmms with the same input vector , and the output vector equal to the sum of output vectors of individual pmms . the paper suggests that biological neurons have much more sophisticated computational resources than the presently popular models of artificial neurons .
direct electrical coupling through gap - junctions is a common way of communication between neurons , as well as between cells of the heart , pancreas , and other physiological systems .electrical synapses are important for synchronization of the network activity , wave propagation , and pattern formation in neuronal networks . a prominent example of a gap - junctionally coupled network , whose dynamics is thought to be important for cognitive processing , is a group of neurons in the locus coeruleus ( lc ) , a nucleus in the brainstem .electrophysiological studies of the animals performing a visual discrimination test show that the rate and the pattern of activity of the lc network correlate with the cognitive performance .specifically , the periods of the high spontaneous activity correspond to the periods of poor performance , whereas the periods of low synchronized activity coincide with good performance . based on the physiological properties of the lc network , it was proposed that the transitions between the periods of high and low network activity are due to the variations in the strength of coupling between the lc neurons .this hypothesis motivates the following dynamical problem : to study how the dynamics of electrically coupled networks depends on the coupling strength .this question is the focus of the present work .the dynamics of an electrically coupled network depends on the properties of the attractors of the local dynamical systems and the interactions between them .following , we assume that the individual neurons in the lc network are spontaneously active .specifically , we model them with excitable dynamical systems forced by small noise .we show that depending on the strength of electrical coupling , there are three main regimes of the network dynamics : uncorrelated spontaneous firing ( weak coupling ) , formation of clusters and waves ( intermediate coupling ) , and synchrony ( strong coupling ) .the qualitative features of these regimes are independent from the details of the models of the individual neurons and network topology .using the center manifold reduction and the freidlin - wentzell large deviation theory , we derive a variational problem , which provides a useful geometric interpretation for various patterns of spontaneous activity .specifically , we show that the location of the minima of a certain continuous function on the surface of the unit encodes the most likely activity patterns generated by the network . by studying the evolution of the minima of this function under the variation of the control parameter ( coupling strength ) , we identify the principal transformations in the network dynamics . the minimization problem is also used for the quantitative description of the main dynamical regimes and transitions between them . in particular , for the weak and strong coupling regimes , we present asymptotic formulae for the activity rate as a function of the coupling strength and the degree of the network .the variational analysis is complemented by the stability analysis of the synchronous state in the strong coupling regime . 
in analyzing various aspects of the network dynamics, we pay special attention to the role of the structural properties of the network in shaping its dynamics .we show that in weakly coupled networks , only very rough structural properties of the underlying graph matter , whereas in the strong coupling regime , the finer features , such as the algebraic connectivity and the properties of the cycle subspace associated with the graph of the network , become important .therefore , this paper presents a comprehensive analysis of electrically coupled networks of excitable cells in the presence of noise .it complements the existing studies of related deterministic networks of electrically coupled oscillators ( see , e.g. , and references therein ) .the outline of the paper is as follows . in section [ themodel ], we formulate the biophysical model of the lc network .section [ s3 ] presents numerical experiments elucidating the principal features of the network dynamics . in section [ analysis ] , we reformulate the problem in terms of the bifurcation properties of the local dynamical systems and the properties of the linear coupling operator .we then introduce the variational problem , whose analysis explains the main dynamical regimes of the coupled system . in section[ another ] , we analyze the stability of the synchronous dynamics in the strong coupling regime , using fast - slow decomposition .the results of this work are summarized in section [ discuss ] .according to the dynamical mechanism underlying action potential generation , conductance - based models of neurons are divided into type i and type ii classes .the former assumes that the model is near the saddle - node bifurcation , while the latter is based on the andronov - hopf bifurcation .electrophysiological recordings of the lc neurons exhibit features that are consistent with the type i excitability .the existing biophysical models of lc neurons use type i action potential generating mechanism . in accord with these experimental and modeling studies ,we use a generic type i conductance - based model to simulate the dynamics of the individual lc neuron here , dynamical variables and are the membrane potential and the activation of the potassium current , , respectively . stands for the membrane capacitance .the ionic currents are modeled using the hodgkin - huxley formalism ( see appendix for the definitions of the functions and parameter values used in ( [ 1.1 ] ) and ( [ 1.2 ] ) ) .a small gaussian white noise is added to the right hand side of ( [ 1.1 ] ) to simulate random synaptic input and other possible fluctuations affecting system s dynamics . without noise ( ) , the system is in the excitable regime . for , it exhibits spontaneous spiking .the frequency of the spontaneous firing depends on the proximity of the deterministic system to the saddle - node bifurcation and on the noise intensity .a typical trajectory of ( [ 1.1 ] ) and ( [ 1.2 ] ) stays in a small neighborhood of the stable equilibrium most of the time ( fig . [ f.1]a ) .occasionally , it leaves the vicinity of the fixed point to make a large excursion in the phase plane and then returns to the neighborhood of the steady state ( fig .[ f.1]a ) .these dynamics generate a train of random spikes in the voltage time series ( fig .[ f.1]b ) . 
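the appendix with the specific functions and parameter values of (1.1)-(1.2) is not reproduced here, so the sketch below uses the theta neuron, the canonical model of type i (saddle-node on an invariant circle) excitability, as a stand-in, integrated with the euler-maruyama method. the parameter values are illustrative assumptions; beta < 0 places the model in the excitable regime and sigma plays the role of the noise intensity.

```python
import numpy as np

rng = np.random.default_rng(1)

def theta_neuron(beta=-0.05, sigma=0.2, dt=0.01, t_max=2000.0):
    """euler-maruyama simulation of the noisy theta neuron, used here as a
    type i stand-in for the conductance-based model (1.1)-(1.2);
    returns the spike times (phase crossings of pi)."""
    n = int(t_max / dt)
    # stable rest state for beta < 0 (fixed point of the drift)
    theta = -np.arccos((1 + beta) / (1 - beta)) if beta < 0 else 0.0
    spike_times = []
    for k in range(n):
        drift = (1 - np.cos(theta)) + (1 + np.cos(theta)) * beta
        noise = (1 + np.cos(theta)) * sigma * np.sqrt(dt) * rng.standard_normal()
        theta_new = theta + dt * drift + noise
        if theta < np.pi <= theta_new:        # phase crosses pi: one spike
            spike_times.append(k * dt)
        theta = theta_new - 2 * np.pi if theta_new > np.pi else theta_new
    return np.array(spike_times)

isi = np.diff(theta_neuron())                 # interspike intervals
```

for small sigma the spike train is a rare-event process: the interspike intervals are approximately exponentially distributed, as in fig. [f.2], and their mean grows steeply as the noise is reduced.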
* a * ) , ( [ 1.2 ] ) : nullclines plotted for the deterministic model ( ) and a trajectory of the randomly perturbed system ( ) .the trajectory spends most time in a small neighborhood of the stable fixed point .occasionally , it leaves the basin of attraction of the fixed point to generate a spike . *b * ) the voltage timeseries , , corresponding to spontaneous dynamics shown in plot * a*. , title="fig:",width=240,height=192 ] ) , ( [ 1.2 ] ) : nullclines plotted for the deterministic model ( ) and a trajectory of the randomly perturbed system ( ) .the trajectory spends most time in a small neighborhood of the stable fixed point .occasionally , it leaves the basin of attraction of the fixed point to generate a spike . *b * ) the voltage timeseries , , corresponding to spontaneous dynamics shown in plot * a*. , title="fig:",width=288,height=192 ] in neuroscience , the ( average ) firing rate provides a convenient measure of activity of neural cells and neuronal populations .it is important to know how the firing rate depends on the parameters of the model . in this paper, we study the factors determining the rate of firing in electrically coupled network of neurons .however , before setting out to study the network dynamics , it is instructive to discuss the behavior of the single neuron model first . to this end, we use the center - manifold reduction to approximate ( [ 1.1 ] ) and ( [ 1.2 ] ) by a system : z = -u^(z)+ w_t , u(z)=z-13 z^3 + 23 ^ 3/2 , where is the rescaled projection of onto a slow manifold , is the distance to the saddle - node bifurcation , and is the noise intensity after rescaling .we postpone the details of the center - manifold reduction until we analyze a more general network model in [ center ] .the time between two successive spikes in voltage time series corresponds to the first time the trajectory of ( [ 1d ] ) with initial condition overcomes potential barrier .the large deviation estimates ( cf . ) yield the logarithmic asymptotics of the first crossing time _ 0 ^2_z_0= 2u()=4 ^ 3/23 _ z_0\ { 4 ^ 3/23 ^ 2 } , where stands for the expected value with respect to the probability generated by the random process with initial condition . throughout this paper, we use to denote logarithmic asymptotics .it is also known that the first exit time is distributed exponentially as shown in fig .[ f.2 ] ( cf . ) . equation ( [ arhenius ] ) implies that the statistics of spontaneous spiking of a single cell is determined by the distance of the neuronal model ( [ 1.1 ] ) and ( [ 1.2 ] ) to the saddle - node bifurcation and the intensity of noise .below we show that , in addition to these two parameters , the strength and topology of coupling are important factors determining the firing rate of the coupled population . , obtained by integration ( [ 1.1 ] ) and ( [ 1.2 ] ) .the interspike intervals are distributed approximately exponentially ., width=211,height=192 ] the network model includes cells , whose intrinsic dynamics is defined by ( [ 1.1 ] ) and ( [ 1.2 ] ) , coupled by gap - junctions .the gap - junctional current that cell receives from the other cells in the network is given by i_c^(i)=g_i=1^n a_ij ( v^(j)-v^(i ) ) , where is the gap - junction conductance and ^ 2.\ ] ] adjacency matrix defines the network connectivity . 
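as a quick illustration, the coupling current above can be evaluated for all cells at once directly from the adjacency matrix or, equivalently, with the graph laplacian introduced in the next paragraph; the three-cell example and the numbers are hypothetical.

```python
import numpy as np

def coupling_currents(v, A, g):
    """gap-junctional currents i_c^(i) = g * sum_j A[i, j] * (v[j] - v[i])
    for every cell, given the symmetric adjacency matrix A."""
    return g * (A @ v - A.sum(axis=1) * v)

def coupling_currents_laplacian(v, A, g):
    """the same currents written with the graph laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return -g * (L @ v)

# example: three cells in a nearest-neighbor (path) configuration
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
v = np.array([-65.0, -40.0, -65.0])           # the middle cell is depolarized
print(coupling_currents(v, A, g=0.1))         # [ 2.5, -5.0, 2.5 ]
print(coupling_currents_laplacian(v, A, g=0.1))
```

the negative current received by the depolarized middle cell illustrates the shunting effect discussed later in the paper: electrical coupling drains current from a cell developing a depolarizing potential and redistributes it among its neighbours.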
by adding the coupling current to the right hand side of the voltage equation ( [ 1.1 ] ) and combining the equations for all neurons in the network , we arrive at the following model where are independent copies of the standard brownian motion .* a * ) ; ( b ) neighbor ( cf . example [ ex.2 ] ) ; ( c ) all - to - all ( cf . example [ ex.3 ] ) . ,title="fig:",scaledwidth=20.0% ] ) ; ( b ) neighbor ( cf .example [ ex.2 ] ) ; ( c ) all - to - all ( cf .example [ ex.3 ] ) . ,title="fig:",scaledwidth=20.0% ] ) ; ( b ) neighbor ( cf .example [ ex.2 ] ) ; ( c ) all - to - all ( cf .example [ ex.3 ] ) . , title="fig:",scaledwidth=20.0% ]+ the network topology is an important parameter of the model ( [ 1.4 ] ) and ( [ 1.5 ] ) .the following terminology and constructions from the algebraic graph theory will be useful for studying the role of the network structure in shaping its dynamics .let denote the graph of interactions between the cells in the network . here , and denote the sets of vertices ( i.e. , cells ) and edges ( i.e. , the pairs of connected cells ) , respectively . throughout this paper , we assume that is a connected graph . for each edge , we declare one of the vertices to be the positive end ( head ) of , and the other to be the negative end ( tail ) .thus , we assign an orientation to each edge from its tail to its head .the coboundary matrix of is defined as follows ( cf . ) h=(h_ij)^mn , h_ij=\ { cl 1 , & v_j e_i , + -1 , & v_j e_i , + 0 , & . .let be a spanning tree of , i.e. , a connected subgraph of such that , and there are no cycles in . without loss of generality , we assume that e(g)=\{e_1 , e_2 , , e_n-1}. denote the coboundary matrix of by .matrix l = h^h is called a graph laplacian of .the laplacian is independent of the choice of orientation of edges that was used in the definition of .alternatively , the laplacian can be defined as l = d - a , where is the degree map and is the adjacency matrix of .let denote the eigenvalues of arranged in the increasing order counting the multiplicity .the spectrum of the graph laplacian captures many structural properties of the network ( cf . ) . in particular, the first eigenvalue of , , is simple if and only if the graph is connected .the second eigenvalue is called the algebraic connectivity of , because it yields a lower bound for the edge and the vertex connectivity of .the algebraic connectivity is important for a variety of combinatorial , probabilistic , and dynamical aspects of the network analysis .in particular , it is used in the studies of the graph expansion , random walks , and synchronization of dynamical networks .next , we introduce several examples of the network connectivity including nearest neighbor arrays of varying degree and a pair of degree symmetric and random graphs .these examples will be used to illustrate the role of the network topology in pattern formation .* a*. the graph in ( a ) is formed using regular coupling scheme , whereas edges of the graph in ( b ) are generated using a random algorithm ( cf .example [ ex.4 ] ) . , title="fig:",width=192,height=192 ] .the graph in ( a ) is formed using regular coupling scheme , whereas edges of the graph in ( b ) are generated using a random algorithm ( cf .example [ ex.4 ] ) ., title="fig:",width=192,height=192 ] the nearest - neighbor coupling scheme is an example of the local connectivity ( fig .[ f.3]a ) . for simplicity, we consider a array . for higher dimensional lattices ,the nearest neighbor coupling is defined similarly . 
in this configuration , each cell in the interior of the array is coupled to two nearest neighbors .this leads to the following expression for the coupling current : the coupling currents for the cells on the boundary are given by the corresponding graph laplacian is l= ( cccccc 1 & -1 & 0 & & 0&0 + -1 & 2 & -1 & & 0&0 + & & & & & + 0 & 0 & 0 & & -1 & 1 ) .the neighbor coupling scheme is a natural generalization of the previous example .suppose each cell is coupled to of its nearest neighbors from each side whenever they exist or as many as possible otherwise : i_c^(j)=_i=1^\{k , n - j } g(v^(j+i)-v^(j ) ) + _ i=1^\{k , j } g(v^(j - i)-v^(j ) ) , j=2,3, ,n-1 , where we use a customary convention that if .the coupling matrix can be easily derived from ( [ 1.7 ] ) . the all - to - all coupling features global connectivity ( fig .[ f.3]c ) : i_c^(j)=g_i=1^n ( v^(i)-v^(j)),j=1,2,3, ,n .the laplacian in this case has the following form l= ( cccccc n-1 & -1 & -1 & & -1&-1 + -1 & n-1 & -1 & & -1&-1 + & & & & & + -1 & -1 & -1 & & -1 & n-1 ) .the graphs in the previous examples have different degrees : ranging from in example [ ex.1 ] to in example [ ex.3 ] .in addition to the degree of the graph , the pattern of connectivity itself is important for the network dynamics .this motivates our next example .consider a pair of degree graphs shown schematically in fig .the graph in fig .[ f.3a]a has symmetric connections .the edges of the graph in fig .[ f.3a]b were selected randomly .both graphs have the same number of nodes and equal degrees .graphs with random connections like the one in the last example represent expanders , a class of graphs used in many important applications in mathematics , computer science and other branches of science and technology ( cf . ) . in section[ another ] we show that dynamical networks on expanders have very good synchronization properties ( see also ) .let be a family of graphs on vertices , with the following property : _ 2(g_n)>0,n .such graphs are called ( spectral ) expanders .there are known explicit constructions of expanders , including the celebrated ramanujan graphs .in addition , families of random graphs have good expansion properties .in particular , it is known that \ { _ 2(g_n)d-2-}=1-o_n(1 ) > 0 , where stands for the family of random graphs of degree and .* a*nearest neighbor coupling ( solid line ) and all - to - all coupling ( dash - dotted line ) ( see examples [ ex.1]-[ex.3 ] ) . the graphs in ( b )are plotted for the symmetric and random degree graphs in solid and dashed lines respectively ( see example [ ex.4 ] ) .( c ) the firing rate plot for the model , in which the coupling is turned off for values of the membrane potential above the firing threshold .the symmetric ( solid line ) and random ( dashed line ) degree graphs are used for the two plots in ( c ) . , title="fig:",width=192,height=172 ] * b*nearest neighbor coupling ( solid line ) and all - to - all coupling ( dash - dotted line ) ( see examples [ ex.1]-[ex.3 ] ) .the graphs in ( b ) are plotted for the symmetric and random degree graphs in solid and dashed lines respectively ( see example [ ex.4 ] ) .( c ) the firing rate plot for the model , in which the coupling is turned off for values of the membrane potential above the firing threshold .the symmetric ( solid line ) and random ( dashed line ) degree graphs are used for the two plots in ( c ) . 
, title="fig:",width=192,height=172 ] * c*nearest neighbor coupling ( solid line ) and all - to - all coupling ( dash - dotted line ) ( see examples [ ex.1]-[ex.3 ] ) .the graphs in ( b ) are plotted for the symmetric and random degree graphs in solid and dashed lines respectively ( see example [ ex.4 ] ) .( c ) the firing rate plot for the model , in which the coupling is turned off for values of the membrane potential above the firing threshold .the symmetric ( solid line ) and random ( dashed line ) degree graphs are used for the two plots in ( c ) . , title="fig:",width=192,height=172 ]the four parameters controlling the dynamics of the biophysical model ( [ 1.4 ] ) and ( [ 1.5 ] ) are the excitability , the noise intensity , the coupling strength , and the network topology . assuming that the system is at a fixed distance from the bifurcation, we study the dynamics of the coupled system for sufficiently small noise intensity . therefore , the two remaining parameters are the coupling strength and the network topology .we focus on the impact of the coupling strength on the spontaneous dynamics first . at the end of this section, we discuss the role of the network topology .the numerical experiments of this section show that activity patterns generated by the network are effectively controlled by the variations of the coupling strength .* a * * b * * c * * d * to measure the activity of the network for different values of the control parameters , we will use the average firing rate - the number of spikes generated by the network per one neuron and per unit time . fig .[ f.4]a shows that the activity rate varies significantly with the coupling strength .the three intervals of monotonicity of the activity rate plot reflect three main stages in the network dynamics en route to complete synchrony : weakly correlated spontaneous spiking , formation of clusters and wave propagation , and synchronization .we discuss these regimes in more detail below .* a*-[f.6a ] are coupled through the nearest neighbor scheme . , title="fig:",width=240,height=192 ] * b*-[f.6a ] are coupled through the nearest neighbor scheme . ,title="fig:",width=240,height=192 ] * c*-[f.6a ] are coupled through the nearest neighbor scheme . , title="fig:",width=240,height=192 ] * d*-[f.6a ] are coupled through the nearest neighbor scheme . ,title="fig:",width=240,height=192 ] * a * * b * * weakly correlated spontaneous spiking . 
* for sufficiently small , the activity retains the features of spontaneous spiking in the uncoupled population .[ f.5]b shows no significant correlations between the activity of distinct cells in the weakly coupled network .the distributions of the interspike intervals are exponential in both cases ( see fig .[ f.5 ] ( c , d ) ) .there is an important change , however : the rate of firing goes down for increasing values of for small .this is clearly seen from the graphs in fig .the decreasing firing rate for very weak coupling can also be noted from the interspike interval distributions in fig .[ f.5]c , d : the density in fig .[ f.5]d has a heavier tail .thus , weak electrical coupling has a pronounced inhibitory ( shunting ) effect on the network dynamics : it drains the current from a neuron developing a depolarizing potential and redistributes it among the cells connected to it .this effect is stronger for networks with greater number of connections .the three plots shown in fig .[ f.4]a correspond to nearest - neighbor coupling , neighbor coupling , and all - to - all coupling .note that the slope at zero is steeper for networks with greater degree .* coherent structures .* for increasing values of the system develops clusters , short waves , and robust waves ( see fig .[ f.6 ] ) .the appearance of these spatio - temporal patterns starts in the middle of the first decreasing portion of the firing rate plot in fig .[ f.4]a and continues through the next ( increasing ) interval of monotonicity . while patterns in fig .[ f.6 ] feature progressively increasing role of coherence in the system s dynamics , the dynamical mechanisms underlying cluster formation and wave propagation are distinct . factors a and b below identify two dynamical principles underlying pattern formation in this regime . * factor a : * * factor b : * factor a follows from the variational interpretation of the spontaneous dynamics in weakly coupled networks , which we develop in section [ analysis ] .it is responsible for the formation of clusters and short waves , like those shown in fig .[ f.6]a . to show numericallythat that factor a ( vs. factor b ) is responsible for the formation of clusters , we modified the model ( [ 1.4 ] ) and ( [ 1.5 ] ) in the following way .once a neuron in the network has crossed the threshold , we turn off the current that it sends to the other neurons in the network until it gets back close to the the stable fixed point .we will refer to this model as the modified model ( [ 1.4 ] ) and ( [ 1.5 ] ) .numerical results for the modified model in fig .[ f.6a]a , b , show that clusters are formed as the result of the subthreshold dynamics , i.e. , are due to factor a. factor b becomes dominant for stronger coupling .it results in robust waves with constant speed of propagation .the mechanism of the wave propagation is essentially deterministic and is well known from the studies of waves in excitable systems ( cf .however , in the presence of noise , the excitation and termination of waves become random ( see fig .[ f.6](b , c ) ) .* synchrony . * the third interval of monotonicity in the graph of the firing rate vs. the coupling strength is decreasing ( see fig . [it features synchronization , the final dynamical state of the network . 
in this regime , once one cell crosses the firing threshold the entire network fires in unison .the distinctive feature of this regime is a rapid decrease of the firing rate for increasing ( see fig .[ f.4]a ) .the slowdown of firing in the strong coupling regime was studied in ( see also ) . when the coupling is strong the effect of noise on the network dynamics is diminished by the dissipativity of the coupling operator .the reduced effect of noise results in the decrease of the firing rate . in [ topology ] , we present analytical estimates characterizing denoising by electrical coupling for the present model . * * a * * ) for the same value of the coupling strength .the randomly connected network is already synchronized ( b ) , while the regular network is en route to synchrony ( a ) ., title="fig:",width=259,height=211 ] * * b * * ) for the same value of the coupling strength .the randomly connected network is already synchronized ( b ) , while the regular network is en route to synchrony ( a ) ., title="fig:",width=259,height=211 ] all connected networks of excitable elements ( regardless of the connectivity pattern ) undergo the three dynamical regimes , which we identified above for weak , intermediate , and strong coupling .the topology becomes important for quantitative description of the activity patterns .in particular , the topology affects the boundaries between different phases .we first discuss the role of topology for the onset of synchronization .the transition to synchrony corresponds to the beginning of the third phase and can be approximately identified with the location of the point of maximum on the firing rate plot ( see fig .[ f.4]a , b ) .the comparison of the plots for and - neighbor coupling schemes shows that the onset of synchrony takes place at a smaller value of for the latter network .this illustrates a general trend : networks with greater number of connections tend to have better synchronization properties .however , the degree is not the only structural property of the graph that affects synchronization .the connectivity pattern is important as well .[ f.8 ] shows that a randomly connected degree network synchronizes faster than its symmetric counterpart ( cf .example [ ex.4 ] ) . the analysis in [ strong ] shows that the point of transition to synchrony can be estimated using the algebraic connectivity of the graph .specifically , the network is synchronized , if where stands for the coupling strength in the rescaled nondimensional model .the algebraic connectivity is easy to compute numerically . 
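the statement that the algebraic connectivity is easy to compute numerically can be illustrated in a few lines of python (using networkx). the two graphs below, a circulant ring in which every node is joined to its two nearest neighbours on each side and a random 4-regular graph, are stand-ins for the symmetric and random degree-4 networks of example [ex.4], not their exact wirings.

```python
import numpy as np
import networkx as nx

def algebraic_connectivity(L):
    """second-smallest eigenvalue (lambda_2) of a graph laplacian."""
    return np.sort(np.linalg.eigvalsh(L))[1]

n, d = 100, 4

# symmetric degree-4 graph: ring with connections to the 2 nearest neighbours
G_sym = nx.circulant_graph(n, [1, 2])
L_sym = nx.laplacian_matrix(G_sym).toarray().astype(float)

# random 4-regular graph, a typical expander-like wiring
G_rnd = nx.random_regular_graph(d, n, seed=0)
L_rnd = nx.laplacian_matrix(G_rnd).toarray().astype(float)

print(algebraic_connectivity(L_sym))   # shrinks like 1/n^2 as the ring grows
print(algebraic_connectivity(L_rnd))   # stays bounded away from zero
```

the first value shrinks as the ring grows, while the second stays bounded away from zero; by the synchronization condition above, this is why the randomly wired network can be kept synchronized with a finite coupling strength even as n increases.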
for many graphs with symmetries including those in examples [ ex.1]-[ex.3 ] ,the algebraic connectivity is known analytically .on the other hand , there are effective asymptotic estimates of the algebraic connectivity available for certain classes of graphs that are important in applications , such as random graphs and expanders .the algebraic connectivities of the graphs in examples [ ex.1]-[ex.2 ] tend to zero as .therefore , for such networks one needs to increase the strength of coupling significantly to maintain synchrony in networks growing in size .this situation is typical for symmetric or almost symmetric graphs .in contrast , it is known that for the random graph from example [ ex.4 ] the algebraic connectivity is bounded away from zero ( with high probability ) as .therefore , one can guarantee synchronization in dynamical networks on such graphs using finite coupling strength when the size of the network grows without bound .this counter - intuitive property is intrinsic to networks on expanders , sparse well connected graphs . for a more detailed discussion of the role of network topology in synchronization ,we refer the interested reader to section in .the discussion in the previous paragraph suggests that connectivity is important in the strong coupling regime .it is interesting that to a large extent the dynamics in the weak coupling regime remains unaffected by the connectivity .for instance , the firing rate plots for the random and symmetric degree- networks ( example [ f.4 ] ) shown in fig . [ f.4]bcoincide over an interval in near .furthermore , the plots for the same pair of networks based on the modified model ( [ 1.4 ] ) and ( [ 1.5 ] ) are almost identical , regardless the disparate connectivity patterns underlying these networks .the variational analysis in [ weak ] shows that , in the weak coupling regime , to leading order the firing rate of the network depends only on the number of connections between cells .the role of the connectivity in shaping network dynamics increases in the strong coupling regime .in this section , we analyze dynamical regimes of the coupled system ( [ 1.4 ] ) and ( [ 1.5 ] ) under the variation of the coupling strength . in [ center ] , we derive an approximate model using the center manifold reduction . in [ exit - problem ], we relate the activity patterns of the coupled system to the minima of a certain continuous function on the surface of an .the analysis of the minimization problem for weak , strong , and intermediate coupling is used to characterize the dynamics of the coupled system in these regimes . in preparation for the analysis of the coupled system ( [ 1.4 ] ) and ( [ 1.5 ] ) , we approximate it by a simpler system using the center manifold reduction . to this end , we first review the bifurcation structure of the model .denote the equations governing the deterministic dynamics of a single neuron by x=(x , ) , where and is a smooth function and is a small parameter , which controls the distance of ( [ local ] ) from the saddle - node bifurcation .suppose that at , the unperturbed problem ( [ local ] ) has a nonhyperbolic equilibrium at the origin such that has a single zero eigenvalue and the rest of the spectrum lies to the left of the imaginary axis .suppose further that at there is a homoclinic orbit to entering the origin along the center manifold . 
)is near the saddle - node on an invariant circle bifurcation ., width=240,height=192 ] then under appropriate nondegeneracy and transversality conditions on the local saddle - node bifurcation at , for near zero the homoclinic orbit is transformed into either a unique asymptotically stable periodic orbit or to a closed invariant curve having two equilibria : a node and a saddle ( fig . [ excitable ] ) . without loss of generality , we assume that the latter case is realized for small positive , and the periodic orbit exists for negative .let be a sufficiently small fixed number , i.e. , ( [ local ] ) is in the excitable regime ( fig .[ excitable ] ) . for simplicity, we assume that the stable node near the origin is the only attractor of ( [ local ] ) .we are now in a position to formulate our assumptions on the coupled system .consider local systems ( [ local ] ) that are placed at the nodes of the connected graph and coupled electrically : x = ( x,)-g(lj)x + ( i_np)w , where , is an identity matrix , , and is the laplacian of .matrix defines the linear combination of the local variables engaged in coupling . in the the neuronal network model above , .parameters and control the coupling strength and the noise intensity respectively . is a gaussian white noise process in .the local systems are taken to be identical for simplicity .the analysis can be extended to cover nonhomogeneous networks .we next turn to the center manifold reduction of ( [ coup ] ) .consider ( [ coup] ( the zero subscript refers to ) for . by our assumptions on the local system ( [ local ] ) , has a kernel .denote ed(0,0)/\{0 } p(d(0,0))^ p^e=1 . by the center manifold theorem, there is a neighborhood of the origin in the phase space of ( [ coup ] ) , , and such that for and , in , there exists an attracting locally invariant slow manifold .the trajectories that remain in for sufficiently long time can be approximated by those lying in .thus , the dynamics of ( [ coup] can be reduced to , whose dimension is times smaller than that of the phase space of ( [ coup] .the center manifold reduction is standard .its justification relies on the lyapunov - schmidt method and taylor expansions ( cf .formally , the reduced system is obtained by projecting ( [ coup] onto the center subspace of ( [ coup] for ( see ) : y = a_1 y^2 -a_2- a_3 g ly+ o(|y|^3,^2,g^2 ) , where provided that the following nondegeneracy conditions hold conditions ( [ a1 ] ) and ( [ a2 ] ) are the nondegeneracy and transversality conditions of the saddle - node bifurcation in the local system ( [ local ] ) .condition ( [ a3 ] ) guarantees that the projection of the coupling onto the center subspace is not trivial .all conditions are open . without loss of generality , assume that nonzero coefficients are positive .next , we include the random perturbation in the reduced model .note that near the saddle - node bifurcation ( ) , the vector field of ( [ coup] is much stronger in the directions transverse to than in the tangential directions .the results of the geometric theory of randomly perturbed fast - slow systems imply that the trajectories of ( [ coup ] ) with small positive that start close to the node of ( [ coup] remain in a small neighborhood of on finite intervals of time with overwhelming probability ( see for specific estimates ) . 
to obtain the leading order approximation of the stochastic system ( [ coup ] ) near the slow manifold, we project the random perturbation onto the center subspace of ( [ coup] for and add the resultant term to the reduced equation ( [ red ] ) : y = a_1 y^2-a_2- a_3 g ly+bw+ , b = i_n(p^p)^nnd .we replace by identically distributed , where is a white noise process in and . here, stands for the euclidean norm of .after rescaling the resultant equation and ignoring the higher order terms , we arrive at the following reduced model z = z^2- - lz+ w , where stands for a standard brownian motion in and here , with a slight abuse of notation , we continue to use to denote the small parameter in the rescaled system . in the remainder of this paper , we analyze the reduced model ( [ rescale ] ) . in this subsection ,the problem of identifying most likely dynamical patterns generated by ( [ coup ] ) is reduced to a minimization problem for a smooth function on the surface of the unit cube .consider the initial value problem for ( [ rescale ] ) z=(z ) - lz+w , l = h^h , z(0)=z_0d^n , where ( z)=(f(z_1 ) , f(z_2 ) , , f(z_n)),f()=^2 - 1 , and d=\ { z=(z_1 , z_2, ,z_n ) : -2-b <z_i < 1 , i:=\{1,2, ,n } } , where auxiliary parameter will be specified later . let ^+ d = \{ z=(z_1,z_2 , , z_n ) : z|d & ( i z_i=1 ) } , denote a subset of the boundary of d , . if , then at least one of the neurons in the network is at the firing threshold .it will be shown below that the trajectories of ( [ diffusion ] ) exit from through with probability as , provided is sufficiently large .therefore , the statistics of the first exit time = \{t>0 : z(t)d } and the distribution of the location of the points of exit characterize the statistics of the interspike intervals and the most probable firing patterns of ( [ 1.1 ] ) and ( [ 1.2 ] ) , respectively . the freidlin - wentzell theory of large deviations yields the asymptotics of and for small . to apply the large deviation estimates to the problem at hand , we rewrite ( [ diffusion ] ) as a randomly perturbed gradient system z=- z u_(z)+w_t , where where stands for the inner product in .the additive constant in the definition of the potential function is used to normalize the value of the potential at the local minimum .the following theorem summarizes the implications of the large deviation theory for ( [ gradient ] ) .let denote the points of minima of on and .then for any and , where stands for the distance in .the statements a)-c ) can be shown by adopting the proofs of theorems 2.1 , 3.1 , and 4.1 of chapter 4 of to the case of the action functional with multiple minima .theorem [ main ] reduces the exit problem for ( [ diffusion ] ) to the minimization problem u_(z ) , zd . in the remainder of this section , we study ( [ min ] ) for the weak , strong , and intermediate coupling strength . in this subsection, we study the minima of on for small .first , we locate the points of minima of the for ( cf .lemma [ u_0 ] ) . 
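as an aside , the minimization problem ( [ min ] ) can also be explored numerically ; in the sketch below the potential u_gamma(z) = ( gamma/2 ) z^t l z - sum_i ( z_i^3/3 - z_i ) + 2n/3 follows our reading of the rescaled gradient system ( the additive constant normalizes the potential at the rest state ) , and the small path graph , the value of gamma and the box bound are illustrative assumptions .

```python
import numpy as np
from scipy.optimize import minimize

def potential(z, L, gamma):
    """U_gamma(z) = (gamma/2) z^T L z - sum(z^3/3 - z) + 2n/3, normalised so
    that U_gamma(-1, ..., -1) = 0 at the rest state."""
    return 0.5 * gamma * z @ L @ z - np.sum(z**3 / 3 - z) + 2 * len(z) / 3

def face_minimum(L, gamma, i, lo=-2.5):
    """Minimum of U_gamma over the threshold face z_i = 1 of the box
    [lo, 1]^n (the face on which cell i is at the firing threshold)."""
    n = L.shape[0]
    start = np.delete(-np.ones(n), i)
    obj = lambda y: potential(np.insert(y, i, 1.0), L, gamma)
    res = minimize(obj, start, bounds=[(lo, 1.0)] * (n - 1))
    return res.fun

# path graph 0-1-2-3: end nodes have degree 1, interior nodes degree 2
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A
for i in range(4):
    print("face z_%d = 1 : min U = %.4f" % (i, face_minimum(L, 0.05, i)))
# faces with smaller minima indicate the cells that are more likely to fire first
```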
then , using the implicit function theorem , we continue them for small ( cf .theorem [ imf ] ) .let in the definition of ( [ define - d ] ) be fixed .the minimum of on is achieved at points ^i=(^i_1,^i_2, ,_n^i),^i_j=\ { cc 1 , & j = i , + -1 , & ji , .j .the minimal value of on is denote and ] .+ suppose in ( [ define - d ] ) is sufficiently large .there exists such that for on each face ] is a smooth function such that ^i(0)=-,^i()|_=0.= -l^i , and is the column of the graph laplacian after deleting the entry .the equations in ( [ phii ] ) are written using the following local coordinates for moreover , the minimal value of on is given by u^i_:=_z_i d u _= 43 + ( v_i ) + o(^2 ) .consequently , u_:=_zdu_=43+_k ( v_k ) + o(^2 ) .let denote the restriction of on : u_(y)=2hz(y ) , hz(y)+_i=1^n-1f(y_i ) + 43 , where next , we compute the gradient of : u_(y)= 2 yhz(y ) , hz(y ) -(y ) , where .further , where , ] that satisfies ^1(0)=-,^1 ( ) |_=0 .= -^-1 . by taking into account , from ( [ imfthm ] )we have ^1 ( ) |_=0 .. this shows ( [ phii ] ) . to show ( [ valeps ] ) , we use the taylor expansion of : {\gamma=0 , y=-\mathbf{1_{n-1}}}+ o(\gamma^2)\\ & = & \lbl{taylor } { 2\over 3 } + { \gamma\over 2 } \langle h ( 1,-\mathbf{1_{n-1}}),h ( 1,-\mathbf{1_{n-1}})\rangle + o(\gamma^2)= { 4\over 3 } + \gamma~\mathrm{deg}(v_1)+ o(\gamma^2).\end{aligned}\ ] ] by choosing in ( [ define - d ] ) large enough one can ensure that for , ] , can be made arbitrarily large by choosing sufficiently large in ( [ define - d ] ) .+ the second equation in ( [ phii ] ) shows that the minima of the potential function lying on the faces corresponding to connected cells move towards the common boundaries of these faces , under the variation of . for small ,the minima of are located near the minima of the potential function ( cf .( [ func-1 ] ) ) . in this subsection, we show that for larger , the minima of are strongly influenced by the quadratic term , which corresponds to the coupling operator in the differential equation model ( [ diffusion ] ) . to study the minimization problem for , we rewrite as follows : u_(z)=\ { 12hz , hz+1 ( z)}= : u^1(z ) .thus , the problem of minimizing for becomes the minimization problem for u^(z):=hz , hz+(z ) , z^+ d , ||1 . attains the global minimum on at : u^0:=u^0()=0 . is nonnegative , moreover , finally , .+ let , ] , achieves its minimal value on at : u^:=u^()=4n3 , provided in the definition of ( cf .( [ define - d ] ) ) is sufficiently large . by the interlacing eigenvalues theorem ( cf .theorem 4.3.8 , ) , . ] , because is connected ( cf . theorem 6.3 , ) . with these observations ,theorem [ small - lambda ] yields an estimate for the onset of synchrony in terms of the eigenvalues of : 2 ( _ i_1(l^i ) ) ^-12(_2(l))^-1 .note that ( [ onset - syn ] ) yields smaller lower bounds for the onset of synchrony for graphs with larger algebraic connectivity .in particular , for the families of expanders ( cf .example [ ex.5 ] ) , it provides bounds on the coupling strength guaranteeing synchronization that are uniform in . for the proof of theorem [ small - lambda ] , we need the following auxiliary lemma. for there exists ] . without loss of generality ,let . then ,\ ] ] and where the quadratic function yields : therefore , ( theorem [ small - lambda ] ) let .by lemma [ corner ] , for some ] . 
on the other hand , on , be made arbitrarily large for any provided in ( [ define - d ] ) is sufficiently large .+ in this subsection , we develop a geometric interpretation of the spontaneous dynamics of ( [ 1.4 ] ) and ( [ 1.5 ] ) . after introducing certain auxiliary notation, we discuss how the spatial location of the minima of on the surface of the encodes the most likely activity patterns of ( [ 1.4 ] ) and ( [ 1.5 ] ). then we proceed to derive a lower bound on the coupling strength necessary for the development of coherent structures . let ,\ ; 1\le i_1<i_2<\dots < i_k\le n ] = ( _ 1,_2, ,_n-1)-1 = z_i|_k(i , j)|1^nn^(n-1)(n-1)^(n-1)n(,)^n\xi)\xi_1\\ ( 2\eta + \left[\row_2(s)+ \row_3(s)\right]\xi)\xi_2\\ \dots \\( 2\eta + \left[\row_{n-1}(s)+ \row_{n}(s)\right]\xi)\xi_{n-1}. \end{pmatrix } ] l(l)=_2(l)>0=0\\ \lbl{asymptot } & = & \tr\left[\lambda\int_0^s \exp\{-2\hat l\}u\mathrm{d}u \right ] \rightarrow { 1\over 2}\kappa(g,\tl g ) , s\to\infty , \end{eqnarray } where \be\lbl{kappa } \kappa(g,\tl g):=\tr\{\hat{l}^{-1}\lambda\ } \quad\mbox{and}\quad\lambda=\tl h\tl{h}^\t . \eeparameter ] , there corresponds a unique cycle of length , such that it consists of and the edges from .the following lemma , relates the value of to the properties of the cycles . let be a connected graph .a : : if is a tree then ( g , g)=n-1 .b : : otherwise , let be a spanning tree of a and be the corresponding independent cycles .+ b.1 ; ; denote then 1 , b.2 ; ; if then 1-cn-1(1 - 1m)1 , where } \{|o_k| + \sum_{l\neq k } |o_k\cap o_l|\}.\ ] ] b.3 ; ; if ] face of . in particular , the network becomes completely synchronized , when the minimum of reaches .this observation allows one to estimate the onset of synchronization ( cf .theorem [ small - lambda ] ) and cluster formation ( cf .lemma [ cluster ] ) .furthermore , we show that in the strong coupling regime , the network dynamics has two disparate timescales : fast synchronization is followed by an ultra - slow escape from the potential well . the analysis of the slow - fast system yields estimates of stability of the synchronous state in terms of the coupling strength and structural properties of the network . in particular, it shows the contribution of the network topology to the synchronization properties of the network .we end this paper with a few concluding remarks about the implications of this work for the lc network .the analysis of the conductance - based model of the lc network in this paper agrees with the study of the integrate - and - fire neuron network in and confirms that the assumptions of spontaneously active lc neurons coupled electrically with a variable coupling strength are consistent with the experimental observations of the lc network .following the observations in that stronger coupling slows down network activity , we have studied how the firing rate depends on the coupling strength .we show that strong coupling results in synchronization and significantly decreases the firing rate ( see also ) .surprisingly , we found that the rate can be effectively controlled by the strength of interactions already for very weak coupling .we show that the dependence of the firing rate on the strength of coupling is nonmonotone .this has an important implication for the interpretation of the experimental data . 
because two distinct firing patterns can have similar firing rates ,the firing rate alone does not determine the response of the network to external stimulation .this situation is illustrated in fig .we choose parameters such that two networks , the spontaneously active ( fig .[ f.9]a ) and the nearly synchronous one ( fig .[ f.9]b ) , exhibit about the same activity rates ( see fig .[ f.9]c , d ) . however , because the activity patterns generated by these networks are different , so are their responses to stimulation ( fig .[ f.9]e , f ) .the network in the spontaneous firing regime produces a barely noticeable response ( fig .[ f.9]g ) , whereas the response of the synchronized network is pronounced ( fig . [ f.9]h ) .network responses similar to these were observed experimentally and are associated with the good ( fig .[ f.9]h ) and poor ( fig .[ f.9]g ) cognitive performance .our analysis suggests that the state of the network ( i.e. , the spatio - temporal dynamics ) rather than the firing rate , determines the response of the lc network to afferent stimulation .the main hypotheses used in our analysis are that the local dynamical systems satisfy assumption [ sn ] and interact through electrical coupling .the latter means that the coupling is realized through one of the local variables , interpreted as voltage , and is subject to the two kirchhoff s laws for electrical circuits . in this formour assumptions cover many biological , physical , and technological problems , including power grids , sensor and communication networks , and consensus protocols for coordination of autonomous agents ( see and references therein ) .therefore , the results of this work elucidate the principles of pattern formation in an important class of problems .* acknowledgments .* this work was partly supported by the nsf award dms 1109367 ( to gm ) .[ sec : a ] to emphasize that the results of this study do not rely on any specific features of the lc neuron model , in our numerical experiments we used the morris - lecar model , a common type i biophysical model of an excitable cell .this model is based on the hodgkin - huxley paradigm .the function on the right hand side of the voltage equation ( [ 1.1 ] ) , , models the combined effect of the calcium and sodium currents , , the potassium current , , and a small leak current , , constants , , and stand for the reversal potentials and , , and denote the maximal conductances of the corresponding ionic currents .the activation of the calcium and potassium channels are modeled using the steady - state functions and the voltage - dependent time constant the parameter values are summarized the following table .brown , eric and moehlis , jeff and holmes , philip and clayton , ed and rajkowski , janusz and aston - jones , gary , the influence of spike rate and stimulus duration on noradrenergic neurons , _ journal of comp .neuroscience_,17 , 13 - 29 , 2004 .g. margulis , explicit group - theoretic constructions of combinatorial schemes and their applications in the construction of expanders and concentrators .( russian ) problemy peredachi informatsii 24 ( 1988 ) , no .1 , 5160 ; ( english translation in problems inform .transmission 24 ( 1988 ) , no .1 , 3946 ) .
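for completeness , a minimal python sketch of a type i morris - lecar cell is given below ; the parameter set is a standard snic ( type i ) choice from the literature , and the injected current , time step and euler scheme are illustrative assumptions , so the values in the paper's parameter table may differ .

```python
import numpy as np

# standard type I (SNIC) Morris-Lecar parameter set from the literature;
# the paper's exact table values may differ.
C, gCa, gK, gL = 20.0, 4.0, 8.0, 2.0            # uF/cm^2, mS/cm^2
VCa, VK, VL = 120.0, -84.0, -60.0               # reversal potentials in mV
V1, V2, V3, V4, phi = -1.2, 18.0, 12.0, 17.4, 1.0 / 15.0

def m_inf(V):  return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V):  return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V):  return 1.0 / np.cosh((V - V3) / (2 * V4))

def morris_lecar_rhs(V, w, I):
    """Right-hand side: calcium, potassium and leak currents in the voltage
    equation plus the potassium gating variable w."""
    dV = (I - gCa * m_inf(V) * (V - VCa)
            - gK * w * (V - VK)
            - gL * (V - VL)) / C
    dw = phi * (w_inf(V) - w) / tau_w(V)
    return dV, dw

# simple Euler integration of a single cell with constant injected current
V, w, dt, spikes, above = -60.0, 0.0, 0.05, 0, False
for _ in range(int(2000 / dt)):                  # 2000 ms of simulated time
    dV, dw = morris_lecar_rhs(V, w, I=50.0)      # I in uA/cm^2 (illustrative)
    V, w = V + dt * dV, w + dt * dw
    if V > 0 and not above:                      # count upward crossings of 0 mV
        spikes += 1
    above = V > 0
print("spikes in 2 s of simulated time:", spikes)
```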
the mathematical theory of pattern formation in electrically coupled networks of excitable neurons forced by small noise is presented in this work . using the freidlin - wentzell large deviation theory for randomly perturbed dynamical systems and the elements of the algebraic graph theory , we identify and analyze the main regimes in the network dynamics in terms of the key control parameters : excitability , coupling strength , and network topology . the analysis reveals the geometry of spontaneous dynamics in electrically coupled networks . specifically , we show that the location of the minima of a certain continuous function on the surface of the unit cube encodes the most likely activity patterns generated by the network . by studying how the minima of this function evolve under the variation of the coupling strength , we describe the principal transformations in the network dynamics . the minimization problem is also used for the quantitative description of the main dynamical regimes and transitions between them . in particular , for the weak and strong coupling regimes , we present asymptotic formulae for the network activity rate as a function of the coupling strength and the degree of the network . the variational analysis is complemented by the stability analysis of the synchronous state in the strong coupling regime . the stability estimates reveal the contribution of the network connectivity and the properties of the cycle subspace associated with the graph of the network to its synchronization properties . this work is motivated by the experimental and modeling studies of the ensemble of neurons in the locus coeruleus , a nucleus in the brainstem involved in the regulation of cognitive performance and behavior .
curved gabor filter , ridge frequency estimation , curved regions , curvature , fvc2004 , image enhancement , orientation field estimation , fingerprint recognition , verification tests , biometrics .gabor functions , in the form of gabor filters ( gfs ) and gabor wavelets , are applied for a multitude of purposes in many areas of image processing and pattern recognition .basically , the intentions for using gf and log - gf can be grouped into two categories : first , gf aim at enhancing images and the second common goal is to extract gabor features obtained from responses of filterbanks .typical fields of application include : [ [ texture ] ] texture + + + + + + + texture segmentation and classification , with applications such as e.g. recognizing species of tropical wood or classifying developmental stages of fruit flies .[ [ medical - and - biological - applications ] ] medical and biological applications + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in medical imaging , gfs are applied for the enhancement of structures like e.g. finger veins and muscle fibers in ultrasound images , for the detection of blood vessels in retinal images , as well as for many other tasks like e.g. analyzing event - related brain activity , assessing osteoporosis in radiographs and for modeling the behavior of simple cells in the mammalian visual cortex . [[ optical - character - recognition ] ] optical character recognition + + + + + + + + + + + + + + + + + + + + + + + + + + + + + gfs are utilized for text segmentation , character recognition , font recognition , and license plate recognition .[ [ object - recognition ] ] object recognition + + + + + + + + + + + + + + + + + + objects can be detected by gfs , e.g. cars .moreover , gfs can be used for performing content - based image retrieval .[ [ biometrics ] ] biometrics + + + + + + + + + + gabor functions play an important role in biometric recognition .they are employed for many physical or behavioral traits including iris , face , facial expression , speaker , speech , emotion recognition in speech , gait , handwriting , palmprint , and fingerprint recognition .[ [ fingerprint - recognition ] ] fingerprint recognition + + + + + + + + + + + + + + + + + + + + + + + gabor filterbanks are used for the segmentation and quality estimation of fingerprint images , for core point estimation , classification and fingerprint matching based on gabor features .gfs are also employed for generating synthetic fingerprints .the use of gf for fingerprint image enhancement was introduced in ., .the underlying curved region consists of 33 parallel curved lines with 65 points for each line .[ figcurvedgabor3d],scaledwidth=50.0% ] all aforementioned applications have in common that they use _ straight _ gabor filters , i.e. the x- and y - axis of the window underlying the gf are straight lines which are orthogonal . having the natural curvature inherent to fingerprints in mind , we propose _ curved gabor filters _( see figure [ figcurvedgabor3d ] ) for image enhancement .a gf can be regarded as an anisotropic diffusion filter which smooths along the orientation and performs inverse diffusion orthogonal to the orientation .the basic idea is to adopt the gf to the curved structure and smooth along the bent ridges and valleys .while this paper focuses on fingerprint image enhancement , curved gabor filters might also be useful in other fields of application , e.g. 
for the enhancement of curved structures like muscle fibers , cell filaments or annual rings in tree discs .image quality has a big impact on the performance of a fingerprint recognition system ( see e.g. and ) . the goal of image enhancement is to improve the overall performance by optimally preparing input images for later processing stages .most systems extract minutiae from fingerprints , and the presence of noise can interfere with the extraction . as a result , true minutiae may be missed and false minutiae may be detected , both having a negative effect on the recognition rate . in order to avoid these two types of errors , image enhancement aims at improving the clarity of the ridge and valley structure . with special consideration to the typical types of noise occurring in fingerprints ,an image enhancement method should have three important properties : * reconnect broken ridges , e.g. caused by dryness of the finger or scars ; * separate falsely conglutinated ridges , e.g. caused by wetness of the finger or smudges ; * preserve ridge endings and bifurcations .enhancement of low quality images ( occurring e.g. in all databases of fvc2004 ) and very low quality prints like latents ( e.g. nist sd27 ) is still a challenge .techniques based on contextual filtering are widely used for fingerprint image enhancement and a major difficulty lies in an automatic and reliable estimation of the local context , i.e. the local orientation and ridge frequency as input of the gf .failure to correctly estimate the local context can lead to the creation of artifacts in the enhanced image which consequently tends to increase the number of identification or verification errors . for low quality images, there is a substantial risk that an image enhancement step may impair the recognition performance as shown in ( results are cited in table [ tabbz3 ] of section [ secresults ] ) .the situation is even worse for very low quality images , and current approaches focus on minimizing the efforts required by a human expert for manually marking information in images of latent prints ( see and ) .the present work addresses these challenges as follows : in the next section , two state - of - the - art methods for orientation field estimation are combined for obtaining an estimation which is more robust than each individual one . in section [ secrf ] ,curved regions are introduced and employed for achieving a reliable ridge frequency estimation .based on the curved regions , in section [ seccgf ] curved gabor filters are defined . 
in section [ secresults ] ,all previously described methods are combined for the enhancement of low quality images from fvc2004 and performance improvements in comparison to existing methods are shown .the paper concludes with a discussion of the advantages and drawbacks of this approach , as well as possible future directions in section [ secconclusions ] .in order to obtain a robust orientation field ( of ) estimation for low quality images , two estimation methods are combined : the line sensor method and the gradients based method ( with a smoothing window size of 33 pixels ) .the ofs are compared at each pixel .if the angle between both estimations is smaller than a threshold ( here ) , the orientation of the combined of is set to the average of the two .otherwise , the pixel is marked as missing .afterwards , all inner gaps are reconstructed and up to a radius of 16 pixels , the orientation of the outer proximity is extrapolated , both as described in .results of verification tests on all 12 databases of fvc2000 to 2004 showed a better performance of the combined of applied for contextual image enhancement than each individual of estimation .the of being the only parameter that was changed , lower equal error rates can be interpreted as an indicator that the combined of contains fewer estimation errors than each of the individual estimations .simultaneously , we regard the combined of as a segmentation of the fingerprint image into foreground ( endowed with an of estimation ) and background .the information fusion strategy for obtaining the combined of was inspired by .the two of estimation methods can be regarded as judges or experts and the orientation estimation for a certain pixel as a judgment .if the angle between both estimations is greater than a threshold , the judgments are considered as incoherent , and consequently not averaged .if an estimation method provides no estimation for a pixel , it is regarded as abstaining .orientation estimations for pixels with incoherent or abstaining judges are reconstructed or extrapolated from pixels with coherent judgments ., title="fig:",scaledwidth=50.0% ] , title="fig:",scaledwidth=50.0% ] in , a ridge frequency ( rf ) estimation method was proposed which divides a fingerprint image into blocks of pixels , and for each block , it obtains an estimation from an oriented window of pixels by a method called ` x - signature ' which detects peaks in the gray - level profile .failures to estimate a rf , e.g. caused due to presence of noise , curvature or minutiae , are handled by interpolation and outliers are removed by low - pass filtering . in our experience , this method works well for good and medium quality prints , but it encounters serious difficulties obtaining a useful estimation when dealing with low quality prints . in this section , we propose a rf estimation method following the same basic idea - to obtain an estimation from the gray - level profile - but which bears several improvements in comparison to : ( i ) the profile is derived from a curved region which is different in shape and size from the oriented window of the x - signature method , ( ii ) we introduce an information criterion ( ic ) for the reliability of an estimation and ( iii ) depending on the ic , the gray - level profile is smoothed with a gaussian kernel , ( iv ) both , minima and maxima are taken into account and ( v ) the inverse median is applied for the rf estimate .if the clarity of the ridge and valley structure is disturbed by noise , e.g. 
caused by dryness or wetness of the finger , an oriented window of pixels may not contain a sufficient amount of information for a rf estimation ( e.g. see figure [ figcomparisonrf ] , left image ) . in regions where the ridges run almost parallel ,this may be compensated by averaging over larger distances along the lines . however ,if the ridges are curved , the enlargement of the rectangular window does not improve the consistency of the gray - profile , because the straight lines cut neighboring ridges and valleys . in order to overcome this limitation , we propose _ curved regions _ which adapt their shape to the local orientation .it is important to take the curvature of ridges and valleys into account , because about 94 % of all fingerprints belong to the classes right loop , whorl , left loop and tented arch , so that they contain core points and therefore regions of high curvature .pixels as used by the x - signature method ( left ) and a curved region consisting of 33 parallel lines and 65 points per line .noise can cause the x - signature method to fail , because the oriented window may contain an insufficient amount of information .magnifying the window ( blue ) along the local orientation does not remedy these deficiencies in regions of curvature and would lead to an erroneous gray level profile .the rf estimation based on curved regions ( right ) overcomes these limitations by considering the change of local orientation , i.e. curvature .[ figcomparisonrf],title="fig:",scaledwidth=40.0% ] pixels as used by the x - signature method ( left ) and a curved region consisting of 33 parallel lines and 65 points per line .noise can cause the x - signature method to fail , because the oriented window may contain an insufficient amount of information .magnifying the window ( blue ) along the local orientation does not remedy these deficiencies in regions of curvature and would lead to an erroneous gray level profile .the rf estimation based on curved regions ( right ) overcomes these limitations by considering the change of local orientation , i.e. curvature .[ figcomparisonrf],title="fig:",scaledwidth=40.0% ] let be the center of a curved region which consists of parallel curves and points along each curve . the midpoints ( depicted as blue squares in figure [ figcurvedregion1 ] ) of the parallel curves are initialised by following both directions orthogonal to the orientation for steps of one pixel unit , starting from the central pixel ( red square ) . at each step, the direction is adjusted , so that it is orthogonal to the local orientation. if the change between two consecutive local orientations is greater than a threshold , the presence of a core point is assumed , and the iteration is stopped .since all x- and y - coordinates are decimal values , the local orientation is interpolated .nearest neighbour and bilinear interpolation using the orientation of the four neighboring pixels are examined in section [ secresults ] .starting from each of the midpoints , curves are obtained by following the respective local orientation and its opposite direction ( local orientation ) for steps of one pixel unit , respectively . 
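the construction of a curved region just described can be sketched as follows ; bilinear interpolation of the orientation field , unit pixel steps , the core - point threshold and the omission of image - boundary checks are assumptions made for this illustration .

```python
import numpy as np

def bilinear(field, x, y):
    """Bilinear interpolation of a 2-D field at a real-valued (x, y) point
    (row index = y, column index = x); no bounds checking for brevity."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * field[y0, x0] + dx * (1 - dy) * field[y0, x0 + 1]
            + (1 - dx) * dy * field[y0 + 1, x0] + dx * dy * field[y0 + 1, x0 + 1])

def curved_region(of, xc, yc, p=33, q=65, max_turn=np.pi / 4):
    """Curved region of p parallel curves with q points each, centred at
    (xc, yc); 'of' is the orientation field in radians (modulo pi).
    Orientation wrap-around is not handled in this sketch."""
    half_p, half_q = p // 2, q // 2
    # 1) midpoints: walk orthogonally to the local orientation in both directions
    mids = [(float(xc), float(yc))]
    for sign in (+1, -1):
        x, y, prev = float(xc), float(yc), bilinear(of, xc, yc)
        pts = []
        for _ in range(half_p):
            theta = bilinear(of, x, y)
            if abs(theta - prev) > max_turn:   # large turn: assume a core point
                break
            x += sign * np.cos(theta + np.pi / 2)
            y += sign * np.sin(theta + np.pi / 2)
            pts.append((x, y))
            prev = theta
        mids = mids + pts if sign > 0 else pts[::-1] + mids
    # 2) curves: from every midpoint follow the local orientation both ways
    region = []
    for mx, my in mids:
        curve = [(mx, my)]
        for sign in (+1, -1):
            x, y = mx, my
            for _ in range(half_q):
                theta = bilinear(of, x, y)
                x += sign * np.cos(theta)
                y += sign * np.sin(theta)
                curve = curve + [(x, y)] if sign > 0 else [(x, y)] + curve
        region.append(curve)
    return region       # list of p curves, each a list of q (x, y) points
```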
as a by - product of constructing curved regions , a pixel - wise estimate of the local curvatureis obtained using the central curve of each region ( cf .the red curves in figures [ figcurvedregion1 ] and [ figcomparisonrf ] ) .the estimate is computed by adding up the absolute values of differences in orientation between the central point of the curve and the two end points .the outcome is an estimate of the curvature , i.e. integrated change in orientation along a curve ( here : of 65 pixel steps ) . for an illustration ,see figure [ figcurvature ] .the curvature estimate can be useful for singular point detection , fingerprint alignment or as additional information at the matching stage . , title="fig:",scaledwidth=40.0% ] , title="fig:",scaledwidth=40.0% ] . here ,false extrema are removed by one smoothing iteration with a gaussian kernel ( size=7 , ) .ieds for the profile at the bottom : 11 , 11 , 11 .median : 11 and .[ figprofile1],title="fig:",scaledwidth=25.0% ] + . here ,false extrema are removed by one smoothing iteration with a gaussian kernel ( size=7 , ) .ieds for the profile at the bottom : 11 , 11 , 11 .median : 11 and .[ figprofile1],title="fig:",scaledwidth=25.0% ] + . here, false extrema are removed by one smoothing iteration with a gaussian kernel ( size=7 , ) .ieds for the profile at the bottom : 11 , 11 , 11 .median : 11 and .[ figprofile1],title="fig:",scaledwidth=25.0% ] gray values at the decimal coordinates of the curve points are interpolated . in this study , three interpolation methods are taken into account : nearest neighbor , bilinear and bicubic ( considering 1 , 4 and 16 neighboring pixels for the gray value interpolation , respectively ) .the gray - level profile is produced by averaging the interpolated gray values along each curve ( in our experiments , the minimum number of valid points is set to 50% of the points per line ) .next , local extrema are detected and the distances between consecutive minima and consecutive maxima are stored . the rf estimate is the reciprocal of the median of the inter - extrema distances ( ieds ) .the proportion of the largest ied to the smallest ied is regarded as an information parameter for the reliability of the estimation : large values of are considered as an indicator for the occurrence of false extrema in the profile ( see figure [ figprofile1 ] ) . or for the absence of true extrema .only rf estimations where is below a threshold are regarded as valid ( for the tests in section [ secresults ] , we used ) .if of the gray - level profile produced by averaging along the curves exceeds the threshold , then , in some cases it is still possible to obtain a feasible rf estimation by smoothing the profile which may remove false minima and maxima , followed by a repetition of the estimation steps ( see figure [ figprofile1 ] ) . a gaussian with a size of 7 and was applied in our study , and a maximum number of three smoothing iterations was performed . in an additional constraint we require that at least two minima and two maxima are detected and the rf estimation is located within an appropriate range of valid values ( between and ) . as a final step ,the rf image is smoothed by averaging over a window of size pixels .the gabor filter is a two - dimensional filter formed by the combination of a cosine with a two - dimensional gaussian function and it has the general form : \right\ } \cdot \cos \left ( 2\pi \cdot f \cdot x_{\theta } \right)\ ] ] in ( 1 ) , the gabor filter is centered at the origin . 
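before turning to the gabor filter definition continued below , the ridge frequency estimation from a gray - level profile described above can be sketched as follows ; the information - criterion threshold , the gaussian smoothing width and the valid frequency range are illustrative values , since the exact settings are only partially given in the text .

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def rf_from_profile(profile, ic_max=2.0, max_smooth=3, rf_min=1/25.0, rf_max=1/3.0):
    """Ridge frequency as the inverse median inter-extrema distance (IED) of a
    gray-level profile; the IC threshold, smoothing width and valid RF range
    are illustrative values, not the paper's exact settings."""
    p = np.asarray(profile, dtype=float)
    for _ in range(max_smooth + 1):
        d = np.diff(p)
        maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
        if len(maxima) >= 2 and len(minima) >= 2:
            ieds = np.concatenate([np.diff(maxima), np.diff(minima)])
            ic = ieds.max() / ieds.min()        # reliability criterion
            rf = 1.0 / np.median(ieds)
            if ic <= ic_max and rf_min <= rf <= rf_max:
                return rf                        # valid estimate
        p = gaussian_filter1d(p, sigma=1.0)      # remove spurious extrema, retry
    return None                                  # no reliable estimate for this region
```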
denotes the rotation of the filter related to the x - axis and the local frequency . and signify the standard deviation of the gaussian function along the x- and y - axis , respectively .a curved gabor filter is computed by mapping a curved region to a two - dimensional array , followed by a point - wise multiplication with an unrotated gf ( ) .the curved region centered in consists of parallel lines and points along each line . the corresponding array contains the interpolated gray values ( see right image in figure [ figcurvedregion1 ] ) .the enhanced pixel is obtained by : finally , differences in brightness are compensated by a locally adaptive normalization ( using the formula from who proposed a global normalization as a first step before the of and rf estimation , and the gabor filtering ) . in our experiments , the desired mean and standard deviationwere set to 127.5 and 100 , respectively , and neighboring pixels within a circle of radius were considered ., and the size of the curved regions is pixels .[ figenhance1],title="fig:",scaledwidth=24.0% ] , and the size of the curved regions is pixels .[ figenhance1],title="fig:",scaledwidth=24.0% ] , and the size of the curved regions is pixels .[ figenhance1],title="fig:",scaledwidth=24.0% ] , and the size of the curved regions is pixels .[ figenhance1],title="fig:",scaledwidth=24.0% ] in the case of image enhancement by straight gfs , and other authors ( e.g. ) use quadratic windows of size pixels and choices for the standard deviation of the gaussian of , or very similar values .we agree with their arguments that the parameter selection of and involves a trade - off between an ineffective filter ( for small values of and ) and the risk of creating artifacts in the enhanced image ( for large values of and ) .moreover , the same reasoning holds true for the size of the window . in analogy to the situation during the rf estimation ( see figure [ figcomparisonrf ] ) , enlarging a rectangular window in a region with curved ridge and valley flow increases the risk for introducing noise and , as a consequence of this , false structures into the enhanced image .the main advantage of curved gabor filters is that they enable the choice of larger curved regions and high values for and without creating spurious features ( see figures [ figenhance1 ] and [ figenhancedetail ] ) . in this way ,curved gabor filters have a much greater smoothing potential in comparison to traditional gf . for curved gfs ,the only limitation is the accuracy of the of and rf estimation , and no longer the filter itself .the authors of applied a straight gf for fingerprint enhancement and proposed to use a circle instead of a square as the window underlying the gf in order to reduce the number of artifacts in the enhanced image .similarly , we tested an ellipse with major axis and minor axis instead of the full curved region , i.e. in equation [ eqcgf ] , only those interpolated gray values of array are considered which are located within the ellipse . in our tests , both variants achieved similar results on the fvc2004 databases ( see table [ tabbz3 ] ) .as opposed to , the term ` circular gf ' is used in and for denoting the case . 
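a minimal sketch of evaluating the curved gabor filter at one pixel is given below ; it assumes the curved region has already been mapped to a ( curves x points ) array of interpolated gray values ( cf . the region construction sketch above ) , places the cosine oscillation across the parallel curves according to our reading of the unrotated filter , and normalises by the absolute kernel mass as an illustrative choice rather than the paper's exact formula .

```python
import numpy as np

def unrotated_gabor(n_curves, n_points, freq, sigma_x, sigma_y):
    """Unrotated Gabor kernel (theta = 0 in the general form above) sampled on
    the same grid as the mapped curved region.  Axis 0 runs across the parallel
    curves (i.e. across the ridges), axis 1 along the curves, and the cosine
    oscillates across the ridges -- our reading of the mapping."""
    a = np.arange(n_curves) - n_curves // 2      # across-ridge coordinate
    b = np.arange(n_points) - n_points // 2      # along-ridge coordinate
    A, B = np.meshgrid(a, b, indexing="ij")
    envelope = np.exp(-0.5 * (A**2 / sigma_x**2 + B**2 / sigma_y**2))
    return envelope * np.cos(2.0 * np.pi * freq * A)

def curved_gabor_pixel(region_gray, freq, sigma_x=8.0, sigma_y=8.0):
    """Enhanced value of the region's centre pixel: point-wise product of the
    interpolated gray values (shape: n_curves x n_points) with the unrotated
    Gabor kernel, summed and normalised by the absolute kernel mass."""
    g = unrotated_gabor(*region_gray.shape, freq, sigma_x, sigma_y)
    return float(np.sum(region_gray * g) / np.sum(np.abs(g)))
```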
) , the only difference between the two is the shape of window underlying the gabor filter .artifacts are created by the straight filter which may impair the recognition performance and a true minutia is deleted ( highlighted by a red circle ) .[ figenhancedetail],title="fig:",scaledwidth=30.0% ] ) , the only difference between the two is the shape of window underlying the gabor filter .artifacts are created by the straight filter which may impair the recognition performance and a true minutia is deleted ( highlighted by a red circle ) .[ figenhancedetail],title="fig:",scaledwidth=30.0% ] ) , the only difference between the two is the shape of window underlying the gabor filter .artifacts are created by the straight filter which may impair the recognition performance and a true minutia is deleted ( highlighted by a red circle ) .[ figenhancedetail],title="fig:",scaledwidth=30.0% ]two algorithms were employed for matching the original and the enhanced gray - scale images .the matcher `` bozorth3 '' is based on the nist biometric image software package ( nbis ) , applying mindtct for minutiae extraction and bozorth3 for template matching .the matcher `` verifinger 5.0 grayscale '' is derived from the neurotechnology verifinger 5.0 sdk . for the verification tests , we follow the fvc protocol in order to ensure comparability of the results with and other researchers .2800 genuine and 4950 impostor recognition attempts were conducted for each of the fvc databases .equal error rates ( eers ) were calculated as described in ..eers in % for matchers bozorth3 and verifinger on the original and enhanced images of fvc2004 .parentheses indicate that only a small foreground area of the fingerprints was useful for recognition .results listed in the top four rows are cited from .parameters of the curved gabor filters : size of the curved region , interpolation method ( nn = nearest neighbor ) , considered pixels ( f = full curved region , e = elliptical ) , standard deviations of gaussian . [ tabbz3 ] [ cols="^,^,^,^,^",options="header " , ] curved gabor filters were applied for enhancing the images of fvc2004 .several choices for , , the size of the curved region and interpolation methods were tested .eers for some combinations of filter parameters are reported in table [ tabbz3 ] .other choices for the size of the curved region and the standard deviations of the gaussian resulted in similar eers . relating to the interpolation method ,only results for nearest neighbor are listed , because replacing it by bilinear or bicubic interpolation did not lead to a noticeable improvement in our tests . in order to compare the enhancement performance of curved gabor filters for low quality images with existing enhancement methods ,matcher bozorth3 was applied to the enhanced images of fvc2004 which enables the comparison with the traditional gf proposed in , short time fourier transform ( stft ) analysis and pyramid - based image filtering ( see table [ tabbz3 ] ) .furthermore , in order to isolate the influence of the of estimation and segmentation on the verification performance , we tested the x - signature method for rf estimation and straight gabor filters in combination with our of estimation and segmentation .eers are listed in the second and sixth row of table [ tabbz3 ] . in comparison to the results of the cited implementation which applied an of estimation and segmentation as described in , this led to lower eers on db1 and db2 , a higher eer on db3 and a similar performance on db4 . 
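for reference , the equal error rate used throughout these comparisons can be computed from genuine and impostor score sets as in the following sketch ; the simple threshold sweep is a common convention and not necessarily the exact procedure of the cited protocol .

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: operating point where the false match rate equals the false
    non-match rate, found by a sweep over all observed scores
    (higher score = better match)."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(fmr - fnmr))
    return 0.5 * (fmr[i] + fnmr[i])

# toy example with 2800 genuine and 4950 impostor scores, as in the FVC protocol
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(0.7, 0.1, 2800), rng.normal(0.4, 0.1, 4950)))
```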
in comparison to the performance on the original images , an improvement was observed on the first database and a deterioration on db3 and db4 .visual inspection of the enhanced images on db3 showed that the increase of the eer was caused largely by incorrect rf estimates of the x - signature method .moreover , we combined minutiae templates which were extracted by mindtct from images enhanced by curved gabor filters and from images enhanced by anisotropic diffusion filtering .a detailed representation of this combination can be found in and results are listed in table [ tabbz3 ] . to the best of our knowledge ,this combination performed with the lowest eers on the fvc2004 databases which have been achieved so far using mindtct and bozorth3 .the matcher referred to as verifinger 5.0 grayscale has a built - in enhancement step which can not be turned off , so that the results for the original images in table [ tabbz3 ] are obtained on matching images which were also enhanced ( by an undisclosed procedure of the commercial software ) .results using this matcher were included in order to show that even in the face of this built - in enhancement , the proposed image smoothing by curved gabor filters leads to considerable improvements in verification performance .the present work describes a method for ridge frequency estimation using curved regions and image enhancement by curved gabor filters . for low quality fingerprint images , in comparison to existing enhancement methods improvements of the matching performancewere shown .besides matching accuracy , speed is an important factor for fingerprint recognition systems .results given in section [ secresults ] were achieved using a proof of concept implementation written in java . in a first test of a gpu based implementation on a nvidia tesla c2070 ,computing the rf image using curved regions of size pixels took about 320 ms and applying curved gabor filters of size pixels took about 280 ms .the rf estimation can be further accelerated , if an estimate is computed only e.g. for every fourth pixel horizontally and vertically instead of a pixel - wise computation .these computing times indicate the practicability of the presented method for on - line verification systems . in our opinion, the potential for further improvements of the matching performance rests upon a better of estimation .the combined method delineated in section [ secof ] produces fewer erroneous estimations than each of the individual methods , but there is still room for improvement .as long as of estimation errors occur , it is necessary to choose the size of the curved gabor filters and the standard deviations of the gaussian envelope with care in order to balance strong image smoothing while avoiding spurious features .future work includes an exploration of a locally adaptive choice of these parameters , depending on the local image quality , and e.g. the local reliability of the of estimation .in addition , it will be of interest to apply the curved region based rf estimation and curved gabor filters to latent fingerprints .the author would like to thank thomas hotz , stephan huckemann , preda mihilescu and axel munk for their valuable comments , and daphne bcker for her work on the gpu based implementation .f. alonso - fernandez , j. fierrez - aguilar and j. 
ortega - garcia , `` an enhanced gabor filter - based segmentation algorithm for fingerprint recognition systems '' , proc .symposium on image and signal processing and analysis ( ispa2 ) , zagreb , croatia , pp .239 - 244 , 2005 .f. alonso - fernandez , j. fierrez - aguilar , j. ortega - garcia , j. gonzalez - rodriguez , h. fronthaler , k. kollreider and j. bigun , `` a comparative study of fingerprint image - quality estimation methods '' , ieee trans .forensics security , vol .2 , no.4 , pp . 734 - 743 , dec .2007 .a. m. bazen and s. h. gerez , `` systematic methods for the computation of the directional fields and singular points of fingerprints '' , ieee trans . pattern anal .24 , no . 7 , pp . 905 - 919 , jul .2002 .r. cappelli , d. maio and d. maltoni , `` semi - automatic enhancement of very low quality fingerprints '' , proc .6th int . symp . on image andsignal processing and analysis ( ispa ) , salzburg , austria , sep 16 - 18 , pp .678 - 683 , 2009 .j. g. daugman , `` uncertainty relation for resolution in space , spatial frequency , and orientation optimized by two - dimensional visual cortical filters '' , journal of the optical society america a , vol .2 , pp . 1160 - 1169 , 1985 . c. gottschlich , `` fingerprint growth prediction , image preprocessing and multi - level judgment aggregation '' , ph.d .thesis , university of gttingen , germany , 2010 , + http://resolver.sub.uni-goettingen.de/purl/?webdoc-2763 .c. gottschlich , p. mihilescu and a. munk , `` robust orientation field estimation and extrapolation using semilocal line sensors '' , ieee trans .forensics security , vol .802 - 811 , dec . 2009 .l. he , m. lech , n. maddage and n. allen , `` stress and emotion recognition using log - gabor filter analysis of speech spectrograms '' , proc .affective computing and intelligent interaction ( acii ) , amsterdam , netherlands , sep .10 - 12 , pp . 1 - 6 , 2009 .t. r. mengko and j. tjandra pramudito , `` implementation of gabor filter to texture analysis of radiographs in the assessment of osteoporosis '' , proc .asia - pacific conf .circuits and systems ( apccas ) , vol .251 - 254 , 2002 .v. mildner , s. goetze , k. d. kammeyer and a. mertins , `` optimization of gabor features for text - independent speaker identification '' , proc .circuits and systems ( iscas ) , new orleans , usa , may 27 - 30 , pp .3932 - 3935 , 2007 .r. ramanathan _ et alii _ , `` robust feature extraction technique for optical character recognition '' , proc .advances in computing , control , and telecommunication technologies ( act ) , trivandrum , india , pp .573 - 575 , 2009 .d. tao , x. li , x. wu and s. j. maybank , `` general tensor discriminant analysis and gabor features for gait recognition '' , ieee trans .pattern anal .1700 - 1715 , oct . 2007 . c. i. watson , m. d. garris , e. tabassi , c. l. wilson , r. m. mccabe , s. janet and k. ko , `` user s guide to nist biometric image software ( nbis ) '' , national institute of standards and technology , gaithersburg , usa , 2007 .q. wu , l. zhang and g. shi , `` robust speech feature extraction based on gabor filtering and tensor factorization '' , proc .acoustics , speech and signal process .( icassp ) , taipei , republic of china , apr .19 - 24 , pp .4649 - 4652 , 2009 .j. zhang and j. yang , `` finger - vein image enhancement based on combination of gray - level grouping and circular gabor filter '' , proc .int . conf . on information engineering and computer science ( iciecs ) , wuhan , china , dec 19 - 20 , pp. 1 - 4 , 2009 .h. 
zhong , w .- b . chen and c. zhang , `` classifying fruit fly early embryonic developmental stage based on embryo in situ hybridization images '' , proc .conf . semantic computing ( icsc ) , berkeley , ca , usa , sep .14 - 16 , pp .145 - 152 , 2009 .e. zhu , j. yin , g. zhang and c. hu , `` a gabor filter based fingerprint enhancement scheme using average frequency '' , int .journal of pattern recog . and artif .3 , pp . 417 - 429 , 2006 .
gabor filters play an important role in many application areas for the enhancement of various types of images and the extraction of gabor features . for the purpose of enhancing curved structures in noisy images , we introduce curved gabor filters which locally adapt their shape to the direction of flow . these curved gabor filters enable the choice of filter parameters which increase the smoothing power without creating artifacts in the enhanced image . in this paper , curved gabor filters are applied to the curved ridge and valley structure of low - quality fingerprint images . first , we combine two orientation field estimation methods in order to obtain a more robust estimation for very noisy images . next , curved regions are constructed by following the respective local orientation and they are used for estimating the local ridge frequency . lastly , curved gabor filters are defined based on curved regions and they are applied for the enhancement of low - quality fingerprint images . experimental results on the fvc2004 databases show improvements of this approach in comparison to state - of - the - art enhancement methods .
the acoustic characteristics of a room have been shown to be important to predict the speech quality and intelligibility , which is relevant to speech enhancement as well as for automatic speech recognition ( asr ) .the reverberation time and the direct - to - reverberation ratio ( ) are two important acoustic parameters .traditionally , and can be obtained from a measured room impuls response ( rir ) .however , it is not practical or not even possible to measure the corresponding rirs in most applications . consequently , the demand of blind and estimation directly from speech and audio signals is increasing . a number of approaches for blind estimation have been proposed earlier : based on the spectral decay distribution of the reverberant signal , is determined in by estimating the decay rate in each frequency band .a noise - robust version is presented in . in a blind estimation is achieved by a statistical model of the sound decay characteristics of reverberant speech .inspired by this , uses a pre - selection mechanism to detect plausible decays and a subsequent application of a maximum - likelihood criterion to estimate with a low computational complexity .alternatively , motivated by the progress that has been achieved using artificial neural networks in machine learning tasks , proposed a method to estimate blindly from reverberant speech using trained neural networks , for which short - term root - mean square values of speech signals were used as the network input .the approach in was also extended to estimate various acoustic room parameters in using the low frequency envelope spectrum .our work proposed a multi - layer perceptron using spectro - temporal modulation features to estimate .a comparison of energies at high and low modulation frequencies , the so - called speech - to - reverberation modulation energy ratio ( srmr ) , which is highly correlated to and , is evaluated in .the approaches mentioned so far use a single audio channel for obtaining the estimate , however , the majority of blind off - the - shelf estimators rely on multi - channel data .an approach to estimate based on a binaural input signal from which the direct component is eliminated by an equalization - cancellation operation was proposed in .another method using an octagonal microphone array has been presented in , where a spatial coherence matrix for the mixture of a direct and diffuse sound field was employed to estimate using a least - squares criterion . in ,an analytical expression was derived for the relationship between the and the binaural magnitude - squared coherence function .a null - steering beamformer is employed in to estimate the with a two - element microphone array .motivated by the fact that the amount of perceived reverberation depends on both and , we propose a novel approach to simultaneously and blindly estimate these parameters . in our previous work , we found spectro - temporal modulation features obtained by a 2d gabor filterbank to be strongly and non - linearly correlated with reverberation parameters .we refer to these features as _ auditory _gabor features , since the filters used for extraction resemble the spectro - temporal receptive fields in the auditory cortex of mammals , i.e. , it is likely that our auditory system is explicitly tuned to such patterns .the gabor features are used as input to an artificial neural network , i.e. 
a multi - layer perceptron ( mlp ) , which is trained for blind estimation of the parameter pair . the evaluation of performance focuses on the acoustic characterization of environments ( ace ) challenge evaluation test set in fullband mode with a single microphone . the remainder of this paper is organized as follows : section [ sec : method ] introduces the blind estimator based on the 2d gabor features and an mlp classifier . the detailed experimental procedure is described in section [ sec : expm ] according to the ace challenge regulations . the results and discussion are presented in section [ sec : result ] for the proposed estimator with the ace evaluation test set , and section [ sec : conclu ] concludes the paper . an overview of the estimation process is presented in figure [ fig : mlp ] : in a first step , reverberant signals are converted to spectro - temporal gabor filterbank features to capture information relevant for room parameter estimation . an mlp is trained to map the input pattern to pairs of parameters , where the label information is derived from the available rirs . since the mlp generates one estimate per time step , we obtain an _ utterance_-based estimate by simple temporal averaging and subsequent selection of the output neuron with the highest average activation ( _ winner - takes - all _ ) , as shown in figure [ fig : mlp_label ] for instance . the noisy reverberant speech signal y[k] is obtained from anechoic speech s[k] convolved with measured rirs h[k] and additive noise n[k] , denoted as y[k] = s[k] \ast h[k] + n[k] . ( figure [ fig : mlp ] : block diagram of the estimator : anechoic speech convolved with available rirs plus noise , gabor feature extraction , mlp , and temporal averaging of the label posteriors to form the estimate . ) ( figure [ fig : mlp_label ] : ( a ) frame - wise mlp output over the labels and ( b ) its mean value across frames within a time slot . ) gabor features are generated by 2d gabor filters applied to log - mel - spectrograms . the filters are localized spectro - temporal patterns with a high sensitivity towards amplitude modulations , as defined by g[m , \ell] = s_\mathrm{carr}[m , \ell] \cdot h_\mathrm{env}[m , \ell] , with s_\mathrm{carr}[m , \ell] = \exp( i \omega_m ( m - m_0 ) + i \omega_\ell ( \ell - \ell_0 ) ) and h_\mathrm{env}[m , \ell] = 0.5 - 0.5 \cdot \cos\!\left( \frac{2\pi ( m - m_0 )}{w_m + 1} \right) \cdot \cos\!\left( \frac{2\pi ( \ell - \ell_0 )}{w_\ell + 1} \right) , with m and \ell denoting the ( mel-)spectral and temporal frame indices , w_m and w_\ell the hann - envelope window lengths , and m_0 and \ell_0 the center indices , respectively .
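a small sketch of one such 2d modulation filter and its application to a log - mel - spectrogram follows ; a separable hann envelope peaking at the filter centre is used for simplicity ( the envelope expression above differs in form ) , only the real part of the response is kept , and the channel selection that reduces the features to 600 dimensions is omitted , so this is an illustration rather than the exact feature front end .

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_filter_2d(w_m, w_l, omega_m, omega_l):
    """Complex 2D modulation filter: complex-exponential carrier times a
    (separable) Hann envelope of size (w_m+1) x (w_l+1), centred at (m0, l0)."""
    m0, l0 = w_m // 2, w_l // 2
    m = np.arange(w_m + 1)[:, None]
    l = np.arange(w_l + 1)[None, :]
    carrier = np.exp(1j * (omega_m * (m - m0) + omega_l * (l - l0)))
    envelope = np.outer(np.hanning(w_m + 1), np.hanning(w_l + 1))
    return carrier * envelope

def gabor_features(log_mel, filters):
    """Filter a log-mel spectrogram (channels x frames) with each filter and
    stack the real parts of the responses along the channel axis."""
    responses = [np.real(convolve2d(log_mel, f, mode="same")) for f in filters]
    return np.vstack(responses)     # (n_filters * n_channels) x n_frames

# example: one diagonal filter (spectral and temporal modulations coupled)
filt = gabor_filter_2d(w_m=22, w_l=22, omega_m=0.3, omega_l=-0.3)
feats = gabor_features(np.random.randn(40, 200), [filt])
print(feats.shape)
```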
the periodicity of the sinusoidal - carrier function is defined by the radian frequencies , which allow the gabor filters to be tuned to particular directions of spectro - temporal modulation .the purely diagonal gabor filters as shown in figure [ fig : gbfb_2d ] , were found to result in the maximal sensitivity to the reverberation effect and thus , are used here to construct the gabor features for the estimator .each log - mel - spectrogram is filtered with these 48 filters in the filterbank that cover temporal modulations from 2.4 to 25 hz and spectral modulations from -0.25 to 0.25 cycles / channel , respectively .[ c][c]25.0 [ c][c]15.7 [ c][c]9.9 [ c][c]6.2 [ c][c]3.9 [ c][c]2.4 [ c][c]0.0 [ l][l]0.25 [ l][l]0.12 [ l][l]0.06 [ l][l]0.03 [ l][l]0.0 [ l][l]-0.25 [ l][l]-0.12 [ l][l]-0.06 [ l][l]-0.03 [ c][c]diagonal gabor filters [ c][c]temporal modulation frequency [ hz ] [ c][c]spectral modulation frequency [ cycl./chan . ]the ace challenge provides a development ( dev ) dataset for algorithm fine - tuning and an evaluation ( eval ) dataset for the final algorithm test .the task is aiming at blindly estimating two acoustic parameters , i.e. and , from noisy and reverberant speech .two different modes i.e. fullband and subband ( 1/3-octave iso since and are both frequency dependent parameters ) , and six microphone configurations , i.e. a single microphone ( single ) and microphone arrays with two ( laptop ) , three ( mobile ) , five ( cruciform ) , eight ( linear ) , and thirty - two ( spherical ) microphones , were introduced . the dataset was generated using anechoic speech convolved with rirs measured from real rooms with additive noise recorded under the same conditions .also , three types of noise signals , i.e. ambient , babble and fan noises , were added to generate the noisy reverberant dataset . for dev dataset , the signal - to - noise ratios ( snrs )were chosen to be 0 , 10 and 20 db , while for eval , the snrs were -1 , 12 and 18 db .the dev dataset is approximately 120 h length from all multi - microphone scenarios .each test set from eval contains 4500 utterances categorized by these 3 noise types and 3 snrs . for this paper, we focus on the tasks in the fullband mode of and estimation in the single microphone scenarios .our approach is also applicable to multi - microphone scenarios by selecting any channel of the speech data .the ground truth values of and were provided by the ace challenge .the ground truth is based on the energy decay curve computed from the rirs using the schroeder integral , to which the method proposed in is used to estimate .this method is shown to be more reliable under all conditions than the standard method according to iso3382 .the ground truth is estimated using the method of , where the direct path is determined by the ms around the maximum found using an equalization filter .the mlp shown in figure [ fig : mlp ] was implemented with the open - source kaldi asr toolkit compiled with a tesla k20c nvidia gpu with 5 gb memory size .it had 3 layers : the number of neurons in the input layer is 600 , i.e. dimension of the 2d diagonal gabor features ( cf .figure [ fig : gbfb_2d ] ) calculated in matlab . the temporal context considered by the mlpis limited to 1 frame , i.e. no splicing is applied .the number of hidden units is a free parameter that was optimized given the amount of training data and set to 8192 units , and the number of output neurons corresponds to the amount of pairs , i.e. 
100 as defined in the following ( also cf .figure [ fig : mlp_label ] ) .ace database was recorded by different individuals who were reading different text materials in english . here , we applied timit corpus to generate the training data for mlp , since timit contains recordings of phonetically - balanced prompted english speech and a total of 6300 sentences ( approximately 5.4 h ) . to avoid a strong mismatch between training and test data ( which is likely to hurt mlp classification performance ) we added the ace dev dataset to the training data . in order to match the amount of the dev dataset ( approximately 120 h ) , thereby balancing the two sets , timit utterances were convolved with the collected rirs circularly , which resulted in approximately 117 h timit training data .the sampling rate of all signals is 16 khz .[ c][c]-6 [ c][c]-5 [ c][c]-4 [ c][c]-3 [ c][c]-2 [ c][c]-1 [ c][c]0 [ c][c]1 [ c][c]2 [ c][c]3 [ c][c]4 [ c][c]5 [ c][c]6 [ c][c]7 [ c][c]8 [ c][c]9 [ c][c]10 [ c][c]11 [ c][c]12 [ c][c]13 [ c][c]14 [ c][c]15 [ r][r]1350 [ r][r]1250 [ r][r]1150 [ r][r]850 [ r][r]750 [ r][r]650 [ r][r]550 [ r][r]450 [ r][r]350 [ r][r]250 [ r][r]150 [ c][c] / ms [ c][c] / db [ c][c] distribution [ l][l]collected rirs [ l][l]ace dev [ l][l]ace eval , as well as the ground truth values of the ace dev and eval datasets . ] to cover a wide range of rirs that occur in real life scenarios , we use several open - source rir databases such as mardy , air database , reverb challenge and smard .further , we also recorded several rirs in two regular office rooms in our group .figure [ fig : rir_distru ] shows the distribution of values from the collected rirs , as well as the ace dev and eval datasets . ground truth values of the collected rirs were calculated based on the methods described in section [ ssec : ace ] . due to the lack of the corresponding equalization filters for the source ,the absolute peak position is considered as the maximum to determine the direct path for the calculations .an mlp has a limited number of output neurons , which limits the resolution for the target estimate .we chose a resolution based on the distribution of training rirs , with the aim of obtaining a sufficient number of observations for each pair , which is 100 ms for and 1 db for ( cf .figure [ fig : rir_distru ] where one bounding box represents one class ) .the boundaries of are 100 ms and 900 ms , with ranging from -6 db to 15 db .with these boundaries and the chosen resolution , 76 classes are obtained for the collected rirs ( light blue boxes ) , and 51 classes are obtained from the ace dev dataset ( light red boxes ) .these classes are partially overlapping ( light yellow boxes ) and result in a total of 100 classes .the ace noise signals were recorded in the same acoustic conditions as the rir measurement , i.e. , the noise captured by the microphone is reverberated .hence , the noise signals combined with our extended rirs should be reverberated as well . since the original noise signals were not available in the context of the challenge , we created noise signals with similar characteristics as the original ambient , babble and fan noise .* ambient noise was created by mixing recorded car noise and pink noise to obtain a colored noise with high energy in the low frequencies ( as the original ambient noise ) .* to create babble noise , we mixed clean speech signals ( two male , two female speakers ) from the wsjcam0 corpus . 
*a fan noise was recorded in an almost anechoic chamber to obtain the last noise type .subsequently , the noise signals were added to the anechoic speech at snrs of 0 , 10 and 20 db ( mimicking the procedure for the ace dev dataset ) , which were then convolved with the collected rirs .the estimation error is used for analysis and is defined as the difference between the estimated value and the ground truth value , i.e. in s for and in db for . for comparison ,the methods proposed in and in are employed as baseline to blindly estimate and , respectively .note that the blind estimator in requires a mapping function between the overall srmr from 5th to 8th channel and the ( both expressed in db ) , which is obtained by the ace dev single dataset .[ c][c]ambient [ c][c]babble [ c][c]fan [ c][c] / s [ c][c] / db [ c][c]snr / db [ l][l]baseline [ l][l]proposed estimation for ace eval single dataset . on each box ,the central mark is the median , the edges are the 25th and 75th percentiles , the whiskers show extreme values and the outliers are plotted individually . ][ l][l]laptop [ l][l]mobile [ l][l]cruciform [ l][l]linear [ l][l]spherical [ c][c]ambient [ c][c]babble [ c][c]fan [ c][c] / s [ c][c] / db [ c][c]snr / db as seen in figure [ fig : t60_single_comp ] , in general , the proposed method outperforms the baseline approaches . for ,the baseline method works better in slightly noisy environments with an snr of 18 db , while the performances degrade with lower snrs .the proposed method has a higher robustness with respect to additive noise , presumably because the statistical model is trained on noisy reverberant speech with various snrs .the median values of are close to 0 ms for all conditions ( 3 noise types and 3 snrs ) , and the upper and lower percentiles are within ms , which indicates that the proposed method is capable of providing accurate blind estimation .in addition , far less outliers are obtained compared to the baseline method .the same trend can be observed for , for which the baseline produces large errors for both median and percentiles , particularly in the low snr situations .the is underestimated by approximately -1.5 db . 
this could be explained by the limited resolution of estimates ( 100 ms for , 1 db for ) and the mismatch of data range for training data on the one hand , and for eval dataset on the other : as shown in figure [ fig : rir_distru ] , values from 1100 ms to 1400 ms are not covered by the training data at all .a detailed post analysis showed that underestimates of that arise from this mismatch go along with underestimates of ; for instance , a test sample with ground truth of ( 1293 ms , 4.96 db ) was estimated to be ( 750 ms , 1 db ) .it appears that the underestimated reverberation effect caused by an underestimate of is somehow compensated by the corresponding underestimate of .further , the mismatches of the snrs and the noise signals might also lead to estimation errors , and it seems that such mismatches affect the estimate stronger than the estimate .additionally , the proposed estimator is tested with the ace multi - microphone data , but only one channel ( here the first channel _ ch1 _ ) is selected to perform the same estimation process .the overall trend of the estimation results as shown in figure [ fig : multi_ch1 ] is similar to previous results , which serves as verification of our approach on a different ( and larger ) test set .again , the median values of are near to 0 ms and the percentiles are within ms , and the median values of are between -1 db and -2 db with .5 db percentiles .consistent performances across noise types and snrs indicate the importance of exploiting training data with a high amount of variability for a discriminative model in order to achieve robustness in adverse conditions .the computational cost of our approach is quantified in terms of the real - time factor ( rtf ) , defined as the ratio between the time taken to process a sentence and the length of the sentence .two components in our approach contribute most to the overall complexity , i.e. , the calculation of gabor features and the forward - run of the neural net ( cf .figure [ fig : rir_distru ] ) . for optimization of the first component ,the 2d convolution of spectrograms with gabor filters was replaced by multiplication with a universal matrix . since the proposed mlp estimator operates on a gpu ( cf .section [ ssec : plat ] ) , the computational complexity is measured in frames per second ( fps ) with the frame length of 25 ms and overlapping of 10 ms . a rough transfer from fps to rtfcan be computed by . with an average gpu speed of 23736 fps ,an average rtf of 0.0042 is obtained . 
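The transfer from fps to RTF is not spelled out above; one reading that is consistent with the quoted numbers and the 10 ms frame shift is sketched below. This is an assumption about the elided formula, not the authors' exact definition.

```python
# One second of audio contains 1 / 0.010 = 100 frames at a 10 ms frame shift, so
# the real-time factor is that rate divided by the processing speed in fps.
frame_shift_s = 0.010
frames_per_second_of_audio = 1.0 / frame_shift_s       # 100 frames per audio second
gpu_fps = 23736.0                                       # average GPU speed quoted above
rtf = frames_per_second_of_audio / gpu_fps
print(round(rtf, 4))                                    # 0.0042, matching the text
```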
In summary, the average RTF of the proposed estimator for the single-microphone scenario (4500 utterances), which provides both T60 and DRR simultaneously, is considerably lower than the RTFs of the baseline T60 and DRR estimators, which are 0.0483 and 0.3101, respectively. This contribution presented a novel method for estimating T60 and DRR in a blind and joint way, using an MLP for classification. It has been shown that the proposed method is capable of accurately estimating T60 and DRR in the context of the ACE challenge using single-microphone, fullband speech signals. The estimation errors of T60 and DRR cover a relatively small range, with corresponding median values of nearly 0 ms and -1.5 dB on average, respectively. Furthermore, compared to the baseline approaches, which only estimate either T60 or DRR at a time, the computational complexity of the proposed estimator is significantly lower, since the signal processing for feature extraction and the forward run of the neural net are not very demanding in terms of computational cost, and since T60 and DRR are estimated simultaneously.
Blind estimation of acoustic room parameters such as the reverberation time (T60) and the direct-to-reverberation ratio (DRR) is still a challenging task, especially in the case of blind estimation from reverberant speech signals. In this work, a novel approach is proposed for the joint estimation of T60 and DRR from wideband speech in noisy conditions. 2D Gabor filters arranged in a filterbank are exploited to extract features, which are then used as input to a multi-layer perceptron (MLP). The MLP output neurons correspond to specific pairs of (T60, DRR) estimates; the output is integrated over time, and a simple decision rule yields our estimate. The approach is applied to single-microphone fullband speech signals provided by the Acoustic Characterization of Environments (ACE) challenge. Our approach outperforms the baseline systems, with median errors of close-to-zero and -1.5 dB for the T60 and DRR estimates, respectively, while the calculation of the estimates is 5.8 times faster compared to the baseline. Keywords: reverberation time, direct-to-reverberation ratio, 2D Gabor features, multi-layer perceptron, ACE challenge
why do you read this proceeding of the meeting of a national astronomical society ? probably because of the same reason as for me to write it : because we love music .do you remember when did your interest in space sciences start ? and in music ?the first author ( jac ) is an astronomer because he watched _stars wars _ and _ the empire strikes back _ when he was four years old , and since then he has always wanted to `` explore '' other worlds , in situ or , more realistically , from the ground with state - of - the - art instruments and telescopes of all sizes .his interest in music came later , when he was already a teenager .the first compact disc that he ever bought was _ the songs of distant earth _ ( 1994 ) , but he had already all the mike oldfield s discography in cassette .the lyrics of one of the oldfield s songs , https://www.youtube.com/watch?v=gvzg9sio704[``saved by a bell '' ] in his album _ discovery _ ( 1984 ) , read like this : + _ would you like to look through my telescope ? + the milky way s a fine sight to see .+ all around our universe , we try so hard to view + what s new ._ make a trip down to sagittarius + and take a spin by some nebula .+ i hope the sky stays clear for us , the night goes on so far + in stars .[ ... ] shining like bright diamonds , the galaxies .+ jupiter and saturn spin by .+ passing by companions , they all go drifting by . + they fly !carry me down to see aquarius . +we re hoping to meet a shooting star .+ i can see there s going to be a message from afar .+ how close we are .+ at one point , jac was suggested to listen to _ dilogos tres _ , a daily programme on world music , ambient and new age in radio 3 ( the spanish analogue to bbc radio 1 , 2 and 3 , but with no classical music ) . after being hooked on it, he started listening the following programme in the radio dayparting , _ el ambig _ , and later the following one , and when he started the grade of physics he did not know how to study without listening music of any style ( radio 3 , radio clsica or his own cassettes and cds ) .his passion for music was so intense that every chapter of his phd thesis started with a piece of lyrics , such as claude bertout had done it for his review on t tauri stars with the leonard cohen s poem `` another night with telescope '' : + _ i know the stars + are wild as dust + and wait for no man s discipline + but as they wheel + from sky to sky they rake + our lives with pins of light . + _ in the meantime , enrique morente , antonio arias ( aa ) and a few great musicians in spain challenged the flamenco with a breaking album , https://play.spotify.com/album/3d9njfydhcxchmliuah63l[_omega_ ] ( 1996 ) , which has been played in concerts all over the world ( new york , ciudad de mxico , buenos aires , paris , marseille , cannes , bastia , antwerp and the whole spain ) . 
afterwards , enrique morente went on mixing the most traditional flamenco roots with other influences , while aa , as the leader of the rock band lagartija nick , started exploring new concepts , sometimes with astronomical inspiration and even lyrics .for example , in the homonymic album _lagartija nick _( 1999 ) , he composed songs on pulsars , ether , spheres traveling in space , a moon base , selenography , the experience of astronauts in space , hal 9000 and even light pollution ( https://www.youtube.com/watch?v=xendubjbrm8[``azora 67 '' ] ) : + _ demasiada luz , demasiada luz too much light , too much light + la luz ensucia el cielo light messes the sky + mi cielo est vaco con demasiada luz my sky is empty with too much light + la luz oculta estrellas light hides stars + mi cielo est vaco con demasiada luz my sky is empty with too much light + _ in 2007 , jac published an outreach paper in astronoma , the spanish counterpart of sky & telescope or sterne und weltraum , on examples of musical astronomy and astronomical music . just afterwards , the paths of jac , a professional astrophysicist expert in stars , brown dwarfs , planets and instrumentation , and aa , a professional musician , composer , vocalist , bassist and guitarist , merged into a single astro - musical project , dubbed *sounds*of*cosmos * .we are far from being pioneers in the use of music for education and outreach of astronomy . for example , carl sagan et al . or andrew fraknoi already set up other comprehensive lists of astro - musical examples when some of the authors of this contribution had not been born yet .however , our aim here is to show how we use 21st century tools for communicating `` astronomy for the masses '' ( in depeche mode s words ) .covers of the antonio arias albums _ multiverso _ ( 2009 , left , with the dome of the 2.2 m calar alto telescope in the background ) and _ multiverso ii .de la sole de la ciencia a la fsica de la inmortalidad _ ( 2013 , right).,title="fig:",scaledwidth=46.0% ] covers of the antonio arias albums _ multiverso _ ( 2009 , left , with the dome of the 2.2 m calar alto telescope in the background ) and _ multiverso ii . de la sole de la ciencia a la fsica de la inmortalidad _ ( 2013 , right).,title="fig:",scaledwidth=46.0% ] the origin of _ multiverso _ ( 2009 ; fig .[ fig1 ] , left panel ) , the first aa s solo album outside lagartija nick and 091 was another outreach paper in astronoma by jac on poetry and astronomy .the title of the album mixed cosmological ( `` multiverses '' ) and poetical ( `` multi - verses '' ) concepts ._ multiverso _ started with the noises of the dome of the 2.2 m calar alto telescope , continued with songs with lyrics from poems composed by carlos marzal , natalia carbajosa , jos emilio pachecho or , especially , david jou , and finished with a 21st - century revision of the johannes kepler s _ harmonices mundi _( music of the spheres ) . the album was premiered during the closing ceremony of the international year of astronomy 2009 in spain .the second track in _ multiverso _ was `` el ordenador simula el nacimiento de las estrellas '' ( computer simulates the formation of stars ; fig .[ fig2 ] ) , for which we produced a videoclip with real simulations by bate et al . .the other songs had titles such as `` desde una estrella enana '' ( on a g2v star with an old planet ) , `` gnesis '' ( on the big bang ) or `` la derrota de bill gates '' ( on the effect of a coronal mass ejection on the earth surface ) . 
_ multiverso ii .de la sole de la ciencia a la fsica de la inmortalidad _ ( 2013 ; fig .[ fig1 ] , right panel ) went on mixing scientific poetry with electric and bass guitars , drums and keyboards .the last track of the album was the first of our soundtracks for astronomical instruments , facilities or data releases : https://www.youtube.com/watch?v=yt36fuhxq8c[``c.a.r.m.e.n.e.s . '' ] .carmenes is the new optical and near - infrared high - resolution spectrograph at the 3.5 m calar alto , especially designed for the discovery of exoearths in the habitable zone around m dwarfs with the radial - velocity method .there was a version of the soundtrack in spanish starred by sole morente , the youngest enrique morente s daughter , and another https://www.youtube.com/watch?v=sgv8yiz-e9c[in english ] , which was played live during a concert in one of the telescope domes of the calar alto observatory ._ multiversos _ ( 2015 ) was a box set compilation that gathered both _multiverso _ and _ multiverso ii _ in vinyl lps , together with four digital downloadable tracks , which are the seed for a future _ multiverso iii_. one of the four tracks was https://www.youtube.com/watch?v=gruh3i9zcx4[``q-u-i joint tenerife '' ] .quijote is a set of two telescopes and three instruments at the observatorio del teide that measure the polarisation of the cosmic microwave background radiation in the 1140ghz frequency range with a spatial resolution of 1deg .some members of *sounds*of*cosmos * travelled to tenerife , where staff of the instituto de astrofsica de canarias filmed and edited the videoclip for the quijote soundtrack .norcia / malarge ( tambin brilla la materia ) " ] ( soundcloud , 2015 ) was our answer to the european space agency s estrack 40th anniversary sound contest .our brand - new rock song begun with the names of the esa ground - based space - tracking stations worldwide ( some of which appear in the title ) and followed with our characteristic astro - poetical lyrics in spanish ( _ hidden behind immense clouds , matter also shines ... _ ) .we accompanied our music with , e.g. , an ariane 5 go / no - go pre - launch sequence , the sputnik i s beep - beep and the asteroseismological sounds of a pulsating star .the instrumental song https://play.spotify.com/album/0pemwqj3bhxgglzt5omj20[_gaia_ dr1 ( a soundtrack for the esa billion star surveyor ) ] ( spotify , 2016 ) was premiered contemporaneously to the _ gaia _ first data release .the track duration of 63s reflected the 63d precession period of the _ gaia _ s lissajous orbit around the sun - earth lagrangian point l .caption of the videoclip of `` el ordenador simula el nacimiento de las estrellas '' ( _ multiverso _ 2009 ) .music by antonio arias , lyrics by david jou , original simulation by matthew bate , videoclip by david cabezas and jos a. caballero , and special effects by david callejn and javier fernndez .video available at https://www.youtube.com / watch?v = j9lcscv3mkk[youtube].,scaledwidth=99.0% ] besides the albums , we have also performed live shows , dubbed astroconcerts , in which we mix rock , pop , electronica , stellar astrophysics , introduction to astronomy , science in general , poetry and video art in different proportions depending on the audience and facilities . 
between december 2009 and may 2016 , we have played 12 astroconcerts in munich ( municon , `` eso opc p98 get - together '' ) , la laguna ( aguere cultural ) , almera ( calar alto observatory , live streaming ) , barcelona ( cosmocaixa , together with prof .david jou ) , madrid ( sala el sol , twice ; ix scientific meeting of the spanish astronomical society ; xix congreso estatal de astronoma ) , and granada ( palacio de congresos , twice : one together with prof .robert w. wilson , the other with 3d glasses , cinema - like projection screen and broadcasted live by radio 3 ; parque de las ciencias , together with prof .emilio alfaro ; instituto de astrofsica de andaluca ) , a number of musicians have collaborated in our astroconcerts , including members of rock bands los planetas , lori meyers , lagartija nick and pjaro jack ( see acknowledgements ) . since 2013 , jac is the contributing editor of _ musica universalis _, a section of the astronoma magazine .there , he writes monthly on the music that did _ not _ go in the golden voyager record ; muse ( the rock band ) , muse ( the vlt instrument ) and the muses ( euterpe and urania ) ; franco battiato ( _ telescopi giganti per seguire le stelle _ ) ; t - shirts with the mozart s _ eine kleine nachtmusik _ score and the s. jocelyn bell burnell s little green men-1 ( pc 1919 ) pulsar ... now , music and astronomy reaches a much wider audience , since jac has closed the loop and now collaborates with http://www.rtve.es/alacarta/audios/longitud-de-onda/[_longitud de onda _ ] , a radio clsica programme .there , he talks every other week on the music that _ did _ go in the golden voyager record ; flamenco and astronoma through the figures of _ el planeta _ , the first reported flamenco artist , and enrique morente ; science - fiction films that happen beyond the earth s low orbit and which soundtracks have been awarded or nominated to the academy award to the best score ; f. william herschel , who composed 24 symphonies and many concertos , apart from discovering uranus , titania , oberon , enceladus , mimas and infrared radiation , and building the largest telescope of the world for 50 years ...jac is a klaus tschira stiftung postdoctoral fellow at the lsw .we thank the sociedad espaola de astronoma for their support .other artists who have participated in our astroconcerts are : xarim arest , carmen arias , juano azagra , jaime beltrn , juanb .codornu , david fernndez , mafo fernndez , nayra garca , alfonso gonzlez ( popi ) , daniel guirado , carlos gracia , antonio lpez ( noni ) , migueline lpez , alejandro mndez , julian mndez , arturo muoz , florent muoz , juan r. rodrguez ( j ) , mario rodrguez and pepe ruiz .
We have been congratulated on stage by a Nobel laureate (he was our curtain raiser), played our music in planetariums, museums and observatories throughout Spain and at the end of a meeting of the ESO telescopes time allocation committee, shocked audiences at rock concerts, written monthly in _Musica Universalis_, made the second 3D concert in Spain after Kraftwerk and broadcast it live on Radio 3, mixed our music with poetry read aloud by scientists, composed the soundtracks of CARMENES, QUIJOTE, ESTRACK and the _Gaia_ first data release, and made a videoclip on how a computer simulates the formation of stars... All those moments will not be lost in time like tears in rain, but put together in Bilbao during the 2016 meeting of the Spanish Astronomical Society.
he tapjacking attack basically tricks the user into tapping on an object in the background layer by clever positioning of a foreground layer that is not tappable .hence , any user touches will be applied onto the background layer which is not visible to the user .it is essentially a delivery mechanism and the payload can be customised by the attacker .the exploit is payload and aspect ratio specific , therefore the exploit code will need to be modified depending on the payload desired by the attacker as well as the target device s aspect ratio .the attack is also limited by the screen real estate of the device , i will be elaborating more on that in the section on developing the application .the first step in developing the exploit will be to choose a payload .for this walkthrough , i will be using the application installer payload .we will need to note down the location and number of taps a user would make in order to install an application . in the case of google play ,the steps are as follows . 1 .open the app detail page of the target app 2 .tap install + + + 3 .tap accept + + + + 2 once the desired payload and steps has been identified , we can move on to developing the application .we would need to create a toast activity and have the image overlay the buttons which need to be pressed .toasts are normally used to display short text notifications and any taps will be filtered down to the background layer .positioning of the toast has to be done by trial and error .we will want to use density independent pixels ( dp ) when specifying the position so that the exploit code will work on devices with different resolutions but same aspect ratios .+ + the images have to be placed such that no image overlaps a tappable area of any previous screen . e.g. the image for the install buttonhas been shifted to the left slightly so it does not overlap the `` learn more '' link in the permissions page .this minimises the probability of the exploit failing .thus the attack in practical is limited to 2 to 3 clicks at most due to limited screen real estate .furthermore , the attack will also be unlikely to work if the size of the button is too small as it will be difficult as the victim might not be able to tap the exact spot .the next step would involve setting the toast to repeat on a loop so that is always displayed on the screen and set the background of the toast to white so as to obscure the target application .+ + + + at this point , we might want to include baits promising the user an incentive if they tap on the image repeatedly .we are now done with the development of the exploit and it can be packaged and installed on the target device via an appropriate method .as mentioned in the introduction section , the tapjacking attack is a delivery mechanism , hence its impact would depend on the payload . assuming that the attacker chose to use the installer payload, he would be able to perform a privilege escalation through the stealthy installation of a second app which requires multiple permissions that the user did not agree to .the exploit app itself does not require any permissions .+ + + + the second app will most likely request the following core permissions . 1 .receive_boot_completed - allows the attacker to start a service in the background whenever the phone is restarted .thus the user does not even need to run the application .2 . 
internet - allows upload of data on the phone to the attacker s server 3 .access_network_state - the attacker might want to upload data only when wifi is active so as not to use up too much quota and raise suspicions .depending on the attacker s motive , he can make use of any of the following permissions ( access_fine_location , camera , record_audio , read_calendar , read_call_log , read_contacts , read_sms , read_external_storage ... ) to compromise the privacy of the target user .there are a few tactics an attacker can use to conceal the attack from the user .one of the methods involves removing the app icon from the launcher and can be achieved by replacing `` android.intent.category.launcher '' in the manifest file with `` android.intent.category.default '' .therefore , the user will not be able to locate the app when he swipes through the list of installed apps on the launcher .the second method is to use a generic name such as `` android update service '' or `` bluetooth connection helper '' .on encountering such an application , a user will likely assume that the application is part of the android operating system and will ignore the application .apart from the installer payload , which is triggered by opening a `` market:// '' url in a webview , other url based payloads include a `` http(s):// '' and a `` tel:// '' payload . the http payload will allow an attacker to open any url inside a webview .the webpage could contain a full screen button which could trigger a file download or run code which exploits a vulnerability in the webview container .however , the tapjacking application will need the permission to access the internet which could raise user s suspicions .+ + the `` tel:// '' payload will cause the user to silently dial a number in the background. it does not have such a high impact on security and the worst that could happen would be that the attacker programmed the payload with a premium number and the user would be left with a higher phone bill than expected .apart from urls , an attacker can also use intents to launch other activities .for example , the settings activity can be called up using the following snippet of code `` new intent ( android.provider.settings.action_settings ) '' . with the settings activity in the background ,the attacker can then trick the user into performing various actions ranging from switching on and off wifi and bluetooth to allowing installation of apps from unknown sources .+ + the attacker can also use intents to launch third party applications using the following code snippet `` getpackagemanager().getlaunchintentforpackage ( ' ' com.bank.app " ) ; the impact of the attack would depend on the app in question .needless to say , the attacker would need to ensure that the target user has the target application installed and must be familiar with the various activities and layout of the target application .this variation of the attack is thus one of the most difficult to pull off .1 . exploitability - proof of concept 2 . impact - high 3 .complexity - very high 4 .overall - low only proof of concept code is available at the moment .thus an attacker will need to know basic android development in order to write or modify the code needed to exploit the vulnerability . as of now, there does not exist any tool which would automate the development of such an app when fed a payload .+ + the impact of the attack is ( potentially ) high and depends on the type of payload . 
in the case of the installer payload, an attacker would be able to access the call information , smses , location , files on sd card , camera and microphone , completely compromising the privacy of the user .hence , the impact is relatively severe .+ + complexity is very high because the attacker has first got to convince the user to install the application .he then has to convince the user to comply with the instructions and tap repeatedly on the images .lastly , there is a substantial chance of failure especially if the user s taps are not accurate .+ + in summary , the attack is not feasible because it requires the attacker to be skilled enough to write custom code and the user to be gullible enough to follow through with instructions .the attack is also not scalable as it only works on devices of a specific aspect ratio .a skilled attacker would be able to compromise phones in masses using easier techniques .therefore , this attack is not feasible and likely only used in a targeted attack .according to an unofficial source , the tapjacking vulnerability was claimed to have been patched in android version 4.0.3 .however , i have successfully carried out this attack on my phone which is running android 4.3 .i am unable to ascertain if this is because the manufacturer of my phone has not applied the patch in their images or whether the patch does not exist .+ + developers can set the filtertoucheswhenobscured property to true or override the onfiltertoucheventforsecurity method . setting the property to trueis the declarative security method and will ignore all taps when the app is not in the foreground . overriding the method is the programmatic security approach and gives the developer more flexibility .he can choose to ignore or to process the taps based on certain conditions .i.e. if the app was in the foreground within the last 5 seconds .given that even google play itself is vulnerable , it is unlikely that many developers practice either one of the methods above .+ + this is little that users can do to guard themselves against a tapjacking attack .but in general , users should try not to download obscure apps or download apps from third party app stores .they should look out for suspicious behaviour such as unsolicited app installs and practice common sense .i have walked through the process of planning and developing an application that exploits the tapjacking vulnerability . even though there is not much an android user can do to protect himself from such an attack ,there is little cause for concern as the attack is not feasible to pull off .nevertheless , android users should still adopt good security practices to thwart other attacks out there .lastly , developers should also play a more active role in ensuring that their applications are safe from such attacks .part of the code was shamelessly taken from nvisium s tapjacking proof of concept .the code was then revised for a more updated version of the android sdk and customised for the aspect ratio of my phone .i then stripped out some of the features so i could demonstrate how tapjacking works in the background .
Android is an open-source mobile operating system developed mainly by Google and used on a significant portion of mobile devices worldwide. In this paper, I examine an attack commonly known as tapjacking. I take the attack apart and walk through each individual step required to implement it. I then explore the various payload options available to an attacker. Lastly, I touch on the feasibility of the attack as well as mitigation strategies.
the study of dynamics of opinion formation is nowadays a hot topic in the statistical physics of complex systems , with a considerable amount of papers published in the last years ( see and references therein ) .even simple models can exhibit an interesting collective behavior that emerges from the microscopic interaction among individuals or agents in a given social network .usually those models exhibit nonequilibrium phase transitions and rich critical phenomena , which justifies the interest of physicists in the study of opinion dynamics . in the last few years , a recent attention has been done to the kinetic exchange opinion models ( keom ) , inspired in models of wealth exchange .the lccc model was the first one to consider kinetic exchanges among pairs of agents that present continuous states ( opinions ) . in this case, the model presents a continuous symmetry - breaking phase transition .after that , some extensions were analyzed for continuous and discrete opinions .for example , the inclusion of competitive interactions , three - agents interactions , dynamic self - confidence , presence of inflexible agents , and others , similarly to was done previously in other opinion dynamics , like the galam s models . in all these extensions the critical behavior of the system was extensively analyzed .dynamics of decision - making has been treated in several works in psychology and neuroscience . for the dynamics of opinion formation, we find many models by physicists dedicated to explain the decision - making process or the exchange of opinion through interactions among agents .the mechanisms consider kinetic exchanges ( keom ) , imitation ( voter model , sznajd model ) or the power of local majorities ( majority - rule model , majority - vote model ) , among others .nevertheless , the inclusion of noise and disorder can be considered in such models .usually discrete opinion models consider two distinct positions or opinions ( yes or no , democrat or republican , candidate a or candidate b ) .they can be enriched with the inclusion of a third state , , representing neutral state or indecision .indecision is a current and rising phenomenon which affects both recent and consolidated democracies .many reasons can lead an individual to become neutral or undecided , for example it can be associated to an anticonformism / nonconformism to the proposals on both sides of the debate . the impact of indecision / neutrality was considered recently in many works . in this workwe consider a discrete keom in the presence of noise and disorder .in addition to pairwise random interactions , we introduce an indecision noise that significantly affects the dynamics of the system .our aim is to analyze the critical behavior of the model . in this case ,based on analytical and numerical results , we found three distinct phase transitions , namely the usual ferro - paramagnetic transition , and two distinct transitions to an absorbing state : from the ferromagnetic state and from the paramagnetic one .we considered a keom with competitive positive / negative interactions .our artificial society is represented by individuals in a fully - connected graph .each agent can be in one of three possible opinions at each time step , i.e. 
, or .this general scheme can represent a public debate with two distinct choices , for example _ yes _ and _ no _ , and also including the undecided / neutral state .the following microscopic rules control our model : 1 .we choose two agents at random , say and , in a way that will try to persuade ; 2 . with probability ,the opinion of agent in the next step is updated according to the kinetic rule $ ] ; 3 . with probability ,the agent spontaneously change to the neutral state , i.e. , . in the above dynamic rule , sgn(x )is the signal function defined such that sgn(0)=0 .this is usual in keom , in order to keep all the agents opinions in one the three possible ones , , or .the pairwise couplings are quenched random variables does not affect our results , they can also be considered as annealed variables . ]that follows the discrete probability distribution . in other words, the parameter stands for the fraction of negative interactions .as discussed in previous works , the consideration of such negative interactions produces an effect similiar to the introduction of galam s contrarians in the population .in addition , competitive interactions were also considered for the modelling of coalition forming .the probability acts as a noise in the system , and it allows an autonomous decision of an individual to become neutral .it can be viewed as the volatility of some individuals , who tend to spontaneously change their choices . in a two - candidate election, if a given individual does not agree with the arguments of supporters of both sides , he / she can decide to not vote for any candidate , and in this case he / she becomes neutral . in this case, this indecision noise must be differentiated / disassociated from other usual kinds of noises because , unlike the others , it privileges only the neutral opinion . as a recent example , in the 2012 usa election barack obama and mitt romney disputed for the election for president as the main candidates .it was reported that two months out from election day , nearly a quarter of all registered voters are either undecided about the presidential race or iffy in their support for a candidate , as indicated by polls . for ,i.e. , in the absence of noise , the model undergoes a nonequilibrium order - disorder ( or ferro - paramagnetic ) transition at a critical fraction . in the ordered ferromagnetic phase ,one of the extreme opinions or dominates the population , whereas in the disordered paramagnetic phase the three opinions coexist with equal fractions ( ) . at this point , some definitions are necessary .the order parameter of the system can be defined as that is the `` magnetization per spin '' of the system , and stands for average over disorder or configurations , computed at the steady states .let us also define and as the stationary fractions or densities of opinions and , respectively .one can start considering the probabilities that contribute to increase and decrease the order parameter .following , one can obtain the master equation for , - \\ \nonumber & & \mbox{}- qf_1 -(1-q ) [ ( 1-p)f_1 f_{-1 } + pf_{1}^{2 } + ( 1-p)f_0 f_{-1 } + \\ & & \mbox{}+ pf_0 f_{1 } ] = 0 ~.\end{aligned}\ ] ] in the stationary state . 
using the normalization condition , we obtain two solutions for eq .( [ eq2 ] ) in the stationary state , namely , which implies in ( disordered solution ) , or in this case , eq .( [ eq3 ] ) is valid in the ferromagnetic phase .we emphasize that leads to , which agrees with the result of ref .one can obtain another equation for considering the fluxes into and out of the neutral state . in this case , the master equation for is given by considering the disordered phase , where , eq . ( [ eq4 ] ) gives us in the stationary state ( where ) \left(\frac{1-f_{0}}{2}\right ) ~,\ ] ] which gives us two solutions , namely which can be ignored by considering the steady state of the other two fractions and , or in this case , eq .( [ eq6 ] ) is valid in the paramagnetic phase .the above equations ( [ eq3 ] ) and ( [ eq6 ] ) are both valid at the critical point , and we can equate them to obtain these critical noises separate the ferromagnetic and the paramagnetic phases . as discussed above , in the ferromagnetic phase one of the extreme opinions or dominates the population ( one of the sides wins the debate ) , whereas in the paramagnetic phase the two extreme opinions coexist ( , i.e. , there is no decision ) .notice that we recover in eq .( [ eq6 ] ) and in eq .( [ eq7 ] ) for , in agreement with . in order to obtain an analytical expression for the order parameter, one can consider the fluxes into and out of the state .the master equation for is then - ( 1-q)\left[pf_{1}^{2 } + ( 1-p)f_{1 } f_{-1}\right ] - \\ & & \mbox{}-qf_{1 } ~.\end{aligned}\ ] ] considering the normalization condition and the expression for valid in the ferromagnetic phase , eq .( [ eq3 ] ) , we obtain for in the stationary state ( where ) \pm \sqrt{\delta}}{2(1-p)(2p-1 ) } ~,\ ] ] where [1 - 2(2p+q - pq ) ] ~.\ ] ] eq .( [ eq9 ] ) is plotted in fig .[ fig1 ] as a function of for typical values of . as eq .( [ eq9 ] ) predicts two solutions ( see the signals ) , one has two curves for each value of since , where is given by eq .( [ eq7 ] ) .when assumes one of these values consequently takes the other one . the curve labeled as is the disordered paramagnetic solution for the stationary fractions and solution signals the limit of validity of eq .( [ eq9 ] ) , and it will be discussed in the following . the order parameter can be given by .considering eqs .( [ eq3 ] ) and ( [ eq9 ] ) for and , respectively , one obtains \left [ 1 - 2(2p+q - pq)\right ] } } { ( 1-p)(1-q ) \sqrt{1 - 2p } } ~.\ ] ] one can see from eq .( [ eq11 ] ) that consensus is reached only for , i.e. , in the absence of negative interactions and noise all agents of the system will share one of the extreme opinions , or . in order to obtain the critical exponent that governs the order parameter in the vicinity of the order - disorder phase transition, one can simplify eq .( [ eq11 ] ) using eqs .( [ eq3 ] ) and ( [ eq7 ] ) . in this case, we have where is given by eq .( [ eq3 ] ) , i.e. , the solution valid in the ferromagnetic phase . in other words , one can write the order parameter in the usual form , where , a typical ising mean - field exponent in a ferro - paramagnetic phase transition , suggesting that our model is in the same universality class of the mean - field ising model , as expected due to the mean - field character of the interactions . on the other hand , the case presents a distinct behavior. 
putting in eq .( [ eq3 ] ) , one obtains using this result , obtained from eq .( [ eq7 ] ) and putting in eq .( [ eq13 ] ) , the order parameter can be written as and thus , where and .furthermore , using in eq .( [ eq14 ] ) , we have , which implies in due to the normalization condition .this result implies that for and all agents will be in the neutral state .a look to the microscopic rules that define our model shows that the system will remain in this state forever , indicating an absorbing state .thus , we have for an active - absorbing ( ferromagnetic - absorbing ) phase transition , i.e. , the critical behavior is affected and the system can be mapped in the universality class of mean - field directed percolation ( or contact process ) .as previous discussed , if the order parameter goes to zero with , signaling a ferromagnetic - paramagnetic phase transition at critical points given by eq .( [ eq7 ] ) . for the ferromagnetic solution for , eq .( [ eq3 ] ) is not valid anymore . in this casethe valid solution for is given by eq .( [ eq6 ] ) , the solution in the paramagnetic phase .considering eq .( [ eq6 ] ) and that in the paramagnetic phase we have , one obtains that is the above - mentioned solution for in the paramagnetic phase , see fig .[ fig1 ] . from eq .( [ eq16 ] ) , one can see that and for .these solutions are not physically acceptable , and thus the valid solution for is and .in addition , as in the paramagnetic phase , we can use another order parameter to analyze the system in the vicinity of , namely where we used eq .( [ eq6 ] ) . in this case, we have in the absorbing phase ( where ) and in the paramagnetic phase ( where ) .one can rewrite eq .( [ eq17 ] ) as where and .thus , for the system is always in an absorbing phase with all individuals sharing the neutral state ( or in other words ) , independent of .thus , the transition may be of active - absorbing type for , or paramagnetic - absorbing for , both occurring at and belonging to the directed percolation universality class ( critical exponent ) . for the best of our knowledge , it is the first time that the para - absorbing transition appears in a keom . to complement our results , we performed numerical simulations for a population size .we computed the order parameter by eq .( [ eq1 ] ) . in fig .[ fig2 ] we exhibit the numerical results for versus and typical values of , together with the analytical result given by eq .( [ eq11 ] ) .one can observe transitions at different points that depend on , with the usual finite - size effects for , and ( see the inset ) .moreover , we can also see in the inset that the order parameter goes exactly to zero for , independent of the value of , confirming the transition to the absorbing state , as analytically predicted . finally , we performed simulations in order to obtain estimates of the other critical exponents , since our analytical results give us only the critical points and the exponent . for the ferro - paramagnetic transition , we considered the usual finite - size scaling ( fss ) relations , that are validy in the vicinity of the transition .in addition , for the transitions to the absorbing state ( ferro - absorbing and para - absorbing ) , we considered the dynamic fss relations in fig .[ fig3 ] we exhibit as an example results for . in pannel ( a )we show the data collapse for the susceptibility , near the ferro - paramagnetic transition , where we obtained the usual mean - field ising exponents and , that are the standard exponents of keom . 
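In the same spirit, the exponent governing the order parameter below the ferro-paramagnetic transition can be checked with a direct log-log fit, as sketched below using the simulate() routine given earlier. The critical noise q_c must be supplied (for instance from eq. (7)); the offsets from the critical point and the small floor on the order parameter are illustrative choices.

```python
import numpy as np

def estimate_beta(q_c, p, deltas=(0.02, 0.04, 0.08, 0.16)):
    """Rough estimate of beta from O ~ (q_c - q)^beta just below the transition.

    All q_c - delta values are assumed to lie inside the ordered (ferromagnetic)
    phase; a tiny floor avoids log(0) for noisy small-system estimates.
    """
    orders = [max(simulate(p=p, q=q_c - d), 1e-6) for d in deltas]
    slope, _ = np.polyfit(np.log(deltas), np.log(orders), 1)
    return slope        # expected to be close to the mean-field Ising value 1/2
```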
in fig .[ fig3 ] ( b ) we show the time relaxation of the order parameter ( in the inset ) in the vicinity of the para - absorbing transition , as well as the data collapse ( main figure ) , for values near the transition point . our estimates for the critical exponents and are in agreement with the values for the directed percolation ( and contact process ) , and . to summarize , in fig . [ fig4 ]we exhibit the phase diagram of the model in the plane versus .the line separating the ferromagnetic and paramagnetic phases are given by eq .( [ eq7 ] ) , valid in the region and . the region for represents the absorbing phase , for all values of , and we also see the ferro - absorbing transition in the axis .in this work we have studied how the inclusion of negative pairwise interactions ( disorder ) and indecision ( noise ) affect the critical behavior of a kinetic exchange opinion model .the agents can be in one of three possible states ( opinions ) , represented by discrete variables and .the topology of the society is a fully - connected network .the disorder is ruled by a parameter , representing the fraction of negative interactions , and the noise is controlled by a parameter , representing the probability of a spontaneous change to the neutral state .we analyzed the critical behavior of the system in the stationary states .the consensus regarding one of the extreme opinions or , a macroscopic collective behavior representing a total agreement , is only reached when the two ingredients , disorder and noise , are completely absent ( ) . for suitable values of and ,the system is in an ordered ferromagnetic phase , where one of the two extreme opinions or are shared by the majority of the population .this state mimics the situation where one of the two topics under debate is winner , and presents an order parameter .we also found critical values that separate the ordered ferromagnetic phase from a disordered paramagnetic one . in this last phase ,the fractions of extreme opinions are equal and we have a null order parameter , . in this case , the debate does not present a winner side . the critical exponent associated with the order parameterwas found analytically to be , and we estimated through monte carlo simulations that the other exponents are and , typical mean - field exponents belonging to the ising model universality class . in this case ,our numerical results show the typical finite - size effects for the order parameter when the system undergoes the order - disorder transition . on the other hand , for ,i.e. , in the absence of negative interactions , the system is governed only by the noise . in this case, our analytical and numerical calculations showed that there is a critical point above which the system is in an absorbing state . in this state , all agents become neutral , and the dynamics does not evolve anymore , characterizing a typical noise - induced absorbing phase transition . in this case , the order parameter is identically null , and it does not present the usual finite - size effects of order - disorder transitions ( the `` tails '' of the order parameter ) . 
We considered the dynamic FSS relations for transitions to absorbing states. Our analytical results predicted the exponent of the order parameter in the vicinity of the transition, and numerically we found estimates for the other exponents. These exponents are the typical ones for mean-field directed percolation, a prototype of a nonequilibrium phase transition to an absorbing state. Finally, considering the region of parameters where the system is in a paramagnetic state, we observed another transition: above a critical noise the system is always in the absorbing state. This phase is the same absorbing phase observed in the previous case, but now the system goes from a paramagnetic phase with a null order parameter to an absorbing phase that also has a null order parameter. In order to calculate the critical exponent of this transition, we defined another order parameter based on the stationary fraction of neutral agents. It is nonzero in the paramagnetic phase, where the extreme opinions coexist with the neutral state, and it vanishes in the absorbing phase, where all agents are neutral. We again observed a directed-percolation exponent, and thus the paramagnetic-absorbing transition also belongs to the directed percolation universality class. To the best of our knowledge, it is the first time that this para-absorbing transition appears in a KEOM. All our analysis was done considering the noise as the control parameter, but the exponents do not depend on that choice; the other parameter can be considered as well, and the same critical exponents are found. The universality of the exponents is expected due to the mean-field character of the model. As extensions of this work, one can consider the inclusion of local or global mass-media effects, as well as the inclusion of a neighborhood (lattice, network). The authors acknowledge financial support from the Brazilian funding agency CNPq. Fox News, _Poll: key fraction of voters remain undecided, unexcited ahead of election_, available online at http://www.foxnews.com/politics/2012/08/25/poll-large-fraction-voters-remain-undecided-unexcited-ahead-election.html
In this work we study a 3-state (+1, 0, -1) opinion model in the presence of noise and disorder. We consider pairwise competitive interactions, with a fraction p of those interactions being negative (disorder). Moreover, there is a noise q that represents the probability of an individual spontaneously changing his opinion to the neutral state. Our aim is to study how the increase/decrease of the fraction of neutral agents affects the critical behavior of the system and the evolution of opinions. We derive analytical expressions for the order parameter of the model, as well as for the stationary fraction of each opinion, and we show that there are distinct phase transitions. One is the usual ferromagnetic-paramagnetic transition, which is in the Ising universality class. In addition, there are paramagnetic-absorbing and ferromagnetic-absorbing transitions, which belong to the directed percolation universality class. Our results are complemented by numerical simulations. Keywords: dynamics of social systems, collective phenomena, phase transitions, universality classes
ancient structures embody the culture and stories of people , who built , used and lived in them .this charm attracts tourists to the sites with well - preserved cultural heritage , which in turn has an enormous positive impact on the economy of the region . from this reason , the conservation and restoration of architectural heritageis encouraged in the majority of countries .however , an inappropriate intervention can cause a huge harm , and therefore the authorities established numerous requirements on the procedures and materials used for the conservation and repairs .a vast number of ancient structures are made of masonry , being a traditional construction material that exhibits an extraordinary durability if an adequate maintenance is provided .masonry bed joints are usually the weakest link and the deterioration and damage concentrates there .it has been established that the mortars used for repairs should be compatible with the original materials ; serious damage to a number of historic masonry structures has been caused by an extensive use of portland cement mortar over the past decades .the intention for its use was to avoid the inconveniences connected with the originally used lime - based mortars , such as slow setting , high shrinkage and low strength .however , the use of the portland cement mortars has been reconsidered for their low plasticity , excessive brittleness and early stiffness gain . moreover , the relatively high content of soluble salts that leach over time can severely damage the original masonry units because of large crystallization pressures and produce anaesthetic layers on their surface . the strict regulations with respect to the portland cement use led to the exploitation of traditional additives to lime - based mortars , such as volcanic ash , burnt clay shale or increasingly popular metakaolin .these additives , known as _pozzolans _ , have been used since the ancient times in combination with lime to improve a moisture and free - thaw resistance of mortars , to increase their durability and also their mechanical strength .the use of pozzolans is essential not only for bed joint mortar but also for rendering ones , because pure - lime mortars suffer from enormous shrinkage cracking that has a negative aesthetic impact and can even cause spalling of the facade surface layers .if there was no natural source of pozzolans available in the region , ancestors tried to find alternatives .phoenicians were probably the first ones to add crushed clay products , such as burnt bricks , tiles or pieces of pottery , to the mortars in order to increase their durability and strength .crushed bricks were often added to mortars used in load - bearing walls during the roman empire and romans called the material _ cocciopesto _cocciopesto mortars were then extensively used from the early hellenistic up to the ottoman period in water - retaining structures to protect the walls from moisture , typically in baths , canals and aqueducts .the brick dust was mainly used for rendering , while large pebbles up to 25 mm in diameter appeared mainly in masonry walls , arches and foundations .however , our previous studies revealed that the positive impact of ceramic fragments should not be attributed to the formation of hydration products due to limited reactivity , but rather to their compliance which limits shrinkage - induced cracking among aggregates and ensures a perfect bond with the surrounding matrix .the presented study was focused on the investigation of various mortars 
commonly used for repairs of cultural heritage and their structural performance through comprehensive experimental and numerical analyses .in particular , lime - based mortars with various additives and aggregates , introduced in section [ sec : materials ] , were used in bed joints of masonry piers subjected to a combination of quasi - static compression and bending .the purpose of the experimental analysis , described in section [ sec : experimentaltesting ] , was to study the failure modes and crack patterns using digital image correlation ( dic ) , assess the structural performance of individual mortars , and verify the proposed material model used for the finite element ( fe ) predictions , presented in section [ sec : numericalsimulations ] .the fe analysis was consequently utilized in section [ sec : casestudy ] to assess the key material parameters influencing the load - bearing capacity , and to study the failure modes of the masonry piers containing mortars with variable properties , subjected to a combination of compression and bending .compared to historic limes , today s commercial ones are very pure , despite the very benevolent regulating standard en 459 - 1 requiring the mass of cao and mgo in the commonly used cl-90 lime hydrate higher than 90% .however , the presence of impurities in historic limes mortars was not always harmful , since the content of silica ( sio ) and alumina ( al ) was responsible for their hydraulic character . the inconveniences connected to the use of modern lime , such as limited binder strength , slow hardening , enormous shrinkage , and consequent cracking and poor cohesion between the mortar and surrounding masonry blocks can be overcome by the use of reactive additives rich in aluminosilicates , such as metakaolin or portland cement . while metakaolin has been generally accepted by the restoration community ,the use of portland cement is on decline and the authorities for cultural heritage in many countries prohibit its additions to repair mortars . according to a few studies , calcium - silicate - hydrates( csh ) and calcium - aluminum - silicate - hydrates ( cash ) are the main hydrated phases formed at the room temperature after the pozzolanic reaction of metakaolin and ca(oh) .the metakaolin presence in lime - based mortars results in an enhanced strength and durability , while the vapor transport properties are superior to the mortars containing portland cement . beside the addition of pozzolans, shrinkage can be efficiently reduced by increasing the content of inert aggregates , since the stiff inclusions restrain the volume changes of the surrounding matrix , which is more pronounced in the case of bigger inclusions . however , large stiff pebbles are responsible for a formation of microcracks . that have a negative impact on the mortar integrity and reduce the mortar strength and stiffness .moreover , the shrinkage - induced cracking of mortars poor in pozzolans , or containing unsuitable aggregates , limits their use as renderings because of their poor aesthetic performance . even though it is generally accepted that the presence of sand aggregates increases the resistance of mortars against mechanical loading, there is a threshold beyond which any addition of aggregates makes the mortar weaker due to excessive microcracking and loss of cohesion between the grains and the surrounding matrix . 
by experience , the 1 : 3 binder to aggregate volume ratio has been established as the most suitable for repair mortars , providing a reasonable strength , shrinkage and porosity . based on the study by stefanidou and papayianni it seems most favorable to use the sand of grain - size ranging between 0 and 4 mm , resulting in mortars of the highest strength .vitrivius , roman author , architect and engineer , who lived in the first century bc , recommended in his _ ten books on architecture _ to add some portion of crushed bricks into mortars in order to increase their durability and strength . according to silva et al . , the amorphous components of brick fragments , mainly represented by aluminosilicates , are able to react with lime and make the interfacial surface alkaline .the reaction products are supposed to give mortars a hydraulic character , and fill the voids and discontinuities in the thickness of about 20 m from the interface between the crushed brick fragments and the surrounding matrix .however , such processes can take place only only if the ceramic clay is fired at appropriate temperatures between 600 and 900 , and the mortar is hardening in a sufficiently wet environment for a considerable amount of time .even if the reaction takes place , the reaction - rim thickness is very limited and does not have any significant impact on the mortar properties , as proven by the results of nanoindentation of ancient mortar samples in our previous work .more importantly , the relatively compliant crushed brick fragments relieve the shrinkage - induced stresses and reduce the number of microcracks within the mortar matrix . beside the positive impact of crushed brick fragments on the mechanical properties and durability of the cocciopesto mortars , the use of crushed bricks brings another benefit the use of waste by - products from ceramic plants leads to a cost reduction and production of a more sustainable material . for our study, we used a commonly available white air - slaked lime ( cl90 ) ertovy schody of a great purity ( 98.98% of cao + mgo ) , produced in the czech republic .the most frequent particle diameter found in the lime hydrate was equal to 15 m and its specific surface area , determined by the gas adsorption , was equal to 16.5 m/g .the finely ground burnt claystone metakaolin with a commercial name mefisto l05 , produced by esk lupkov zvody inc ., nov straec , czech republic , was chosen as the pozzolanic material .this additive is rich in sio ( 52.1 % ) and al ( 43.4 % ) .portland cement cem i 42.5 r produced in radotn , the czech republic was used as an alternative to metakaolin .the selected portland cement was rich in cao ( 66 % ) , sio ( 20 % ) , al ( 4 % ) , fe ( 3 % ) , so ( 3 % ) and mgo ( 2 % ) , as provided by xrf analysis . 
beside the investigation of metakaolin and portland cement additions on the mechanical properties of lime - based mortars ,the study was also focused on the influence of aggregate composition .river sand of grain size ranging between 0 and 4 mm from zlezlice was selected based on experience as the most suitable for the application as the bed joint mortar .the industrially produced crushed brick fragments of the grain - size 25 mm , from a brick plant bratronice , the czech republic , were chosen based on results of previous studies and experience of authors acquired by analyses of ancient mortar samples .the grain size distribution of the sand and crushed bricks aggregates , obtained by a sieve analysis , is presented in figure [ fig : gradingcurves ] .the mass ratio of lime and metakaolin / portland cement was equal to 7:3 in all mortars .the amount of water was adjusted so that the fresh mortars fulfilled the workability slump test in accordance with sn en 1015 - 3 and the mortar cone expansion reached 13.5.3 cm .such consistency ensured a sufficient workability while keeping the water to binder ratio ( w / b ) as low as possible to avoid shrinkage cracking .the amount of aggregates was designed based on our experience , previous studies and results of micromechanical modeling towards high strength and acceptable shrinkage .the composition of the tested mortars is summarized in table [ tab : mortarscomposition ] ..mass ratios of constituents in the tested mortars and their shrinkage after 90 days of hardening ; pc and cb abbreviations stand for portland cement and crushed bricks , respectively . [cols="<,^,^,^,^,^,^,^,^ " , ] the geometry of the 3d fe model was following the geometry of the experimentally tested masonry piers , as described in figure [ fig : schemepiers ] ; the fe mesh is presented in figure [ fig : piersmesh ] .the interface between bricks and surrounding mortar was not explicitly defined and modeled using interface elements , because the interface was not the weakest link in tension , recall section [ sec : parametersacquisition ] . in order to define the loading of the piers and boundary conditions in a realistic way ,the model also consisted , beside the auxiliary steel slabs , of the cylindrical load cell , both modeled as an isotropic elastic continuum .the loading was accomplished by an incremental displacement imposed to a single node at the crest of the steel cylinder in order to allow rotations around all axes .the load - step increments were adjusted for each loading stage individually in order to reach convergence for a minimum computational cost .the fe model was at first validated by comparing the predicted and measured load - displacement diagrams , as presented in figure [ fig : diagramspiers_vertical ] . both , the reactions in a load cell and the displacements obtained from dic by placing virtual extensometers at the top and bottom of the tested piers , were in a good agreement with the numerically obtained predictions , figure [ fig : schemepiers ] .the agreement between the predicted cracking patterns ( reflected in the field of damage distribution ) and the observed development of cracks ( visualized as a map of maximum principal strains obtained from dic ) is not perfect for all tested piers .however , considering the non - homogeneous microstructure of the tested materials , the fe analysis results cab be considered satisfactory . 
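as an illustration of how such a validation can be quantified, the sketch below interpolates a predicted load-displacement curve onto the measured displacements and reports a normalised misfit; the arrays are hypothetical placeholders rather than the data of the tested piers, and the metric is one possible choice, not necessarily the one used in the study.

```python
import numpy as np

def curve_misfit(disp_exp, force_exp, disp_fem, force_fem):
    """Relative misfit between a measured and a predicted load-displacement
    curve.  The FE prediction is interpolated onto the experimentally
    measured displacements (e.g. DIC virtual extensometers) and a
    normalised root-mean-square deviation is returned.  disp_fem must be
    monotonically increasing for np.interp."""
    disp_exp = np.asarray(disp_exp, dtype=float)
    force_exp = np.asarray(force_exp, dtype=float)
    # restrict to the displacement range covered by the FE curve
    mask = (disp_exp >= np.min(disp_fem)) & (disp_exp <= np.max(disp_fem))
    f_fem_on_exp = np.interp(disp_exp[mask], disp_fem, force_fem)
    rmse = np.sqrt(np.mean((f_fem_on_exp - force_exp[mask]) ** 2))
    return rmse / np.max(np.abs(force_exp[mask]))

# hypothetical curves (mm, kN); real data would come from the load cell,
# the DIC extensometers and the FE post-processing
d_exp, f_exp = [0.0, 0.5, 1.0, 1.5], [0.0, 300.0, 620.0, 790.0]
d_fem, f_fem = [0.0, 0.4, 0.9, 1.6], [0.0, 260.0, 580.0, 810.0]
print(curve_misfit(d_exp, f_exp, d_fem, f_fem))
```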
in the case of piers with lime - cement mortarlc - s ( figure [ fig : crackpattern_lc - s ] ) , the model correctly predicted the formation of multiple cracks at the compressed side of the tested piers and the formation of two major cracks at the opposite side due to tensile stresses from the pier bending .the dic results in the case of lime - metakaolin mortar lmk - s ( figure [ fig : crackpattern_lmk - s ] ) were influenced by a spalling of pier surface at the bottom , but the major crack formation in the middle of the tested pier can be identified in both , model predictions and dic results . on the other hand ,the formation of the major splitting crack in the case of mortar lmk - scb ( figure [ fig : crackpattern_lmk - scb ] ) , containing metakaolin and crushed bricks , was perfectly predicted by the fe simulation , as well as the distributed cracking at the compressed pier edge in the case of the weak mortar l - scb ( figure [ fig : crackpattern_l - scb ] ) . in conclusion , the strategy to model the masonry units and mortar separately allowed us to capture the failure mode quite realistically ( see figure [ fig : calibrationallpiers ] ) , enabling to study the relationship between the mechanical resistance of the masonry piers and bed - joint mortar properties .0.43 0.43 0.43 0.43 the aim of the presented study was to show the relationship between the individual mortar material parameters and the load - bearing capacity of masonry piers having the same configuration of geometry and loading conditions as described in sections [ sec : experimentaltesting ] and [ sec : numericalsimulationsofpiers ] .the lime - metakaolin mortar without crushed bricks ( lmk - s ) was chosen as the reference material , for which a single material parameter was changed at a time to assess its impact on the load - bearing capacity of the masonry pier . such analysis clearly indicated what the key material parameters were , and how to optimize the mortar composition towards a higher mechanical resistance of masonry structures .similar approach was adopted e.g. by sandoval and roca , who studied the influence of geometry and material properties of individual constituents on the buckling behavior of masonry walls .the plot in figure [ fig : dependence_e ] clearly demonstrates that the value of the mortar young s modulus has just a minor impact on the load - bearing capacity of the studied masonry pier , and that there is no abrupt change when the mortar stiffness becomes superior to the stiffness of masonry units .however , the failure mode changes quite significantly .the use of a compliant mortar results in a multiple cracking of bricks at the more loaded side due to poor supporting , while a major crack passing through the entire column in the middle forms if the bed joints are stiff , see figure [ fig : crackpatterns_e ]. it could seem advantageous to use mortars lacking pozzolanic additives because of their lower stiffness , in order to produce masonry of a higher deformation capacity within the elastic range .such masonry would in theory better resist seismic loading or imposed displacements , e.g. due to differential subsoil settlement .however , the compliant pure - lime mortars without additives promoting the hydraulic reactions are weak and suffer from an increased shrinkage cracking . = 2 gpa , left ) and stiff ( = 20 gpa , right ) mortar . 
]0.48 = 2 gpa , left ) and stiff ( = 20 gpa , right ) mortar.,title="fig : " ] 0.48 = 2 gpa , left ) and stiff ( = 20 gpa , right ) mortar.,title="fig : " ] the tensile strength and fracture energy had to be modified carefully at the same time in order to avoid snap - back in the stress - strain diagram , and to preserve the same post - peak ductility for all investigated mortars . given the studied masonry pier and the boundary conditions , the tensile strength appears to have just a minor effect if it is lower than the strength of masonry units ( bricks ) , see figure [ fig : dependence_ft ] . on the other hand , the mortars of higher strength in tension act as a confinement of the eunits and the masonry reinforcement .since common lime- or cement - based mortars hardly attain the tensile strength superior to the strength of masonry units , the bed joint strengthening is accomplished e.g. by means of embedded steel rods . according to our numerical simulations , the confinement imposed by the strong mortars resulted in the cracking of the bricks and eventually the formation of the wedge - like failure as opposed to the vertical splitting of the pier containing a very weak mortar as indicated in figure [ fig : crackpatterns_ft ] .= 0.1 mpa , left ) and strong ( = 3.2 mpa , right ) mortar in tension . ]0.48 = 0.1 mpa , left ) and strong ( = 3.2 mpa , right ) mortar in tension.,title="fig : " ] 0.48 = 0.1 mpa , left ) and strong ( = 3.2 mpa , right ) mortar in tension.,title="fig : " ] the bed - joint mortar compressive strength appears to be the crucial parameter with respect to the load - bearing capacity of masonry subjected to a combination of compression and bending .mortars of a low compressive strength suffer an irreversible deformation at relatively low levels of external load , and masonry units are consequently subjected to uneven distribution of stresses due to imperfect supporting and excessive deformation of the bed joints . in the case of the modeled masonry piers ,the early crushing of the weak bed - joint mortar resulted in cracking of bricks at the more loaded pier periphery , figure [ fig : crackpatterns_fc ] .this phenomenon limited the load - bearing capacity of the tested pier rather significantly , especially in the case of very poor mortars ( mpa ) , see figure [ fig : dependence_fc ] .the bed joints containing mortars of a high compressive strength did not suffer the inelastic deformation before the major splitting vertical crack appeared due to transversal expansion , resulting in a high load - bearing capacity .therefore , the mortars with superior compressive strength should be used especially if a bed joint reinforcement is introduced so that the high strength can be efficiently exploited .= 3 mpa , left ) and strong ( = 33 mpa , right ) mortar in compression . ]0.48 = 3 mpa , left ) and strong ( = 33 mpa , right ) mortar in compression.,title="fig : " ] 0.48 = 3 mpa , left ) and strong ( = 33 mpa , right ) mortar in compression.,title="fig : " ]the eccentrically compressed masonry pier was selected as a model example to address both , behavior in compression , being the most frequent loading of masonry elements , and tension , which is considered critical for masonry .the performance of the conventionally used lime - cement mortar was compared with mortars containing the pozzolanic alternative metakaolin . 
to reach even better performance ,crushed brick fragments were also used to replace a portion of stiff river sand .such approach was adopted based on findings from the previous studies , e.g. , claiming that mortars containing active pozzolans and relatively compliant crushed brick fragments exhibit a superior strength .series of compression and three - point bending tests were carried out primarily in order to obtain the input parameters characterizing individual materials in the fe model .the results of the basic material tests conclusively indicate that the addition of metakaolin provides the lime - based mortars with significantly higher strength than the addition of portland cement .on the other hand , the pure - lime mortars lacking any additives appeared to be very poor .these findings are in agreement with several studies ; e.g. by vejmelkov et al . claiming that by replacing 20 % of lime with metakaolin the mortar compressive strength can increase up to five times and the flexural strength up to three times , which is in agreement with the study by velosa et al . .the partial replacement of sand grains by crushed brick fragments further improved the mechanical performance of the mortars , justifying their extensive use in ancient times .the higher strength is attributed to a reduction of shrinkage - induced cracking due to presence of the compliant brick fragments .this in turn leads to a better mortar integrity as suggested by neerka et al . , and lower stress concentrations in the vicinity of aggregates , as studied in detail in .the superior strength of the metakaolin - enriched mortars was also reflected by the increased load - bearing capacity of the tested piers , in particular from 360 kn and 600 kn , reached in the case of pure - lime ( l - scb ) and lime - cement mortar ( lc - s ) , respectively , up to 800 kn when 30 % of the binder was replaced by metakaolin ( lmk - s ) .moreover , the load - bearing capacity was further increased with the use of mortar containing crushed bricks ( lmk - scb ) , reaching up to 915 kn . this strength enhancement can explain the resistance and longevity of numerous ancient masonry structures containing cocciopesto .the extraordinary strength of the lmk - scb mortar together with the good adhesion between the mortar and bricks should also result in an increased seismic resistance , as suggested by costa et al . . knowing the basic material parameters , the damage - plastic material model used for the 3d fe simulations allowed to reproduce the experimental results with a relatively high accuracy , despite the complex composite action taking place in masonry .even the simplest case of uniaxial compression leads to a triaxial compression in mortar , while introducing a uniaxial compression and biaxial tension in usually stiffer masonry units .such scenario usually leads to the formation of vertical splitting cracks leading to a complete failure .the chosen strategy to model the bricks and mortars as two distinct materials allowed to investigate the relationship between the individual material parameters and structural behavior of the masonry pier .our findings that the mortar compressive strength has the biggest impact on the load - bearing capacity is in contradiction with the conclusions of gumaste et al . 
and pava and hanley .they claim that mortar compressive strength has just a minor impact on the behavior of masonry subjected to uniaxial compression .this discrepancy can be probably attributed to a different experimental set - up , in particular to the eccentricity of loading introduced in our study .the eccentric loading was responsible for a significant deformation of the bed joints , leading to a non - linear response at relatively early loading - stage .the assumption that the difference between the young s modulus of bricks and mortar is the precursor of the compression failure was not confirmed , and the load - bearing capacity of the masonry piers was almost independent of mortar stiffness .the chosen strategy to combine the comprehensive experimental analysis together with the numerical modeling revealed new findings to be considered during the design of bed joint mortars . even though the study was focused purely on the lime - based mortars , because these are accepted by the authorities for cultural heritage , our findings can also help with the design of mortars and masonry based on modern materials .the results of the basic material tests demonstrate the superior strength of mortars containing metakaolin , when compared to a pure - lime or lime - cement ones .the mortar strength was further increased by the addition of crushed bricks , which is attributed to the reduction of microcracking due to shrinkage around the relatively compliant ceramic fragments .it can be also conjectured that the hydraulic reaction in mortars containing metakaolin was promoted by the presence of water retained within the crushed brick fragments .the enhanced strength of the metakaolin - rich mortars , and especially those containing crushed bricks , was reflected in the significantly increased load - bearing capacity of masonry piers loaded by the combination of compression and bending .this can explain the extraordinary resistance and durability of ancient masonry structures with cocciopesto mortars .moreover , the utilization of the waste by - products from ceramic plants makes the material sustainable for a relatively low cost , since the fragments partially replace binder , being the most expensive mortar component .based on experimental observations the damage - plastic material model seemed to be the most appropriate to describe the constitutive behavior of mortars and bricks in the fe model .the chosen strategy to model the mortars and bricks as distinct materials allowed the relatively accurate reproduction of the experimentally obtained data in terms of the predicted crack patterns and load - displacement diagrams .results of the numerical simulations and dic analysis clearly demonstrate that the mortar properties have an enormous impact on the load - bearing capacity of masonry , strain localization , and the formation of cracks . 
the numerical analysis , based on the fe model verified through the comprehensive experimental analysis , revealed that mortar compressive strength is the key material parameter with respect to the load - bearing capacity of the piers subjected to the combination of bending and compression .considering the studied geometry and boundary conditions , tensile strength and mortar young s modulus influence the pier behavior and modes of failure , however , do not have any significant impact on the load - bearing capacity .the authors acknowledge financial support provided the czech science foundation , project no .ga13 - 15175s , and by the ministry of culture of the czech republic , project no .df11p01ovv008 .a. arizzi , g. cultrone , aerial lime - based mortars blended with a pozzolanic additive and different admixtures : a mineralogical , textural and physical - mechanical study , construction and building materials 31 ( 2012 ) 135143 . http://dx.doi.org/10.1016/j.cemconres.2012.03.008 [ ] .f. veniale , m. setti , c. rodriguez - navarro , s. lodola , w. palestra , a. busetto , thamasite as decay product of cement mortar in brick masonry of a church near venice , cement and concrete composites 25 ( 2003 ) 11231129 . http://dx.doi.org/10.1016/s0958-9465(03)00159-8 [ ] .k. callebaut , j. elsen , k. van balen , w. viaene , nineteenth century hydraulic restauration mortars in the saint michael s church ( leuven , belgium ) .natural hydraulic lime or cement ? , cement and concrete research 31 ( 2001 ) 397403 . http://dx.doi.org/10.1016/s0008-8846(00)00499-3 [ ] .m. seabra , j. labrincha , v. ferreira , rheological behaviour of hydraulic lime - based mortars , journal of the european ceramic society 27 ( 2007 ) 17351741 .http://dx.doi.org/10.1016/j.jeurceramsoc.2006.04.155 [ ] .a. sepulcre - aguilar , f. hernndez - olivares , assessment of phase formation in lime - based mortars with added metakaolin , portland cement and sepiolite , for grouting of historic masonry , cement and concrete research 40 ( 2010 ) 6676 .[ ] . a. moropoulou , a. bakolas , e. aggelakopoulou , evaluation of pozzolanic activity of natural and artificial pozzolans by thermal analysis , thermochimica acta 420 ( 2004 ) 135140 .[ ] . i. papayianni , m. stefanidou , durability aspects of ancient mortars of the archeological site of olynthos , journal of cultural heritage 8 ( 2007 ) 193196 . http://dx.doi.org/10.1016/j.culher.2007.03.001 [ ] .p. degryse , j. elsen , m. waelkens , study of ancient mortars from sagalassos ( turkey ) in view of their conservation , cement and concrete research 21 ( 2002 ) 14571463 .http://dx.doi.org/10.1016/s0008-8846(02)00807-4 [ ] .g. baronio , l. binda , n. lombardini , the role of brick pebbles and dust in conglomerates based on hydrated lime and crushed bricks , construction and building materials 11 ( 1997 ) 3340 . http://dx.doi.org/10.1016/s0950-0618(96)00031-1 [ ] .v. neerka , z. slkov , p. tesrek , t. plach , d. frankeov , v. petrov , comprehensive study on microstructure and mechanical properties of lime - pozzolan pastes , cement and concrete research 64 ( 2014 ) 1729 . http://dx.doi.org/10.1016/j.cemconres.2014.06.006 [ ] .v. neerka , j. nmeek , z. slkov , p. tesrek , investigation of crushed brick - matrix interface in lime - based ancient mortar by microscopy and nanoindentation , cement & concrete composites 55 ( 2015 ) 122128 .[ ] . e. vejmelkov , m. keppert , z. kerner , p. rovnankov , r. 
ern , mechanical , fracture - mechanical , hydric , thermal , and durability properties of lime - metakaolin plasters for renovation of historical buildings , construction and building materials 31 ( 2012 ) 2228 . http://dx.doi.org/10.1016/j.conbuildmat.2011.12.084 [ ] .j. lanas , j. prez bernal , m. bello , j. alvarez galindo , mechanical properties of natural hydraulic lime - based mortars , cement and concrete research 34 ( 2004 ) 21912201 .http://dx.doi.org/10.1016/j.cemconres.2004.02.005 [ ] .p. de silva , f. glasser , phase relations in the system relevant to metakaolin - calcium hydroxide hydration , cement and concrete research 23 ( 1993 ) 627639 .m. rojas , j. cabrera , the effect of temperature on the hydration rate and stability of the hydration phases of metakaolin - lime - water systems , cement and concrete research 32 ( 2002 ) 133138 .t. rougelot , f. skoczylas , n. burlion , water desorption and shrinkage in mortars and cement pastes : experimental study and poromechanical model , cement and concrete research 39 ( 2009 ) 3644 . http://dx.doi.org/10.1016/j.cemconres.2008.10.005 [ ] .m. stefanidou , i. papayianni , the role of aggregates on the structure and properties of lime mortars , cement & concrete composites 27 ( 2005 ) 914919 .j. lanas , j. alvarez , masonry repair lime - based mortars : factors affecting the mechanical behaviour , cement and concrete research 33 ( 2003 ) 18671876 .http://dx.doi.org/http://dx.doi.org/10.1016/s0008-8846(03)00210-2 [ ] .m. mosquera , b. silva , b. prieto , e. ruiz - herrera , addition of cement to lime - based mortars : effect on pore structure and vapor transport , cement and concrete research 36 ( 2006 ) 16351642 .http://dx.doi.org/10.1016/j.cemconres.2004.10.041 [ ] .a. moropoulou , a. cakmak , g. biscontin , a. bakolas , e. zendri , advanced byzantine cement based composites resisting earthquake stresses : the crushed brick / lime mortars of justinian s hagia sophia , construction and building materials 16 ( 2002 ) 543552 .http://dx.doi.org/10.1016/s0950-0618(02)00005-3 [ ] .h. bke , s. akkurt , b. pekolu , e. uurlu , characteristics of brick used as aggregate in historic brick - lime mortars and plasters , cement and concrete research 36 ( 2006 ) 11151122 .http://dx.doi.org/10.1016/j.cemconres.2006.03.011 [ ] .v. neerka , p. tesrek , j. zeman , fracture - micromechanics based model of mortars susceptible to shrinkage , key engineering materials 592593 ( 2014 ) 189192 . http://dx.doi.org/10.4028/www.scientific.net/kem.592-593.189 [ ] .p. tesrek , v. neerka , p. padevt , j. anto , t. plach , influence of aggregate stiffness on fracture - mechanical properties of lime - based mortars , applied mechanics and materials 486 ( 2014 ) 289294 .http://dx.doi.org/10.4028/www.scientific.net/amm.486.289 [ ] .a. moropoulou , a. bakolas , k. bisbikou , characterization of ancient , byzantine and later historic mortars by thermal and x - ray diffraction techniques , thermochemica acta 269/270 ( 1995 ) 779995 .http://dx.doi.org/10.1016/0040-6031(95)02571-5 [ ] .a. malaikah , k. al - saif , r. al - zaid , prediction of the dynamic modulus of elasticity of concrete under different loading conditions , international conference on concrete engineering and technology universiti malaya ( 2004 ) 3239 .m. radovic , e. curzio - lara , r. l. , comparison of different techniques for determination of elastic properties of solids , materials science and engineering a368 ( 2004 ) 5670 .http://dx.doi.org/10.1016/j.msea.2003.09.080 [ ] .p. lava , s. cooreman , s. 
coppieters , m. de strycker , d. debruyne , assessment of measuring errors in dic using deformation fields generated by plastic fea , optics and lasers in engineering 47 ( 2009 ) 747753 . http://dx.doi.org/10.1016/j.optlaseng.2009.03.007 [ ] .m. bornert , f. brmand , p. doumalin , j. dupr , m. fazinni , m. grdiac , f. hild , s. mistou , j. molimard , j. orteu , l. robert , y. surrel , p. vacher , b. wattrisse , assessment of digital image correlation errors : methodology and results , experimental mechanics 49 ( 2009 ) 353370 . http://dx.doi.org/10.1007/s11340-012-9704-3 [ ] .b. pan , l. yu , d. wu , l. tang , systematic errors in two - dimensional digital image correlation due to lens distortion , optics and lasers in engineering 51 ( 2013 ) 140147 .http://dx.doi.org/10.1016/j.optlaseng.2012.08.012 [ ] .a. wawrzynek , a. cincio , plastic - damage macro - model for non - linear masonry structures subjected to cyclic or dynamic loads , in : proc .conf . analytical models and new concepts in concrete and masonry structures , amcm , gliwice , poland .j. vorel , v. milauer , z. bittnar , multiscale simulations of concrete mechanical tests , journal of computational and applied mathematics 236 ( 2012 ) 48824892 . http://dx.doi.org/10.1016/j.cam.2012.01.009 [ ] .v. corinaldesi , mechanical behavior of masonry assemblages manufactured with recycled - aggregate mortars , cement and concrete composites 31 ( 2009 ) 505510 .[ ] . c. sandoval , p. roca , study of the influence of different parameters on the buckling behaviour of masonry walls , construction and building materials 35 ( 2012 ) 888899 .http://dx.doi.org/10.1016/j.conbuildmat.2012.04.053 [ ] .m. valluzzi , l. binda , c. modena , mechanical behaviour of historic masonry structures strengthened by bed joints structural repointing , construction and building materials 19 ( 2005 ) 6373 . http://dx.doi.org/10.1016/j.conbuildmat.2004.04.036 [ ] .a. costa , a. arde , a. costa , j. guedes , b. silva , experimental testing , numerical modelling and seismic strengthening of traditional stone masonry : comprehensive study of a real azorian pier , bulletin of earthquake engineering 10 ( 2012 ) 135159 .[ ] . h. kaushik , d. rai , s. jain , stress - strain characteristics of clay brick masonry under uniaxial compression , journal of materials in civil engineering 19 ( 2007 ) 728739. http://dx.doi.org/10.1061/(asce)0899-1561(2007)19:9(728 ) [ ] .k. gumaste , k. nanjunda rao , b. venkatarama reddy , k. jagadish , strength and elasticity of brick masonry prisms and wallettes under compression , materials and structures 40 ( 2007 ) 241253 .http://dx.doi.org/10.1617/s11527-006-9141-9 [ ] .a. zucchini , p. loureno , mechanics of masonry in compression : results from a homogenisation approach , computers and structures 85 ( 2007 ) 193204 . http://dx.doi.org/10.1016/j.compstruc.2006.08.054 [ ] .
architectural conservation and repair are becoming increasingly important issues in many countries due to numerous prior improper interventions , including the use of inappropriate repair materials over time . as a result , the composition of repair masonry mortars is now being more frequently addressed in mortar research . just recently , for example , it has become apparent that portland cement mortars , extensively exploited as repair mortars over the past few decades , are not suitable for repair because of their chemical , physical , mechanical , and aesthetic incompatibilities with original materials . this paper focuses on the performance of various lime - based alternative materials intended for application in repairing historic structures when subjected to mechanical loading . results of basic material tests indicate that the use of metakaolin as a pozzolanic additive produces mortars with superior strength and sufficiently low shrinkage . moreover , mortar strength can be further enhanced by the addition of crushed brick fragments , which explains the longevity of roman concretes rich in pozzolans and aggregates from crushed clay products such as tiles , pottery , or bricks . an integrated experimental - numerical approach was used to identify key mortar parameters influencing the load - bearing capacity of masonry piers subjected to a combination of compression and bending . the simulations indicate increased load - bearing capacities for masonry piers containing metakaolin - rich mortars with crushed brick fragments , as a result of their superior compressive strength . masonry , mortar , mechanical properties , load - bearing capacity , fem , dic
technological improvements in the field of digital cameras are strongly simplifying the study of collective behavior in animal groups .the use of single or multi - camera systems to record the time evolution of a group is by far the most common tool to understand collective motion .the emergence of collective behavior in human crowds , fish schools , bird flocks and insect swarms have been investigated .events of interest are recorded from one or more cameras and images are then processed to reconstruct the trajectory of each individual in the group .positions of single individuals at each instant of time are used to characterize the system .density , mean velocity , mean acceleration , size of the group , as well as single velocities and accelerations are computed to understand how collective behavior arises and following which rules .the reliability of these results is strictly connected to the accuracy of the reconstructed trajectories . in a previous work cavagnaet al . in suggested how to reduce the error on the retrieved trajectories choosing the proper set up for the system .more recently in , a tool to check the accuracy of a multicamera system on the reconstructed trajectories is provided , together with a software for the calibration of the intrinsic and extrinsic parameters . in ,towne et al . give a way to measure the reconstruction error and to quantify it when a dlt technique is used to calibrate the extrinsic parameters of the system .but a theoretical discussion on the propagation of the errors from experimental measures to the reconstructed positions and trajectories is still missing . from a collective behavior perspective , distances between targets are much more interesting than absolute positions .indeed , quantities like density , velocity , acceleration are not referred to the position of single animals , but they involve a measure of a distance . for this reason , in our discussion we will show how to estimate the error on the reconstructed position of a single target , but we will give more emphasis to the error on mutual distances between two targets . we will focus the analysis on how experimental measurements and calibration uncertainty affect mutual distances between targets and we will give some suggestions on how to choose a suitable set up in order to achieve the desired and acceptable error . in the first section of the paper we give a description of the pinhole model , which is by far the simplest but effective approximation of a digital camera .we introduce the nomenclature used in the entire paper .moreover we describe the mathematical relations holding between the position of a target in the three dimensional real world and the position of its image on the camera sensor . in the same section, we describe the general principles of the reconstruction making use of systems of two pinhole cameras . in the second sectionwe will show the error formulation for both absolute position of a single target and mutual distance between pair of targets considering at first one camera only , and then generalizing the results to the case of a camera system . in the third sectionwe give an interpretation of the error formulation to suggest how the reconstruction error can be reduced by properly choosing the suitable intrinsic and extrinsic parameters of the system . 
in the fourth and last section we consider the set up we use in the field to record starling flocks and midge swarms .we give a description on the tests we perform to check the accuracy of the retrieved trajectories .moreover we practically show how the results of these tests are affected by experimental measurements inaccuracy .the reconstruction of the position of a target imaged by one or more camera can be intuitively address with a geometric perspective . in some special situations , the images of only one cameragives enough information to determine where the target is located in the real world .however in the general case at least two cameras are needed . in this sectionwe address the geometric formulation of the reconstruction problem for one camera and for a system of two cameras , knowing intrinsic and extrinsic parameters of the system .intrinsic parameters fix the geometry of the camera and its lens : the position of the image center , the focal length and the distortion coefficients .we calibrate the intrinsic parameters taking into account the radial distortion up to the first order coefficient only , while we do not consider tangential distortion .tests presented in section [ section::test ] show that this is sufficient to obtain the desired accuracy in the reconstruction .extrinsic parameters , instead , describe the position and orientation of the camera system with respect to a world reference frame .they generally include a set of angles , fixing the orientation of the cameras and a set of length measures expressed in meters defining the cameras positions .depending on the experiment they can be practically measured with high precision instruments or calibrated through software tools . in the paper , we consider cameras in the pinhole approximation , which is the easiest but effective camera model .note that the pinhole approximation does not take into account lens distortion . in the followingwhen talking about the position of a target on the image plane , we will always refer to its coordinates already undistorted .coordinate of a target and the coordinate of its image by a factor .similarity between the two dashed and filled green triangles shows instead the proportionality between the coordinate of the point and the coordinate of its image . *b : * the two targets and lying on the same -plane are imaged in and . similarity between the two dashed and filled orange triangles shows that the distance is proportional to the distance through the coefficient . ]a camera maps the three dimensional real world into the two dimensional space of its image . in the pinhole approximationthe correspondence between the space and the image plane is defined as a central projection with center in the focal point , see fig.[fig : pinhole_model]a .the image plane is located at a distance from the focal point , with being the focal length of the lens coupled with the camera . 
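before the pinhole mapping is developed further, a minimal sketch of the first-order radial undistortion mentioned above is given below; it assumes the common parametrisation x_u = x_d ( 1 + k1 r^2 ) about the image centre, which may differ in sign and normalisation from the convention of the calibration software actually used, and all numerical values and names are illustrative only.

```python
def undistort_first_order(u_px, v_px, c_u, c_v, k1):
    """Map a distorted pixel coordinate (u_px, v_px) to its undistorted
    position, assuming a first-order radial model about the image centre
    (c_u, c_v):  x_u = x_d * (1 + k1 * r^2).  This is one common
    convention; the exact parametrisation depends on the calibration
    tool used."""
    xd, yd = u_px - c_u, v_px - c_v
    r2 = xd * xd + yd * yd
    xu, yu = xd * (1.0 + k1 * r2), yd * (1.0 + k1 * r2)
    return xu + c_u, yu + c_v

# usage: a point near the corner of a 4 Mpx sensor, illustrative k1
print(undistort_first_order(2500.0, 1600.0, 2048.0, 1024.0, k1=-6.0e-9))
```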
under the pinhole camera model , a point in the three dimensional worldis mapped to the point on the image plane belonging to the line connecting and .note that in this approximation the image plane is located in front of the focal point .thus the resulting image on the sensor does not reproduce the scene rotated of as it would be in a real camera .we choose this representation for its simplicity and because it does not affect in any way the discussion of the present paper .the natural reference frame for a camera in the pinhole approximation is the one having the origin in the focal point , -axis coincident with the optical axis of the camera and the -plane parallel to the sensor , as shown in fig.[fig : pinhole_model]a .coordinates in this reference frame are expressed in meters , while coordinates of a point belonging to the sensor plane are usually expressed in pixels .the size of a single pixel , , defines the correspondence between the two units of measurements .thus the point belonging to the image plane and such that in the pinhole reference frame corresponds to in the sensor reference frame , with and .for the sake of simplicity , we will assume to deal with sensor made of squared pixels , i.e. so that the conversion factor from meters to pixel is the same in both the direction and .from now on , we denote the coordinates expressed in meters by , and , and the coordinate expressed in pixels by , while represents the focal length expressed in pixels . as shown in fig.[fig : pinhole_model]b , target in the real world is projected into a point on the image plane . using similarity between trianglesit can be shown that and .the correspondence between the real space and the image plane is then defined as : or equivalently : thus , knowing the position of a target we can determine the coordinates of its image .but in the general case we would like to do the opposite : knowing the position of a target on the sensor plane , we would like to retrieve the corresponding coordinates in the pinhole reference frame . if no other informations are available , eqs.([eq::pinhole ] ) do not have a unique solution in the unknown so that reconstruction is not feasible making use of one camera only , and at least two cameras are needed . in the special case of targets lying on a plane , extra information about the mutual position between the camera and the plane where the motion occurs , i.e. extrinsic parameters , can be used to define an homography .eqs.([eq::pinhole ] ) can then be inverted and the positions of the targets can be retrieved making use of one camera only .for the sake of simplicity , in this section we will not address this general case , but only the easier particular situation when the motion happens on a plane parallel to the camera sensor .the interested reader can retrieve the exact formulation for the general planar motion putting together the information written in this section and in the following one , where the general case is discussed . in the special case of targets lying on a plane parallel to the sensor , i.e. fixed , eqs.([eq::pinhole ] )can be inverted and the position of a target projected in can be computed as : consider now two targets and on the same -plane , and their images and as in fig.[fig : pinhole_model]b . thanks again to similarity between triangles , the distance between the two targets and the distance between their projections satisfy the following equation : so that fixes the ratio between distances in the and space . 
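for the planar case just described, the inversion of the projection and the scaling of mutual distances can be written compactly; the sketch below assumes undistorted pixel coordinates measured from the image centre and denotes by omega the focal length expressed in pixels, with purely illustrative numbers.

```python
import math

def planar_position(u, v, z, omega):
    """Invert the pinhole projection for a target on a plane parallel to
    the sensor at a known distance z (metres); u, v are undistorted pixel
    coordinates from the image centre, omega the focal length in pixels.
    Returns (X, Y, z) in the camera reference frame."""
    return (z * u / omega, z * v / omega, z)

def planar_distance(p1_px, p2_px, z, omega):
    """Mutual distance between two coplanar targets: the image distance
    (in pixels) scaled by z / omega."""
    du, dv = p1_px[0] - p2_px[0], p1_px[1] - p2_px[1]
    return (z / omega) * math.hypot(du, dv)

# usage: 50 mm lens on 5.5 um pixels -> omega ~ 9091 px, plane at z = 1.2 m
omega = 0.050 / 5.5e-6
print(planar_position(400.0, -250.0, 1.2, omega))
print(planar_distance((400.0, -250.0), (150.0, 100.0), 1.2, omega))
```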
with the same argumentit can be shown that : where , , , . in the field of collective behavior when we deal with cells or bacteria moving on a glass slide or fish in swallow water, the motion happens on a preferential plane .setting up the camera in such a way that the sensor is parallel to this special plane , the displacement in the direction is negligible ; can be considered constant and the equations above hold .thus , measuring with an high precision instrument the distance , , between the sensor and the plane where the experiment takes place , it is possible to reconstruct the position of each target in the pinhole reference frame . in the followingwe will refer to these particular cases as experiments .the ambiguity of the position of a target from its image on one camera only , can be easily solved making use of two or more cameras . for the sake of simplicity , from now onwe take into account only system of two synchronized cameras , having the same focal length and sensors of the same size .consider , as in fig[fig : pinhole_system]a , a three dimensional target in the real world and its projections and on the sensor plane of two cameras , denoted by left and right . belongs to the line passing by and as well as to the line passing by and . is the crossing point between the two lines and as shown in fig.[fig : pinhole_system]a . in each of the two cameras , eq.([eq::pinhole ] )holds , and the two lines and can be defined as : * in the reference frame of the left camera a parametric equation , with parameter for the line is : ; * in the reference frame of the right camera a parametric equation , with parameter for the line is : . in order to find the crossing point , , between and we need to express both the lines in the same reference frame : we choose the reference frame of the left camera to be the world reference frame . in the world reference frame where , is the rotation matrix which brings the world reference frame parallel to the reference frame of the right camera , and the superscript denotes that the correspondent vector is transposed .the crossing point between the two lines is then obtained making use of the solution the system defined by in the unknown and : the solution identifies the position .( [ eq::lllr ] ) are well defined when and , i.e. the extrinsic parameters of the system , are known . represents the vector distance between the two cameras .its modulus is the distance expressed in meters between the two focal points and .instead , the orientation of can be expressed in spherical coordinates through two angles , and : , see fig.[fig : pinhole_system]a .denote by the projection of on the -plane . is defined as the angle between the -axis and , while is the angle between and .the rotation matrix can be parametrized by the three angles of yaw , pitch and roll denoted respectively by , and : .the mutual position and orientation of the cameras is then defined through the distance and the angles , , , and .these parameters are directly measured or calibrated when performing the experiment .we will show in the next section that inaccuracy on these quantities can strongly affect the retrieved position of the target .consider , as in fig.[fig : pinhole_system]b , two targets and and their images and in the left camera and and in the right camera .the expression for becomes more complicated passing from to experiments , since eq.([eq::2d_deltax ] ) does not hold anymore . depends on both and , as well as depends on and . 
from eq .( [ eq::pinhole ] ) , and .this implies that where , and . with the same argumentit can be proved that , with . as a consequence : for short , eq.([eq::3d_deltar ] ) becomes giving back eq.([eq::2d_deltar ] ) for the experiments .the introduction of a non constant third coordinate , makes the expression of the reconstructed position not transparent . for this reason we will not discuss the general case , for further information see , but in the following we will retrieve the exact solution of eq.([eq::lllr ] ) for the two special cases described in fig.[fig : common_fov ] andhighlighted respectively in black and red .in this special case , the two cameras have the same orientation and the two focal points both lie on the axis with a mutual distance equal to , see fig.[fig : common_fov ] where this set up is highlighted in black together with its field of view . andthe rotation matrix is equal to the identity matrix .eqs.([eq::lllr ] ) become : in order to retrieve the position of the target imaged in and the above system has to be solved in the unknown and .we find the solution . represents the ratio between the metric distance between the two focal points and the disparity expressed in pixels . from equations ( [ eq::lllr ] ) we obtain : , while in red a symmetric set up with a mutual rotation about the axis only .the two different system field of views are highlighted respectively in black and red .note that increasing the focal length of the two cameras makes the common field of view narrower .increasing moves the working distance further in the direction and also reduces the portion of space imaged by both cameras .moreover the parallel set up has an optimal field of view at large while the rotated set up is optimal for short , indicating that affects the optimal working distance . ]this is the special case obtained applying a translation along the axis and then rotating the left camera of an angle in the clockwise direction and the right camera of an angle in the counterclockwise direction about the axis , as shown in fig.[fig : common_fov ] where this set up is highlighted in red .the mutual angle of rotation about the axis between the cameras is equal to , so that the rotation matrix is : eqs .( [ eq::lllr ] ) are then : \\ av_l & = & bv_r\\ a\omega & = & d\sin\epsilon+b[u_r\sin\alpha + \omega\cos\alpha ] \end{array}\ ] ] the solution of the above system is not trivial and different approximations can be made to simplify the problem . in our casewe can assume that the angle of rotation is small and since the set up is symmetric . for small angles and , and the previous equations become : \\ av_l & = & bv_r\\ a\omega & = & d\alpha/2+b[\alpha u_r+\omega ] \end{array}\ ] ] solving the above system we obtain : and with the additional assumption that , , .so that , eqs.([eq::lllr ] ) become : note that for the solution is exactly what we obtained in the case of pure translation .the approximation and holds for angles approaching . for angles smaller than ,the error in the approximation is of the third order for the sine and of the second order for the cosine , so that if , and .when is not small , eq.([eq::lllr ] ) can not be simplified and the solution is not trivial anymore . for the sake of simplicity , we do not give here the formulation of the solution for the general case .in the two previous sections we described systems of one or more cameras in the pinhole approximation . 
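as a concrete illustration of the reconstruction developed in those sections, the sketch below implements the parallel set up, in which the depth follows from the ratio between the baseline and the disparity and the remaining coordinates from the left-camera pinhole relations; the rotated set up adds an angle-dependent contribution to the disparity that is not modelled here, and the numerical values are illustrative assumptions.

```python
def triangulate_parallel(ul, vl, ur, d, omega):
    """Reconstruct a target seen at (ul, vl) in the left camera and at
    horizontal coordinate ur in the right camera, for two identical,
    parallel cameras separated by a baseline d (metres) along the x axis.
    Coordinates are undistorted pixels from the image centres, omega the
    focal length in pixels; the disparity ul - ur must be non-zero.
    z follows from the disparity, x and y from the left-camera pinhole
    relations, all expressed in the left-camera (world) frame."""
    disparity = ul - ur
    z = d * omega / disparity
    x = z * ul / omega
    y = z * vl / omega
    return x, y, z

# usage: 3 m baseline, omega ~ 9091 px, 60 px disparity -> z ~ 455 m
print(triangulate_parallel(120.0, -40.0, 60.0, 3.0, 0.050 / 5.5e-6))
```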
we showed how to retrieve the three dimensional position of a target and the mutual distance between two targets knowing only the parameters of the system . through the error analysiswe want to quantify how errors in the experimental measures and calibration of the intrinsic and extrinsic parameters affect the reconstruction process .moreover we want to investigate the possibility to reduce the error choosing the proper experimental set up .we will focus our analysis on the reconstruction of the three dimensional position of a target , but we will give more emphasis to the propagation of the error in the retrieved mutual distance between two targets .we will first address the error theory in the case of experiments showing how to quantify the error making use of geometry only , then we will approach the error theory from a more formal and mathematical point of view . finally we will consider the more general case of experiments only in the formal way , since the geometric interpretation is not very intuitive .this is the special case where objects move on a plane parallel to the sensor at a distance from the focal point .the position of target projected in is : and .instead the distance between two targets and is computed making use of eq.([eq::2d_deltar ] ) ; so that where is the distance in pixels between the projections of the two targets and .the quantities involved are then and we want to investigate how each of these parameters affect , and .consider the case when is directly measured with an error . given a target , it is reconstructed in on the -plane on the line passing by , and , as shown in fig.[fig : error_camera_system]a .making use of similarity between triangles the following relation between the error on the -coordinate of , and can be shown : is constant for all the targets in the field of view .the equation above implies that the position of each target projected on the sensor is affected by the same relative error . in other words ,the error depends on the position of the target and the larger , the larger is the error , while the ratio is constant and equal to . in experiments , the error in is the accuracy of the instrument used to measure it .if an instrument with an accuracy of is used on , the relative error on , and as a consequence of , is equal to .note that the relative error is dimensionless .if instead the same instrument is used to take the measure of the relative error becomes negligible being equal to , and producing a negligible relative error on .while designing the set up an acceptable threshold for the relative error on has to be defined , then the working distance and the measure instrument can be chosen accordingly .fig.[fig : error_camera_system]b represents a system where the focal length is calibrated with an error .this makes the sensor of the camera to be at a distance from the focal point , instead of at a distance . is projected in with the same coordinate of , but on the -plane , while the retrieved lies on the same plane of but on the line passing by and and not on the correct one .so that its position on the -plane is .note that , meaning that the error is negative , as shown in fig.[fig : error_camera_system]b . making use of similarity between triangles , it can be shown that : and .putting together these two equations we obtain : the negative sign in this equation indicates that a positive error on , produces a negative relative error , i.e. 
if the incorrect focal length is bigger than the correct one than the retrieved is smaller than the correct , as shown in fig.[fig : error_camera_system]b . as for the error on , the error on fixes the relative error on .an error with produces a relative error on equal to .the error on is generally completely due to the calibration procedure used for , so that it can be easily reduce using a precise calibration software .fig.[fig : error_camera_system]c represents a system where an error on the determination of the position of the projection of the target occurs .the point is then considered to be projected in instead of .the retrieved position lies on the same -plane of , but on the line passing by and .similarity between triangles shows that : the error affects in a different way than the other two parameters . unlike the error on and , it does not produce a constant relative error .moreover it does not depend on the coordinate of the target . in a set up with and , a target in the image segmented with an error produces an error .if the position of a target is , an error of corresponds to a relative error of . if the error occurs on a target at , its retrieved position is corresponding to a relative error of .note that , since the camera pinhole model does not include distortion effect , the error includes the segmentation error due to noise on the picture and the error in the position of the point of interest when the distortion coefficient are not properly calibrated .the error can be kept under control choosing the proper parameters of the set up , in particular the ratio .if the maximum acceptable error on is defined as and , has to be chosen in order to verify : . in the general casethe error on is the sum of the three contributes due to , and , so that : for large , targets at the edge of the field of view , the dominant term of the error is the relative part due to and , while for small , targets in the center of the field of view , the dominant part is the one due to the error on .note that the entire discussion of this section could have been addressed in a more formal way simply computing the derivative of respect to the parameters , and : the difference between this last equation and eq.([eq::2d_percentage_deltax ] ) is only in the term depending on but in the general case we can assume that so that the two terms can be considered equal .the same arguments used to retrieve the error on can be used to write the formulation of the error on the coordinate only referring the schemes in fig.[fig : error_camera_system ] to the -plane : the error on the distance can be obtained deriving eq.([eq::2d_deltar ] ) with respect to , and : on the -plane the previous equation is : the error on and produce the same effect on the distance then on the absolute position of a target . they both induce a constant relative error on the distances between targets .the error on large is higher than on small .the third term , instead does not depend on . as for ,the first two terms of the error on can be reduced choosing a proper instrument to measure and a precise calibration software to calibrate , while the third term can be kept under a certain threshold choosing a set up with the proper ratio .referring the same arguments to the -plane : putting together the equation for and we find eq.([eq::2d_deltadeltar ] ) .the discussion made on can be referred to . 
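The contributions listed above can be collected into a small error-budget helper. The sketch below uses our own naming conventions (z for the working distance, omega for the focal length in pixels, du for the segmentation error on a single target, d_delta_u for the error on the pixel separation of a pair); signs and functional forms follow the first-order relations quoted in the text.

```python
import math

def position_error_2d(x, z, omega, dz=0.0, domega=0.0, du=0.0):
    """Error budget on the reconstructed x coordinate for the planar ('2D') case:
    a constant relative term from the working distance, a constant relative term
    of opposite sign from the focal length, and an absolute term from segmentation."""
    err_from_z = x * dz / z
    err_from_f = -x * domega / omega
    err_from_u = z * du / omega
    return err_from_z + err_from_f + err_from_u

def distance_error_2d(dx, dy, z, omega, dz=0.0, domega=0.0,
                      d_delta_u=0.0, d_delta_v=0.0):
    """First-order error on the mutual distance between two targets on the plane:
    the working-distance and focal-length errors act as a constant relative error,
    the segmentation error as an absolute one independent of the distance itself."""
    rel = dz / z - domega / omega
    err_dx = dx * rel + (z / omega) * d_delta_u
    err_dy = dy * rel + (z / omega) * d_delta_v
    r = math.hypot(dx, dy)
    return (dx * err_dx + dy * err_dy) / r

# 1 cm error on a 5 m working distance and a 0.5 px segmentation error
print(position_error_2d(x=1.0, z=5.0, omega=4000.0, dz=0.01, du=0.5))
print(distance_error_2d(dx=0.10, dy=0.05, z=5.0, omega=4000.0, dz=0.01, d_delta_u=0.5))
```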
for large the dominant term of the erroris the constant relative error , while for short the dominant term is which can be kept small choosing a set up with the proper ratio , as shown in the next section .the error analysis is not trivial when dealing with real experiments , i.e. targets are free to move in the entire space without any preferential plane .the graphical interpretation of the errors is not as intuitive as in the experiments . for this reason we find a formulation of the error on the position of a target and on distances between pairs of targets making use of derivatives .moreover we analyze in detail only the special case introduced in the previous section : a set up with the two cameras translated on the axis and symmetrically rotated of an angle about the axis , as shown in red in fig.[fig : common_fov ] .the expression of the error in the case of a set up with parallel cameras can then be obtained imposing . under the additional hypotheses that is a small angle , and , eq.([eq::3d_xyz ] ) holds and the position of a target projected in and in the left and in the right camera is defined by : . and strictly depend on , as well as and are affected by .for this reason in the analysis of the error on the absolute position of the targets we will focus first on the error on and then we will write the expression for and too .computing the derivative of defined in eq.([eq::3d_xyz ] ) , we find : note that negative signs in the previous equations indicate that a positive error and produce a negative error on .the relative error on is then : where : is the relative error on the measured baseline , is the relative error on the calibrated focal length , is the error on the measure of the angle and is the error on the disparity . represents the difference between the error in the determination of and . as for the case ,an error on can be due to noise in the image but also to an error in the calibration of the distortion coefficients . the relative error on is then made by one constant term , , and by three terms which grow linearly in .the constant term due to the error on the measure of the baseline can be reduced choosing the proper instrument , as already discussed in the previous section about the error on .the other three terms , instead , can be reduced choosing the system parameters , , and in the proper way .a typical working distance is generally chosen and typical and are estimated .the three linear terms of the equation above can then be kept smaller than a certain threshold , , imposing the following inequalities : and .these two relations fix a lower bound for and for . concerning , substituting eq.([eq::3d_percentage_deltaz ] ) in eq.([eq::2d_percentage_deltax ] )we find that : +\delta u\displaystyle\frac{z}{\omega}\ ] ] and substituting eq.([eq::3d_percentage_deltaz ] ) in eq.([eq::2d_percentage_deltay ] ) : +\delta v\displaystyle\frac{z}{\omega}\ ] ] note that an error on does not affect the three components , and in the same way . in the expression for ,the coefficient of the term due to is equal to , while for and it is equal to .a positive error on produces a negative error on and , while the error on can be positive or negative , depending on the position of the target .when the angle is not small the approximation does not hold anymore and the original system of equations has to be used . 
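The analytic derivatives discussed above can also be cross-checked numerically. The sketch below perturbs a small-angle depth formula (the same illustrative form used earlier, not the paper's exact expression) to estimate how errors on the baseline, the focal length, the mutual angle, and the disparity propagate to the relative error on the depth.

```python
def depth_small_rotation(d, omega, alpha, disparity):
    """Depth for the symmetric set-up under the small-angle approximation
    (illustrative form, not the paper's exact expression)."""
    return omega * d / (disparity + alpha * omega)

def relative_depth_error(d, omega, alpha, disparity,
                         dd=0.0, domega=0.0, dalpha=0.0, ddisp=0.0, h=1e-6):
    """Finite-difference stand-in for the analytic derivatives: propagate errors
    on baseline, focal length, mutual angle and disparity to delta_z / z."""
    z0 = depth_small_rotation(d, omega, alpha, disparity)
    def partial(i):
        args = [d, omega, alpha, disparity]
        args[i] += h
        return (depth_small_rotation(*args) - z0) / h
    dz = (partial(0) * dd + partial(1) * domega +
          partial(2) * dalpha + partial(3) * ddisp)
    return dz / z0

# e.g. 1 mm error on a 0.25 m baseline and 0.2 mrad error on the mutual angle
print(relative_depth_error(0.25, 4000.0, 0.05, 80.0, dd=1e-3, dalpha=2e-4))
```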
in the general casethe complete expression of has to be derived with respect to all the variables and the expression of the error gets much more complicated , including extra terms .the sources of errors are always the same , i.e. measure or calibration errors for intrinsic and extrinsic parameters and segmentation inaccuracy , but their contributions are different .for the sake of simplicity we do not discuss here the general case .the reader interested in error formulation for the general problem has only to compute partial derivatives of the complete solution with respect to each parameter included in the expression for .this is by far the more interesting issue .all the analysis we will do on trajectories is not based on the absolute position of the targets , but on their mutual position . in order to guarantee the accuracy we want on our analysis, we need to have accurate measure of the distances between targets .consider two targets in the three dimensional space , and , and their distance , as shown in fig.[fig : pinhole_system ] .in the special case when the error on the mutual distance between the two targets is essentially the case of the experiment and the error on the mutual distances is as shown in the previous section .instead when the error analysis is much more complicated . in the following we will retrieve a formulation of for the special set up with a translation along the axis and a symmetric rotation about the axis of an angle . from eq.([eq::3d_xyz ] )\ ] ] where , represent the disparity of the projection of and . deriving the above equationwe find : +\\ - & 2\displaystyle\frac{\bar{z}^2}{\omega d}\delta\delta s\end{aligned}\ ] ] where , and is the difference between the error on the disparity of the two targets . for large first term of the previous equation is the dominant part of the error and : passing from to experiments the relative error on the mutual distances between two targets is not constant anymore .the only constant term is , while all the others depend linearly on the position of the two targets and .the ratio controls how much is affected by , and . so that the error can be kept low choosing smaller than a desired value . on the other side , for small dominant part of the error is : this term is not relative , but absolute .each pair of targets segmented with an error is affected by the same error on , independently on the size of .thus , this error has a bigger effect on short distances then on large ones .moreover the dependence on makes the error growing very fast when the targets get farther from the cameras .when designing the experiment it is very important to estimate this error , and to choose the set up in order to keep it small , because it will affect all the small distances . in the general case the two expressions in eq.([eq::large_deltaz ] ) andeq.([eq::short_deltaz ] ) contribute together at the error on .note that the segmentation error appears in two different terms , one linearly depending on , , which is due to the error in the segmentation of the single pair of targets mostly affecting large distances .the other one grows with and depends on the difference between the errors on the segmentation of the two pairs of targets and mostly affecting short distances . 
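The two regimes just described, a relative error dominating at large separations and an absolute error growing with the squared distance from the cameras at short ones, can be summarised in a small budget function. The factor of 2 in the absolute term follows the expression quoted above; the lumped relative coefficient is a user-supplied estimate, not a quantity defined in the paper.

```python
def distance_error_regimes(r, z_bar, omega, d, rel_err=0.0, d_delta_s=0.0):
    """Rough error budget for the mutual distance between two targets in the
    stereo ('3D') case.
    r         : reconstructed mutual distance
    z_bar     : mean distance of the pair from the cameras
    rel_err   : lumped relative error (baseline, focal, angle, disparity terms)
    d_delta_s : difference between the disparity/segmentation errors of the
                two targets, in pixels
    """
    relative_part = rel_err * r                         # dominates large distances
    absolute_part = 2.0 * z_bar**2 / (omega * d) * abs(d_delta_s)  # dominates short ones
    return relative_part, absolute_part, relative_part + absolute_part

# a 1 m separation seen from 100 m, 25 m baseline, 0.2 px disparity mismatch
print(distance_error_regimes(r=1.0, z_bar=100.0, omega=4000.0, d=25.0,
                             rel_err=0.005, d_delta_s=0.2))
```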
with similar argumentsit can be shown that : + 2\displaystyle\frac{\bar{z}^2}{\omega d}\delta\delta u\end{aligned}\ ] ] and + 2\displaystyle\frac{\bar{z}^2}{\omega d}\delta\delta v\end{aligned}\ ] ] the error on is then : so that , for large the error , , is dominated by : while for short the dominant part of the error is the absolute term : , grows linearly with the sensor width , , with the coefficient of proportionality equal to .while through the similarity between the dashed and the filled green triangles it can be shown that . ] designing the set up of a experiments , intrinsic and extrinsic parameters of the system have to be chosen taking into account the volume of the space to be imaged by the cameras , and the accuracy of the reconstruction .in this section we give some suggestions on how to choose the properly set up when performing and experiments making use of the theoretical relations for the error described in the previous sections .when dealing with one camera only , the magnification ratio , plays a crucial role in the choice of the set up . as shown in section [ section::general ] , the magnification ratio fixes the correspondence between distances expressed in meters in the real world and distances expressed in pixels units on the sensor plane .the magnification ratio has to be chosen very carefully taking into account some properties of the objects to be tracked , but also taking care of the desired accuracy of the reconstruction .first of all , an object of size in the real world would be imaged in an object of size on the sensor , such that .the smaller the magnification ratio , the bigger the imaged object on the screen .the previous relation can be seen as a way to fix a lower bound for . as an example , in our experience we want the image of the objects of interest to be at least as large as four pixels .thus , when recording birds with body size , in order to have their image as large as pixels we need to be larger than , while when recording midges with body size of about , the magnification ratio should be larger than .a second issue is related to the minimum appreciable distance . with the same argument used above, it can be shown that the minimum reconstructable metric distance in the real world corresponds to one pixel on the sensor and it is defined by and expressed in meters .this means that two objects at a mutual distance shorter than can not be distinguished in the picture .it is generally very useful to have an estimate of the interparticle distance of the group of interest and choose in such a way that on average the distance between imaged objects is larger than or pixels .otherwise objects would be too close to each other and optical occlusions would occur frequently . as an exampleif the interparticle distance is about , we want to be greater than .the third aspect related to is the choice of the size of the field of view . denoting by and the width and the height of the field of view , it is easy to show that and where represents the size of the sensor , see fig.[fig : single_fov ] .the larger the ratio the larger the field of view . 
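The first three design constraints just listed (imaged object size, interparticle distance, field of view) can be bundled into a rough helper, sketched below. We take the magnification ratio as the metric size imaged onto a single pixel (metres per pixel); this convention, the pixel thresholds, and the example numbers are assumptions made here only for illustration.

```python
def setup_bounds_single_camera(body_size, interparticle, sensor_px,
                               min_object_px=4.0, min_separation_px=15.0):
    """Design helper for a single-camera ('2D') set-up.
    body_size     : typical metric size of the tracked objects
    interparticle : typical metric distance between neighbouring objects
    sensor_px     : sensor width in pixels
    Returns the largest admissible magnification ratio (metres per pixel)
    and the corresponding field of view."""
    m_from_body = body_size / min_object_px          # object must cover enough pixels
    m_from_separation = interparticle / min_separation_px
    m_max = min(m_from_body, m_from_separation)
    field_of_view = m_max * sensor_px                # widest field of view compatible with m_max
    return m_max, field_of_view

# midges of ~2 mm body length, ~5 cm typical mutual distance, 4000 px sensor
print(setup_bounds_single_camera(body_size=2e-3, interparticle=5e-2, sensor_px=4000))
```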
denoting by and the minimum size of the field of view , we would like to choose and such that : the fourth and last issue is related to the error control .eq([eq::2d_deltadeltar ] ) tells that the error on the distance is : .the first two terms of this equation are constant and depend only on the precision in the measure of and in the calibration of , so that they can be kept as small as we want only choosing the measurement instrument with the proper accuracy .instead the last term depends on the chosen set up and the larger the magnification ratio , the larger the absolute error on short distances .denote by the maximum acceptable error .given an estimate of we would like to choose and such that : the first three issues above give lower bound for the magnification ratio , while the last one gives an upper bound . in principle one would like to have a large field of view and a small error , but they are both controlled by , so that a compromise between the two issues has to be found . note that the pixel size is crucial for the two problems related to the object size and the interparticle distance , while the size of the sensor plays a crucial role in the two inequalities for and . in practice ,the ratio is chosen to guarantee the desired accuracy through eq.([eq::2d_setup_zo_cu ] ) and then the size of the sensor needed is determined by eq.([eq::2d_setup_zo_wh ] ) . in the case of real experimentsthe choice of the parameters is a bit more complicated . for the sake of simplicity, we refer only to a symmetric set up with a rotation about the -axis .this is the set up we use when performing our experiment on bird flocks .it has the big advantage that the angle can be derived from the measure of the angle , reducing the number of experimental parameters .the considerations made above about the lower bound for the magnification ratio in order to guarantee the desired size of the imaged objects and the desired interparticle distance on the sensor plane , are still valid when designing a multicamera set up , but unlike experiment , the volume of interest is not determined anymore by the field of view of one camera only . what matters now is the common field of view of the two cameras .the size of the common field of view does not depend only on and but also on and , see fig.[fig : common_fov ] . influences the angle of view of each camera , so that the larger the narrower each field of view and as a consequence the narrower the common field of view . affects the portion of space in the common field of view .the larger the smaller the portion of space imaged by the cameras . affects the distance from the cameras of the common field of view .an angle , see fig.[fig : common_fov ] where this set up is highlighted in black , makes the common field of view optimal for very large . while makes the common field of you optimal for short distances . in particular , the larger the shorter the working distance . the same parameter , , , and , control also the accuracy of the reconstructed distances .in fact , from eq.([eq::3d_rel_dr ] ) and eq.([eq::3d_abs_dr ] ) , the error on is +\\ & -2\displaystyle\frac{\bar{z}^2}{\omega d}\delta\delta s\end{aligned}\ ] ] the constant term depends only on the instrument used to take its measure and it can be strongly reduced choosing an instrument with the proper accuracy .the three terms linear in are controlled by the ratio , while the last term by the ratio . in principlethe larger and the lower the error , while should be as short as possible . 
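A candidate stereo set-up can be screened against the two accuracy targets in the same spirit. In the sketch below the terms growing linearly with the distance from the cameras are lumped into a single coefficient (a hypothetical input, not a symbol from the paper), while the short-distance term uses the quadratic scaling with distance over baseline times focal length quoted in the error analysis.

```python
def check_stereo_setup(z, omega, d, lin_err_coeff, d_delta_s,
                       max_rel_err_large, max_abs_err_short):
    """Screen a stereo set-up against two tolerances: a relative one for large
    mutual distances (terms controlled by z/(omega*d)) and an absolute one for
    short mutual distances (term controlled by z**2/(omega*d)).
    lin_err_coeff lumps the angle/focal/disparity error contributions."""
    rel_err_large = lin_err_coeff * z / (omega * d)
    abs_err_short = 2.0 * z**2 / (omega * d) * d_delta_s
    return (rel_err_large <= max_rel_err_large,
            abs_err_short <= max_abs_err_short,
            rel_err_large, abs_err_short)

# flock-like geometry: 100 m working distance, 25 m baseline, 4000 px focal length
print(check_stereo_setup(z=100.0, omega=4000.0, d=25.0,
                         lin_err_coeff=5.0, d_delta_s=0.2,
                         max_rel_err_large=0.02, max_abs_err_short=0.10))
```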
in practice many environmental constraints are involved in the choice of the parameters and a trade off between the biological characteristic of the group of interest and the accuracy has to be found . as for the experiment ,if we denote by the acceptable threshold for the absolute error on short and by the acceptable relative error for the large distances , we can define the set up the system imposing the two following inequalities : the above inequalities can be used to define an upper bound for both the ratios and and find a set of suitable parameters which allow accuracy in the reconstruction in the desired common field of view , respecting also the constraint due to the objects size and interparticle distances . in many casessome of the parameters are fixed by the location where the experiments is performed . indeed ,when designing our experiment on bird flocks , we could not choose .the experiment is performed on the roof of a building and birds are almost at from the cameras .we can not go closer .moreover , the baseline can not be larger than .we put the cameras the furthest we can , so that the ratio is defined by the environmental constraint and it is equal to .we estimated , ( when directly measured through the method described in ) and .as a consequence , telling that the relative error on large distances is smaller than . instead , for short distances we choose , which is a typical bird to bird distance and we estimate .the previous inequality gives , than , the following lower bound for .instead , we perform the experiment on midge swarms in a park and we can go as close as we want to the swarm . the working distance is than not fixed by environmental constraints .but we can not choose as large as we want .in fact , we take pictures of midges using the scattering of the sun light , so that they appear as white dots on a black background . for very large is difficult to have a good scatter effects for both the cameras .for this reason when performing the experiments with swarms we first fix the maximum and then we choose and accordingly . in practicewe put the cameras the furthest possible and we set the working distance choosing in such a way to guarantee a small error on the short distances . for this experimentwe choose , because we do not need a wide field of view since swarms are generally very stable .we define which is the body length of a midge .the inequality above , implies that should verify : . if the baseline is , and we estimate than and we find that .every time the experiment is performed , an estimate of the reconstructed error should be taken , in order to check the experimental accuracy in measuring and calibrating that specific set up .the idea is to put some targets in the common field of view of the cameras , to measure their distance with a precise instrument , and to reconstruct their positions .the comparison between the measured distances between pairs of targets and the reconstructed distances tells how accurate the reconstruction is .moreover , a careful analysis of the results can reveal the source of inaccuracy and can be used when trying to fix problems .note that the theoretical formulation of the reconstruction problem described in this paper is meant to give an estimate of the errors when designing the experiment . 
in practice ,eqs.([eq::lllr ] ) in general does not have an exact solution .this happens because of the error in the segmented objects due to image noise and to all the errors in the measure and calibration of intrinsic and extrinsic parameters .an approximation of eqs.([eq::lllr ] ) is then found , generally making use of a least squares method .we perform experiments in the field with starling flocks and midge swarms .the camera system set up is similar in both cases .we use two synchronized cameras shooting at fps . for flocking eventswe choose a baseline of m and a working distance of m , while for swarming events the baseline is about m with a working distance of m .the main difference between the two systems is the way we measure and calibrate the extrinsic parameters . .] for swarming events we decide the orientation of each camera independently ; we find the interesting swarm , we fix the baseline and then we rotate each camera in order to center the swarm in the image .we measure the baseline but we do not directly measure the mutual orientation of the stereometric cameras .instead we retrieve the angles , , , and making use of a post calibration procedure .two targets , checkerboard , are mounted on a bar and their distance is accurately measured . pictures of the targets are taken in different positions , moving the bar in the volume where the event of interest take place .a montecarlo algorithm is then used to find the angles minimizing the error in the reconstruction of the distances between the postcalibration targets .in addition we take some pictures of the targets on the bar , which are not used for the calibration procedure but only to check the reconstruction error .typical reconstruction errors for the targets used during the calibration process are shown in fig.[fig : midges_postcal ] , orange circles , and compared with the reconstruction error on the control targets not used in the calibration process , green circles .the errors on the two sets of targets are comparable and in both cases the relative errors are lower than .this guarantees the reliability of our retrieved trajectories ., between and .relative errors are lower than . *b : * absolute reconstruction errors on short distances of about .for all the targets the absolute error is lower than .] in the set up for the experiment on birds , we can not use a post calibration procedure , because we would need to take pictures of targets in the sky at at least m from the cameras , nor we can take pictures of known targets to check the quality of the reconstruction . for this reason we fix the mutual orientation of the cameras a priori as described in , and we record only those events happening in the common field of view .but we still need to check the accuracy . 
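Before describing those accuracy tests, the post-calibration used for the swarm set-up can be pictured as a simple random search over candidate mutual rotations that minimises the error on the known bar length. The sketch below is a toy version: the reconstruct hook, the angle ranges, and the quadratic cost are placeholders standing in for the actual stereo reconstruction and Monte Carlo scheme.

```python
import random

def postcalibrate_angles(bar_views, known_length, reconstruct, n_trials=20000):
    """Toy post-calibration by random search.
    bar_views    : one entry per bar position, each of the form
                   ((A_left, A_right), (B_left, B_right)) with pixel coordinates
                   of the two bar targets in the left and right images
    known_length : measured metric distance between the two targets
    reconstruct  : function(angles, uv_left, uv_right) -> 3D point (hypothetical hook)
    """
    best_angles, best_cost = None, float("inf")
    for _ in range(n_trials):
        # sample three small mutual rotation angles (radians)
        angles = tuple(random.uniform(-0.05, 0.05) for _ in range(3))
        cost = 0.0
        for (a_l, a_r), (b_l, b_r) in bar_views:
            pa = reconstruct(angles, a_l, a_r)
            pb = reconstruct(angles, b_l, b_r)
            dist = sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5
            cost += (dist - known_length) ** 2       # penalise wrong bar lengths
        if cost < best_cost:
            best_angles, best_cost = angles, cost
    return best_angles, best_cost
```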
for this aimwe perform reconstruction tests in a different location , setting up the cameras in a smaller set up .we want to check errors especially on the reconstruction of large distances , which are the ones affected by errors in the measure of intrinsic and extrinsic parameters .for this reason we perform reconstruction tests , keeping the ratio as in the field .thus we choose a baseline of m and we put targets at a distance in between m and m .we accurately measure the distances between all pairs of targets .we take a picture of those targets and then we use the measured extrinsic parameters to reconstruct the distances between pairs of targets .the difference between the measured distances and the reconstructed ones gives the error on the distances .fig.[fig : birds_precal]a shows typical relative errors for our reconstruction test on targets at a large mutual distance . as shown in the plot , our reconstruction error is smaller than . in fig.[fig : birds_precal]b absolute reconstruction errors on the distances of targets at short are shown .the short distance of is chosen to simulate the distance between birds in a quite dense flock .the results in fig.[fig : birds_precal]b show that we have errors of the order of , showing the high quality of the reconstructed distances . .* * a : error on the measure of the baseline . * relative reconstruction error when on the baseline , green circles , compared with the error using the correct measure of , orange circles . as expected relative errors are constant and equal to .* b : error on the measure of the mutual angle about the y axis . *relative reconstruction error when , green circles , compared with the error using the correct measure of , orange circles .the slope of the linear fit is equal to corresponding to .the error is quite big and it reaches the value for . * c : error on the focal length . *relative reconstruction error when , green circles , compared with the error using the correct measure of , orange circles .the slope of the linear fit is equal to corresponding to .relative error reaches the value for .note that the term is added to , in order to not affect the fit with a quantity not constant for all the pairs of target .* d : segmentation error .* relative reconstruction error when , green circles , compared with the error using the correct segmentation , orange circles . the slope of the linear fit is equal to corresponding to . the relative error at is quite close to . ]the average of the errors on the reconstructed distances is by far the first measure to look in the results of a reconstruction test .but it is also interesting and more useful to analyze the results looking for sources of errors .the big span of for targets used in the test , allows a more detailed analysis .fig.[fig : plots ] shows the results on the same reconstruction test of fig.[fig : birds_precal ] , but where we manually added errors on the intrinsic and extrinsic parameters of the system .these results perfectly match the theory described in the paper .the constant relative error due to a wrong measure of is shown in fig.[fig : birds_precal]a .fig.[fig : birds_precal]b,[fig : birds_precal]c and [ fig : birds_precal]d show the linear trend of the three terms of the relative error on depending respectively on , and . instead in fig.[fig : short_distances ] the effect of a wrong segmentation of targets at short distances of about are shown . as expectedthis term is quadratic in and it reaches for . 
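Reading those test plots can be partly automated: fitting the relative error against the targets' distance from the cameras and comparing the constant, linear and quadratic contributions mirrors the diagnostic logic used here (constant offset: baseline; linear trend: angle or focal length; quadratic growth: segmentation). The sketch below is only indicative and assumes a quadratic fit is adequate.

```python
import numpy as np

def diagnose_error_trend(distances, rel_errors):
    """Fit relative reconstruction error vs. distance from the cameras with a
    quadratic polynomial and report the dominant contribution at the median
    distance: constant -> baseline, linear -> angle/focal, quadratic -> segmentation."""
    c2, c1, c0 = np.polyfit(distances, rel_errors, deg=2)
    mid = float(np.median(distances))
    contributions = {
        "constant (baseline)": abs(c0),
        "linear (angle/focal)": abs(c1 * mid),
        "quadratic (segmentation)": abs(c2 * mid**2),
    }
    dominant = max(contributions, key=contributions.get)
    return dominant, contributions
```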
.* relative reconstruction error when , green circles , compared with the error using the correct segmented points , orange circles .the coefficient of the quadratic fit is equal to and it is compatible with . ]note that we forced the system to have errors on intrinsic and extrinsic parameters to be much bigger than the typical experimental errors .a relative error of on would correspond , in our birds experimental set up , to an absolute error of about , which is not realistic at all .the only reasonable error is the one on , and as shown in fig.[fig : plots ] , it is the one mostly affecting the reconstruction accuracy .whenever we run a test on the reconstruction quality we plot the relative error on large vs ; we first look at the average value of the errors .if we obtain high and almost constant errors the most probable cause is a bad measure of and we check it taking again the distance , or measuring the baseline more carefully . then we look if there is a linear trend relating the relative error on to .if we find a clear linear trend we try to understand if the error is coming from a bad measure of , or performing again the test and in the worst case calibrating a new time the intrinsic parameters of the system . in the nasty case when we find high reconstruction errors due to a miscalibration of the intrinsic parameters or due to a bad measure of the extrinsic parameters , we throw away the correspondent collected data , so that we are sure that our analysis is based only on reliable trajectories .in the design of a experiment the choice of intrinsic and extrinsic parameters is very delicate .a trade off between biological necessity , environmental constraints and accuracy of the reconstruction of the position of the imaged targets has to be found . in the paper we showed how errors in the measurement of the system parameters affect the reconstruction of the mutual distance between targets . as a consequence they affect the analysis of quantities like velocity , acceleration and correlation functions .moreover errors on different parameters influence the reconstructed distances depending on their size and on their positions .in particular , large distances are mostly affected by errors on the orientation of the cameras , while short distances by segmentation errors . in the example of fig.[fig : plots]b , a small error on of produces relative errors up to , while in the example of fig.[fig : short_distances ] a segmentation error of produces errors up to at over mutual distances of about . in our experimentwe manage to keep relative errors on large distances smaller than and absolute errors on short distances below ( over distances of about ) . independently on the intrinsic and extrinsic parameters calibration procedures and on the segmentation software used , the best way to reduce the reconstruction error is to design the proper set up .the strategy is to choose large and trying to be as close as possible to the group of interest .but at the end of the day , the only way to guarantee the reliability of the retrieved trajectories is to take care of the error while planning the experiment and then test the accuracy .m. moussaid , d. helbing , s. garnier , a. johansson , m. combe , g. theraulaz , experimental study of the behavioural mechanisms underlying self - organization in human crowds .b _ * 276 * , 27552762 ( 2009 ) .a. attanasi , a. cavagna , l. del castello , i. giardina , t.s .grigera , a. jeli , s. melillo , l. parisi , o. pohl , e shen , and m. 
viale , information transfer and behavioural inertia in starling flocks . _ nature physics _ * 10 * , 9 , 691696 ( 2014 ) .a. attanasi , a. cavagna , l. del castello , i. giardina , a. jelic , s. melillo , l. parisi , o. pohl , e. shen , m. viale , emergence of collective changes in travel direction of starling flocks from individual birds fluctuations , arxiv:1410.3330 ( 2014 ) .a. attanasi , a. cavagna , l. del castello , i. giardina , s. melillo , l. parisi , o. pohl , b. rossaro , e. shen , e. silvestri , m. viale , finite - size scaling as a way to probe near - criticality in natural swarms .lett . _ * 113 * , 238102 , ( 2014 ) .a. attanasi , a. cavagna , l. del castello , i. giardina , s. melillo , l. parisi , o. pohl , b. rossaro , e. shen , e. silvestri , and m. viale , collective behaviour without collective order in wild swarms of midges , _ plos computational biology _ * 10 * , 7 , 115 ( 2014 ) .s. butail , n. manoukis , m. diallo , a.s .yaro , a. dao , s.f .traore , j.m .ribeiro , t. lehmann , d.a .paley , 3d tracking of mating events in wild swarms of the malaria mosquito anopheles gambiae ._ engineering in medicine and biology society , embc , 2011 annual international conference of the ieee _ * 75 * , 720723 ( 2011 ) .d. theriault , n.w .fuller , b.e .jackson , e. bluhm , d. evangelista , z. wu , m. betke , t.h .hedrick , a protocol and calibration method for accurate multi - camera field videography .biol _ * 217 * , 18431848 ( 2014 ) .g. towne , d. h. theriault , z. wu , n. fuller , t. h. kunz , and m. betke , error analysis and design considerations for stereo vision systems used to analyze animal behavior ._ proceeding of ieee workshop on vaib _ , ( 2012 ) .
Three-dimensional tracking of animal systems is key to understanding collective behavior. Experimental data collected with a stereo camera system allow the reconstruction of the 3D trajectory of each individual in the group. These trajectories can then be used to compute quantities of interest for the study of collective motion, such as velocities, mutual distances between individuals, and correlation functions. The reliability of the retrieved trajectories depends directly on the accuracy of the 3D reconstruction. In this paper we carefully analyze the most significant errors affecting 3D reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
today understanding poker games , from the mental aptitide of their players to the underlying probabilistic structure , represents a great challenge for scientists belonging to several communities as psychologists , computer scientists , physicists , and mathematicians . in general, these games can be analyzed considering psychological aspects , information theory approaches and analytical descriptions .notably , approaches based on sociophysics allow to study the role of human behavior .on the other hand , information theory and analytical approaches allow to identify both new algorithms in the context of artificial intelligence , and universal properties of these games .an interesting problem , when dealing with poker , is constituted by its classification , i.e. , ` skill game ' or gambling .this issue has not yet been solved , although the nowadays available related answer has a long list of implications .a preliminary attempt to solve this question , by using the framework of statistical mechanics , has been developed in , where the author analyzed the role of rationality in a simplified scenario , referred to the poker variant called texas holdem . in general , all variants follow a similar logic : rounders ( i.e. , poker players ) receive a number of cards , and have to decide if to bet or not , by computing the possible combinations they can set with their cards ( called _ hand _ ) . after evaluating if the received hand is promising ornot , each rounder can take part to the pot by placing a bet ( money or chips ) , otherwise she / he folds the _hand_. therefore , the use of money makes the challenge meaningful , otherwise none would have a reason to fold her / his hand .poker challenges can follow two different formats , i.e. , cash game or tournament . during a tournament ,rounders pay , only once , an entry fee : a fraction goes into the prize pool , and the remain part is a fee to play . eventually , top players share the prize pool ( usually money ) . on the other hand ,playing poker in the cash game format means to use real money during the challenge . in this case, rounders can play until they have money and , although there no entry fees to pay , a fraction of each pot is taxed , i.e. , a small ` rake ' is applied . in this work ,we propose a framework to study the evolution of a poker challenge , considering the cash game format , by a thermodynamic description .in particular , we aim both to model these dynamics to achieve insights , and to link the resulting thermodynamic description with the probability theory tacitly governing these games .in this work , we aim to describe cash game poker challenges by the language of thermodynamics . in particular , since these challenges entail transfers of money among different parts , i.e. , rounders and dealers , we assume that the way thermodynamics explains equilibria and energy transfers between systems constitutes a fundamental tool to our investigations .firstly , we consider a simple thermodynamic system composed of the subsystem and its environment .the total energy of the system is given by the energy of and that of , i.e. , . in the proposed model ,the environment is a poker room , whereas corresponds to the table where two rounders , say and , face by a ` heads - up ' challenge .a ` heads - up ' is a challenge characterized by the presence of only two rounders .therefore , we can identify two subsystems of : and , corresponding to the two rounders . 
since poker challenges are performed following the cash game format ,the money is the exchanged quantity , hence mapping money to the energy of the systems come immediate . in doing so , we have and that correspond to the money of and , respectively .therefore and , as initial condition , we impose that at rounders have the same amount of money , i.e. , .figure [ fig : figure_1 ] offers a pictorial representation of the described system .inside the environment , with the arrow indicating the allowed direction of energy transfers , i.e. , from to . on the right , a zoom on the subsystem , showing the two subsystems and , representing the two rounders ( i.e. , and ) . inside , as shown by arrows , the energy can flow from to and vice versa . ] during the challenge , some rounds are won by and others by ; hence a fraction of energy is transferred from the subsystem to , and vice versa , over time .the amount of transferred energy corresponds to the total amount of money that flows from to and vice versa . for the sake of simplicity, we consider that at each round rounders bet the same amount of money , i.e. , pots are constant .in particular , is defined as with total flow of energy from the subsystem to , at time .then , indicates the total amount of enegy transferred from to , as result of all successes of the rounder .it is worth to note that , in real scenarios , poker rooms apply a small fee , called ` rake ' , to each pot .usually , the ` rake ' corresponds to about of the pot . as result ,the total energy of decreases over time by a factor , due to energy transfers between and .since we are dealing with a closed system ( i.e. , ) , the loss in energy can be thought in terms of energy reduction due to the entropy s growth .notably , this concept characterizes the helmholtz free energy potential with and , energy and temperature of the system , respectively . in few words ,the free energy corresponds to the energy a system can actually use . then , we can map this concept to our model by the following relation with free energy of our system , that is available at time , after all energy transfers .the energy lost by goes to the environment , hence .it is worth to observe that , as the entropy of a system does , the quantity increases over time , and can never be negative .now , we focus our attention on the evolution of the system .in particular , we consider one subsystem , i.e. , or , in order to analyze its amount of energy over time .let us consider , for instance , ( that represents the rounder ) whose evolution can be described by the following relation as the amount of energy in corresponds to the initial amount of energy in this subsystem ( i.e. , ) , minus the amount of energy that flowed to ( i.e. , ) , plus the amount of energy that flowed from to ( i.e. , ) reduced of a factor . in an equilibrium condition , , therefore the primary target of the rounder is to win all the money of , while avoiding to lose her / his money .hence , rounder aims to obtain . considering poker as a skill game , the flow depends on the ability of the rounder .then , has a probability to win each round , strongly related to her / his skills .as discussed before , to simplify the scenario , we suppose that rounders bet always the same amount of money , forming the pot , so that . 
in doing so, we can define the amount of energy transferred from to as and the amount of energy transferred from to going back to the equilibrium condition defined in equation [ eq : equilibrium ] , we can write working a little bit of algebra , from equation [ eq : equilibrium_probability ] , we obtain therefore , we can compute the minimal success probability that the rounder needs to reach her / his target , i.e. , to win ( or , at least , to not losing money ) .it is worth to highlight that , starting by a thermodynamic description of the system , we can define a relation between the ` rake ' , applied by a poker room , and the rounders skills , i.e. , their probability to success in poker cash game . in light of these results , it is interesting to evaluate both the amount of money rounders can win by playing poker cash game , and the amount of money the poker room generates during the challenge . considering the rounders s perspective , we are interested in computing the expected value of energy that flows in the subsystem , i.e. , ( note that similar considerations hold also for ) .the value of can be computed as follows corresponds to , and , with representing the total energy transfer from to , i.e. , .then , we obtain and we find the following relation = t \cdot \frac{\delta}{2 } [ p_a \cdot ( 2 - \epsilon ) - 1]\ ] ] that is in perfect accordance with results achieved in equation [ eq : probability_a_epsilon ] , as for the expected value of energy transferred to is .moreover , it is immediate to note that , considering equation [ eq : energy_evolution ] , the rate of variation of the energy of one subsystem ( e.g. , ) corresponds to it is worth to observe that , by all the illustrated equations , it is possible to evaluate the potential gain of a rounder , once her / his winning probability is known .on the other hand , considering the poker room perspective , the overall scenario becomes pretty nice , because as we are going to show , its profits can only increases over time without running any risk .notably , the environment ( i.e. , the poker room ) receives a constant amount of energy , at each time step , equal to .hence , in the event rounders have the same probability to win ( i.e. , ) , it is interesting to compute the number of time steps required to let the poker room drain almost all their money ( i.e. , their energy ) . since to perform a round both rounders have to bet the same amount of money , a minimal amount of energy always will remain in the subsystem . 
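The break-even condition and the expected gain derived above, together with the room's per-round fee, lend themselves to a short numerical sketch. The drain-time estimate at the end ignores the residual half-pot left on the table and is only an approximation of the relation derived next; the numerical values in the example are arbitrary.

```python
import random

def breakeven_probability(rake):
    """Minimal per-round winning probability not to lose money on average,
    from the condition p * (2 - rake) = 1."""
    return 1.0 / (2.0 - rake)

def expected_gain(p_win, pot, rake, rounds):
    """Expected money flowing to the rounder after `rounds` rounds,
    following rounds * (pot / 2) * [p_win * (2 - rake) - 1]."""
    return rounds * (pot / 2.0) * (p_win * (2.0 - rake) - 1.0)

def drain_time_estimate(stack, pot, rake):
    """Rough number of rounds for the room to collect the two stacks,
    assuming it receives rake * pot per round (residual half-pot ignored)."""
    return 2.0 * stack / (rake * pot)

def simulate_cash_game(stack=100.0, pot=2.0, rake=0.05, p_a=0.5, seed=0):
    """Direct simulation of the energy exchange: each round both rounders put
    half the pot in, the room keeps rake * pot, the winner takes the rest."""
    rng = random.Random(seed)
    e_a = e_b = stack
    rounds = 0
    while e_a >= pot / 2 and e_b >= pot / 2:
        rounds += 1
        e_a -= pot / 2
        e_b -= pot / 2
        gain = (1.0 - rake) * pot
        if rng.random() < p_a:
            e_a += gain
        else:
            e_b += gain
    return rounds, e_a, e_b

print(breakeven_probability(0.05))                               # ~0.513
print(expected_gain(p_win=0.55, pot=2.0, rake=0.05, rounds=1000))
print(drain_time_estimate(stack=100.0, pot=2.0, rake=0.05), simulate_cash_game())
```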
in particular , this quantity is equal to .hence , supposing both subsystems , at time , contain an energy equal to , the last round entails one subsystem loses completely energy and the other has , at the end , an energy equal to .therefore , the maximum amount of energy that the environment can receive is , so that the following relation holds then , it is possible to compute the number of time steps to let the poker room draining almost all the rounders s money : equation [ eq : poker_room_time ] shows a direct relation between the time and the rake applied by the poker room : as the latter increases the time to drain almost all the energy decreases .in this work , we propose a framework for studying poker challenges in the context of thermodynamics .in particular , we map a simple scenario , where two rounders face , to a thermodynamic system composed of a subsystem embedded in a larger environment .the former represents the two rounders , whereas the latter the poker room .remarkably , from a simplified description of the game dynamics , we achieve insights on poker challenges , in the cash game format . even considering this format of poker as a ` skill game ' ( see ) ,we identify a direct link between the rounders s skills and the fee applied by poker room , called ` rake ' . in doing so , it is possible to know the minimal probability to success a rounder needs to have in order to be a successful player . as shown , gaining by this activity is a very hard task , even for skilled rounders , as they have to keep their probability to win very high . in real scenarios ,many expert rounders are very good and fast in computing winning probabilities for their _ hands _ , hence they perform online cash game by a ` multitabling ' strategy : they face at the same time several opponents , with the aim to optimize their profits ( obviously , increasing the probability of losing a lot of money ) .moreover , we analyze profits of a poker room , obtained while rounders play the cash game poker . in particular , mapping this profit to the energy of the environment , we achieve the relation , with representing the ` rake ' , the pot of each round , and the number of time steps . it is worth noting that there are two different situations that allow the poker room to increase its profits : while the first should be kept low ( e.g. , or less ) as a strategy marketing to attract rounders in the poker room , the second requires more attention as , in principle, it can lead to a fraudulent strategy , now we briefly illustrate .people usually are not worried about frauds in poker , as they play against other people , and not against a dealer ( as in games like the roulette ) .therefore , in principle , there are no reasons for the electronic dealer to favor a particular rounder in the process of cards distribution .anyway , it is important to highlight that the poker room does not take an advantage when rounders perform ` all - in ' actions ( i.e. , the bet all their money in only one _ hand _ ) .then , supposing rounders are rational , i.e. , their actions are performed by considering their probability to win each round , a pseudo - random algorithm for cards distribution could be properly defined for generating uncertain scenarios . here ,for uncertain scenarios we indicate those situations where both rounders have low winning probabilities , by considering only the information they have ( i.e. 
, their _ hand _ and , in case , common cards ) .therefore , a fraudulent strategy could be implemented by using an algorithm to provide rounders with low winning probability at each hand , in order to avoid they perform ` all - in ' .it is evident that by this strategy , it would be possible to indirectly increasing for each challenge .moreover , it would be also very difficult to find this kind of fraud by analyzing the algorithm , used by a poker room , if this fraudulent scenario is not considered . in order to conclude, we would like to emphasize that some of the considerations about the probability to win a cash game challenge can be applied also in the context of financial trading .in particular , for the strategy adopted by ` scalpers ' , i.e. , traders that in few seconds open and close a position ( i.e. , buy and sell financial products as stocks , bonds , etc . ) , as also in those cases for each transaction the banking system applies a kind of ` rake ' .barra , a. , contucci , p. , sandell , r. , vernia , c. : an analysis of a large dataset on immigrant integration in spain .the statistical mechanics perspective on social action. _ scientific reports _ * 4 * 4174 ( 2014 )
Poker is one of the most popular card games, and its rational investigation represents a major challenge in several scientific areas, spanning from information theory and artificial intelligence to game theory and statistical physics. Many variants of poker exist, but all of them use money to make the challenge meaningful, and all can be played in two formats: tournament and cash game. An important issue is the classification of poker as a 'skill game' or as gambling. This classification is still an open question, with a long list of implications (e.g., legal and healthcare) that vary from country to country. In this study we analyze poker challenges, in the cash game format, as thermodynamic systems. In particular, we propose a framework that, although based on a simplified scenario, yields useful information for rounders (i.e., poker players) and allows us to evaluate the role of the poker room. Finally, starting from this thermodynamic model, we describe the evolution of a poker challenge, making a direct connection with the probability theory underlying its dynamics and finding that, even if these games are regarded as 'skill games', making a real profit from poker is very hard.
models of learning in games fall roughly into two categories . in the first , the learning player forms beliefs about the future behavior of other players and nature , anddirects her behavior according to these beliefs .we refer to these as fictitious - player - like models . in the second ,the player is attuned only to her own performance in the game , and uses it to improve future performance .these are called models of reinforcement learning .reinforcement learning has been used extensively in artificial intelligence ( ai ) .samuel wrote a checkers - playing learning program as far back as 1955 , which marks the beginning of reinforcement learning ( see ) . since then many other sophisticated algorithms , heuristics , and computer programs , have been developed , which are based on reinforcement learning .( ) .such programs try neither to learn the behavior of a specific opponent , nor to find the distribution of opponents behavior in the population . instead , they learn how to improve their play from the achievements of past behavior .until recently , game theorists studied mostly fictitious - player - like models .reinforcement learning has only attracted the attention of game theorists in the last decade in theoretical works like , , , and in experimental works like .in all these studies the basic model is given in a strategic form , and the learning player identifies those of her strategies that perform better .this approach seems inadequate where learning of games in extensive form is concerned .except for the simplest games in extensive form , the size of the strategy space is so large that learning , by human beings or even machines , can not involve the set of all strategies .this is certainly true for the game of chess , where the number of strategies exceeds the number of particles in the universe .but even a simple game like tic - tac - toe is not perceived by human players in the full extent of its strategic form .the process of learning games in extensive form can involve only a relatively small number of simple strategies .but when the strategic form is the basic model , no subset of strategies can be singled out . thus , for games in extensive form the structure of the game tree should be taken into consideration . instead of _ strategies_ being reinforced , as for games in strategic form , it is the _ moves _ of the game that should be reinforced for games in extensive form .this , indeed , is the approach of heuristics for playing games which were developed by ai theorists .one of the most common building block of such heuristics is the _ valuation _ , which is a real valued function on the possible moves of the learning player .the valuation of a move reflects , very roughly , the desirability of the move . given a valuation , a learning process can be defined by specifying two rules : * a _ strategy rule _ , which specifies how the game is played for any given valuation of the player ; * a _ revision rule _ , which specifies how the valuation is revised after playing the game .our purpose here is to study learning - by - valuation processes , based on simple strategy and revision rules . in particular , we want to demonstrate the convergence properties of these processes in repeated games , where the stage game is given in an extensive form with perfect information and any number of players . converging results of the type we prove here are very common in the literature of game theory . 
but as noted before , convergence of reinforcement is limited in this literature to strategies rather than moves . to the best of our knowledge ,the ai literature while describing dynamic processes closely related to the ones we study here do not prove convergence results of this type .first , we study stage games in which the learning player has only two payoffs , 1 ( win ) and 0 ( lose ) .two - person win - lose games are a special case . but here , there is no restriction on the number of the other players or their payoffs . for these gameswe adopt the simple _ myopic strategy rule_. by this rule , the player chooses in each of her decision node a move which has the highest valuation among the moves available to her at this node . in casethere are several moves with the highest valuation , she chooses one of them at random . as a revision rulewe adopt the simple _ memoryless revision _ : after each round the player revises only the valuation of the moves made in the round .the valuation of such a move becomes the payoff ( 0 or 1 ) in that round .equipped with these rules , and an initial valuation , the player can play a repeated game . in eachround she plays according to the myopic strategy , using the current valuation , and at the end of the round she revises her valuation according to the memoryless revision .this learning process , together with the strategies of the other players in the repeated game , induce a probability distribution over the infinite histories of the repeated game .we show the following , with respect to this probability . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suppose that the learning player can guarantee a win in the stage game .if she plays according to the myopic strategy and the memoryless revision rules , then starting with any nonnegative valuation , there exists , with probability 1 , a time after which the player always wins ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ when the learning player has more than two payoffs , the previous learning process is of no help . in this casewe study the _ exploratory myopic strategy rule _, by which the player opts for the maximally valued move , but chooses also , with small probability , moves that do not maximize the valuation .the introduction of such perturbations makes it necessary to strengthen the revision rule .we consider the _ averaging revision_. 
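Before describing the averaging revision in detail, the simpler pair of rules just used, the myopic strategy together with the memoryless revision, can be written down in a few lines. This is a sketch rather than the authors' implementation: moves are identified by hashable node labels and the nonnegative initial valuation is taken to be a constant.

```python
import random

class ValuationLearner:
    """Myopic strategy + memoryless revision for a win/lose stage game.
    The valuation is kept per move (per node of the learning player)."""

    def __init__(self, initial_value=0.0, rng=None):
        self.valuation = {}                  # move label -> current valuation
        self.initial = initial_value
        self.rng = rng or random.Random()

    def value(self, move):
        return self.valuation.get(move, self.initial)

    def choose(self, available_moves):
        """Myopic strategy rule: pick uniformly among maximally valued moves."""
        best = max(self.value(m) for m in available_moves)
        top = [m for m in available_moves if self.value(m) == best]
        return self.rng.choice(top)

    def revise(self, moves_made, payoff):
        """Memoryless revision rule: each move made this round takes the
        round's payoff (0 or 1) as its new valuation."""
        for m in moves_made:
            self.valuation[m] = payoff
```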
like the memoryless revision , the player revises only the valuation of moves made in the last round .the valuation of such a move is the average of the payoffs in all previous rounds in which this move was made . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if the learning player obeys the exploratory myopic strategy and the averaging revision rules , then starting with any valuation , there exists , with probability 1 , a time after which the player s payoff is close to her individually rational payoff ( the maxmin payoff ) in the stage game . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the two previous results indicate that reinforcement learning achieves learning of playing the stage game itself , rather than playing against certain opponents .the learning processes described guarantee the player her individually rational payoff ( which is the win in the first result ) .this is exactly the payoff that she can guarantee even when the other players are disregarded .our next result concerns the case where all the players learn the stage game . by the previous resultwe know that each can guarantee his individually rational payoff .but , it turns out that the synergy of the learning processes yields the players more than just learning the stage game . indeed, they learn in this case each other s behavior and act rationally on this information . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suppose the stage game has a unique perfect equilibrium .if all the players employ the exploratory myopic strategy and the averaging revision rules , then starting with any valuation , with probability 1 , there is a time after which their strategy in the stage game is close to the perfect equilibrium . 
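For completeness, here is the analogous sketch for the exploratory myopic strategy with averaging revision. The uniform exploration over all available moves with a fixed small probability is our own concrete choice; the description above only requires that non-maximizing moves be tried with small probability.

```python
import random

class ExploratoryAveragingLearner:
    """Exploratory myopic strategy + averaging revision: with a small
    probability a non-maximal move is tried, and each move's valuation is
    the average payoff over all rounds in which it was played."""

    def __init__(self, explore=0.05, initial_value=0.0, rng=None):
        self.explore = explore
        self.initial = initial_value
        self.sums, self.counts = {}, {}
        self.rng = rng or random.Random()

    def value(self, move):
        n = self.counts.get(move, 0)
        return self.sums.get(move, 0.0) / n if n else self.initial

    def choose(self, available_moves):
        moves = list(available_moves)
        if self.rng.random() < self.explore:          # occasional exploration
            return self.rng.choice(moves)
        best = max(self.value(m) for m in moves)
        top = [m for m in moves if self.value(m) == best]
        return self.rng.choice(top)

    def revise(self, moves_made, payoff):
        """Averaging revision: running average of the payoffs obtained in all
        rounds in which the move was made."""
        for m in moves_made:
            self.sums[m] = self.sums.get(m, 0.0) + payoff
            self.counts[m] = self.counts.get(m, 0) + 1
```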
although valuation is defined for all moves , the learning player needs no information concerning the game when she starts playing it . indeed , the initial valuation can be constant . to play the stage game with this valuation , the player needs to know which moves are possible to her only when it is her turn to play , and then choose one of them at random . during the repeated game , the player should be able to record the moves she made and their valuations . still , the learning procedure does not require that the player knows how many players there are , let alone the moves they can make and their payoffs . the learning processes discussed here treat separately the valuation for every node . for games with a large number of nodes ( or states of the board ) , that may be unrealistic because the chance of meeting a given node several times is too small . in chess , for example , almost any state of the board , except for the first few ones , has been seen in recorded history only once . in order to make these processes more practical , similar moves ( or states of the board ) should be grouped together , such that the number of similarity classes is manageable . when the valuation of a move is revised , so are the valuations of all the moves similar to it . we will deal with such learning processes , as well as with games with incomplete information , in a later paper . consider a finite game with complete information and a finite set of players . the game is described by a tree , where and are the sets of terminal and non - terminal nodes , correspondingly , the root of the tree is , and the set of arcs is . elements of are ordered pairs , where is the immediate successor of . the set , for , is the set of nodes in which it is s turn to play . the sets form a partition of . the _ moves _ of player at node are the nodes in . denote . for each the function is s payoff function . the depth of the game is the length of the longest path in the tree . a game with depth 0 is one in which and . a behavioral strategy , ( strategy for short ) for player is a function defined on , such that for each , is a probability distribution on . the super game is the infinitely repeated game , with stage game . an infinite history in is an element of . a finite history of rounds , for , is an element of . a _ super strategy _ for player in is a function on finite histories , such that for , is a strategy of in , played in round . the super strategy induces a probability distribution on histories in the usual way . we fix one player ( the learning player ) and omit subscripts of this player when the context allows it . we first introduce the basic notions of playing by valuation . a _ valuation _ for player is a function . playing the repeated game by valuation requires two rules that describe how the stage game is played for a given valuation , and how a valuation is revised after playing .
* a _ strategy rule _ is a function . when player's valuation is , s strategy in is . * a _ revision rule _ is a function , such that for the empty history , . when player's initial valuation is , then after a history of plays , s valuation is . the _ valuation super strategy _ for player , induced by a strategy rule , a revision rule , and an initial valuation , is the super strategy , which is defined by for each finite history . we consider first the case where player has two possible payoffs in , which are , without loss of generality , 1 ( win ) and 0 ( lose ) . a two - person win - lose game is a special case , but here we place no restrictions on the number of players or their payoffs . we assume that learning by valuation is induced by a strategy rule and a revision rule of a simple form . this rule associates with each valuation the strategy , where for each node , is the uniform distribution over the maximizers of on . that is , in each node of player , the player selects at random one of the moves with the highest valuation . for a history of length 1 , the valuation is revised to which is defined for each node by . for a history , the current valuation is revised in each round according to the terminal node observed in this round . thus , . the temporal horizons , future and past , required for these two rules are very narrow . playing the game , the player takes into consideration just her next move . the revision of the valuation after playing depends only on the current valuation , and the result of this play , and not on the history of past valuations and plays . in addition , the revision is confined only to those moves that were made in the last round . [ main ] let be a game in which player either wins or loses . assume that player has a strategy in that guarantees him a win . then for any initial nonnegative valuation of , and super strategies in , if is the valuation super strategy induced by the myopic strategy and the memoryless revision rules , then with probability 1 , there is a time after which is winning forever . the following example demonstrates learning by valuation . consider the game in figure [ f : goodcase ] , where the payoffs are player 1's . ( figure [ f : goodcase ] : game tree . ) suppose that player 1's initial valuation of each of the moves and is 0 . the valuations that will follow can be one of , , and , where the first number in each pair is the valuation of and the second of . ( the valuation can not be reached from any of these valuations ) . we can think of these possible valuations as states in a stochastic process . the state is absorbing . once it is reached , player 1 is choosing and being paid 1 forever . when the valuation is , player 1 goes . she will keep going , and winning 1 , as long as player 2 is choosing . once player 2 chooses , the valuation goes back to . thus , the only way player 1 can fail to be paid 1 from a certain time on is when recurs infinitely many times . but the probability of this is 0 , as the probability of reaching the absorbing state from state is 1/2 . note that the theorem does not state that with probability 1 there is a time after which player 1's strategy is the one that guarantees him payoff 1 .
indeed , in this example , if player 2's strategy is always , then there is a probability 1/2 that player 1 will play for ever , which is not the strategy that guarantees player 1 the payoff 1 . we now turn to the case in which payoff functions take more than two values . the next example shows that in this case the myopic strategy and the memoryless revision rules may lead the player astray . [ badcase ] player 1 is the only player in the game in figure [ f : badcase ] . ( figure [ f : badcase ] : game tree . ) in this game player 1 can guarantee a payoff of 10 , and therefore we expect a learning process to lead player 1 to this payoff . but , no reasonable restriction on the initial valuation can guarantee that the learning process induced by the myopic strategy and the memoryless revision results in the payoff 10 in the long run . for example , for any constant initial valuation , there is a positive probability that the valuation for is obtained , which is absorbing . we can not state for general payoff functions any theorem analogous to theorem [ main ] or even a weaker version of this theorem . but something meaningful can be stated when _ all _ players play the repeated game according to the myopic strategy and the memoryless revision rules . we say that game is _ generic _ if for every player and for every pair of distinct terminal nodes and , we have . [ generic ] let be a generic game . assume that each player plays according to the myopic strategy rule and uses the memoryless revision rule . then for any initial valuation profile , with probability 1 , there is a time after which the same terminal node is reached in each round . the limit plays guaranteed by this theorem depend on the initial valuations and have no special structure in general . moreover , it is obvious that for any terminal node there are initial valuations that guarantee that this terminal node is reached in all rounds . we return , now , to the case where only one player learns by reinforcement . in order to prevent a player from being paid an inferior payoff forever , like in example [ badcase ] , we change the strategy rule . we allow for exploratory moves that remind her of all possible payoffs in the game , so that she is not stuck in a bad valuation . assume , then , that having a certain valuation , the player opts for the highest valued nodes , but still allows for other nodes with a small probability . such a rule guarantees that player in example [ badcase ] will never be stuck in the valuation . we introduce this new rule formally . this rule associates with each valuation the strategy , where for each node , . here , is the strategy associated with by the myopic strategy rule , and is the strategy that uniformly selects one of the moves at . unfortunately , adding exploratory moves does not help the player to achieve 10 in the long run , as we show now . assume that the initial valuation of and is 10 and correspondingly , and the valuation of the first two moves is also favorable : . we assume now that in each of the two nodes player 1 chooses the higher valued node with probability and the other with probability . the valuation of and can not change over time . the valuation of form an ergodic markov chain with the two states . thus , for example , the probability of transition from to itself occurs when the player chooses either and , with probability , or with probability , which sum to . the following is the transition matrix of this markov chain .
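writing the exploration probability as $\epsilon$ ( our notation , since the symbol is not shown here ) , the transition probabilities just described , staying with probability $(1-\epsilon)^{2}+\epsilon$ and switching with the complementary probability $\epsilon(1-\epsilon)$ , correspond to the symmetric matrix

$$
P \;=\;
\begin{pmatrix}
(1-\epsilon)^{2}+\epsilon & \epsilon(1-\epsilon)\\
\epsilon(1-\epsilon) & (1-\epsilon)^{2}+\epsilon
\end{pmatrix} ,
$$

whose rows sum to 1 and whose stationary distribution is ( 1/2 , 1/2 ) , consistent with the claim that follows .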
the two states and are symmetric and therefore the stationary probability of each is 1/2 . thus , the player is paid 10 and 2 , half of the time each . note that the exploratory moves are required because the payoff function has more than two values . however , the failure to achieve the payoff 10 after introducing the -exploratory myopic strategy rule is the result of this rule , and has nothing to do with the number of values of the payoff function . that is , even in a win - lose game , a player who has a winning strategy may fail to guarantee a win in the long run by playing according to the rules of -exploratory myopic strategy and memoryless revision . thus , the introduction of the -exploratory myopic strategy rule forces us also to strengthen the revision rule as follows . for a node , and a history , if the node was never reached in , then . else , let be the times at which was reached in , then . we state , now , that by using little exploration , and averaging revision , player can guarantee to be close to his individually rational ( maxmin ) payoff in . [ maxmin ] let be a super strategy such that is the valuation super strategy induced by the -exploratory myopic strategy and the averaging revision rules . denote by the distribution over histories in induced by . let be s individually rational payoff in . then for every there exists such that for every , for -almost all infinite histories , . we consider now the case where all players learn to play , using the -exploratory myopic strategy and the averaging revision rules . we show that in such a case , in the long run , the players' strategy in the stage game is close to a perfect equilibrium . we assume for simplicity that the game has a unique perfect equilibrium ( which is true generically ) . [ perfect ] assume that has a unique perfect equilibrium . let be the super strategy such that for each , is the valuation super strategy induced by the -exploratory myopic strategy , and the averaging revision rules . let be the distribution over histories induced by . then there exists , such that for all , for -almost all infinite histories , there exists , such that for all , , for each player and node . we prove all the theorems by induction on the depth of the game tree . for this we need to be able to deduce properties of from properties of repeated games of stage games which are subgames of . this can be more naturally done when we consider a wider class of repeated games which we call _ stochastic repeated games_. within this class the repeated game of can be imbedded in the repeated game of , thus enabling us to make the required deductions . let be a countable set of states which also includes an _ end state _ . we consider a game in which the game is played repeatedly . before each round a state from is selected according to a probability distribution which depends on the history of the previous terminal nodes and states . when the state is realized the game ends . the selected state is known to the players . the strategy played in each round depends on the history of the terminal nodes and states . we now describe formally . * histories . * the set of infinite histories in is . for , the set of finite histories of rounds is , and the set of preplay histories of rounds is . denote and . the subset of of histories that terminate with is denoted by . for and denote by the history in which consists of the first rounds in . for finite and infinite histories we denote by the sequence of terminal nodes in . * transition probabilities . *
for each , is a probability distribution on . for , is the probability of transition to state after history . the probability that the game ends after is . * super strategies . * after rounds the player observes the history of pairs of a state and a terminal node , and the state that follows them , and then plays . thus , a super strategy for player is a function from to s strategies in . we denote by the probability of reaching terminal node when is played . * the super play distribution . * the super strategy induces the _ super play distribution _ which is a probability distribution over . it is the unique extension of the distribution over finite histories which satisfies for , and for . for a node in , denote by the subgame starting at . fix a super strategy profile in and the induced super play distribution on . in what follows we describe a stochastic super game , in which the stage game is . for this we need to define the state space . we tag histories and states in the game , as well as terminal nodes in . our purpose in this construction is to imbed in . the idea is to regard these rounds in a history in in which node is not reached as states in . let be defined as the set of all , such that node is never reached in . obviously , subsumes , and in particular includes the end state . note that the set of infinite histories in can be naturally viewed as a subset of , as a subset of , and as a subset of . we use this fact to define the transition probability distribution in as follows . for any in and with , where is the probability that node is reached under the strategy profile . for , , where consists of all histories with initial segment such that is never reached after this initial segment . note that is the probability of all histories in that start with and are followed by a terminal node of the game . these events and the event described above form a partition of , and therefore is a probability distribution . [ thesamedistribution ] define a super strategy profile in , by for each , where the right - hand side is the restriction of to . then , the restriction of to coincides with the super play probability distribution , induced by . it is enough to show that and coincide on . the proof is by induction on the length of . suppose and consider the history . then , by the definition of the super play distribution ( [ playdistribution ] ) and ( [ preplaydistribution ] ) , . by the induction hypothesis and the definitions of in ( [ transitionsub ] ) , the right - hand side is . by the definition of in ( [ strategysub ] ) , this is just . the right - hand side , in turn , is just . [ thesamestrategy ] suppose that s strategy in , , is the valuation super strategy starting with , and using either the myopic strategy and the memoryless revision rules , or the -exploratory myopic strategy and the averaging revision rules . then the induced strategy in , , is the valuation super strategy starting with the restriction of to the subgame , following the corresponding rules . the valuation super strategy in , starting with , requires that after history , strategy is played . here , is the sequence of all terminal nodes in , which consists of terminal nodes in . these are also all the terminal nodes of , in , when the latter is viewed as a history in . when is considered as a history in , then the strategy is , where is the sequence of all terminal nodes in . is the restriction of to . but along the history , the valuation of nodes in the game does not change in rounds in which terminal nodes which are _ not _ in are reached .
therefore , and are the same . the game is in particular a stochastic repeated game , where there is only one state , besides , and transition to ( that is , termination of the game ) has null probability . we prove all three theorems for the wider class of stochastic repeated games . the theorems can be stated verbatim for this wider class of games , with one obvious change : any claim about almost all histories should be replaced by a corresponding claim for almost all _ infinite _ histories . all the theorems are proved by induction on the depth of the game . the proofs for games of depth 0 ( that is , games in which payoffs are determined in the root , with no moves ) are straightforward and are omitted . in all the proofs , is the set of all the immediate successors of the root . assume that the claim of the theorem holds for all the subgames of . we examine first the case that the first player is not . by the stipulation of the theorem , player can guarantee payoff 1 in each of the games for . consider now the game , the super strategy profile , and the induced super play distribution . by the induction hypothesis , and claim 2 , for each , for -almost all infinite histories there is a time after which player is paid 1 . in view of claim 1 , for -almost all histories in in which is reached infinitely many times , there exists a time after which player is paid 1 , whenever is reached . consider now a nonempty subset of . let be the set of infinite histories in in which node is reached infinitely many times iff . then , for -almost all histories in there is a time after which player is paid 1 . the events , when ranges over all nonempty subsets of , form a partition of the set of all infinite histories , which completes the proof in this case . consider now the case that is the first player in the game . in this case there is at least one subgame in which can guarantee the payoff 1 . assume without loss of generality that this holds for . for a history denote by the random variable that takes as values the subset of the nodes in that have a positive valuation after rounds . when is not empty , then chooses at , with probability 1 , one of the nodes in . as a result the valuation of this node after the next round is 0 or 1 , while the valuation of all other nodes does not change . therefore we conclude that is weakly decreasing when . that is , . let be the event that for only finitely many s . then , for -almost all histories in there exists time such that is decreasing for . hence , for -almost all histories in there is a nonempty subset of , and time , such that for . but in order for the set of nodes in with positive valuation not to change after , player must be paid 1 in each round after . thus we only need to show that . consider the event that is reached in infinitely many rounds . as proved before by the induction hypothesis , for -almost all histories in , there exists , such that the valuation of is 1 , for each round in which is reached . the valuation of this node does not change in rounds in which it is not reached . thus , -almost surely . we conclude that for -almost all histories in there is a time , such that is not reached after time . but -almost surely for such histories there are infinitely many s in which the valuation of all nodes in is 0 . in each such history , the probability that is not reached is , which establishes .
by the induction hypothesis and claim 1 , for each of the supergames , , for -almost all infinite histories in this super game , there is a time after which the same terminal node is reached . by claim 2 , for -almost all histories of which recurs infinitely many times there is a time after which s valuation of this node is constantly the payoff of the same terminal node of . it is enough that we show that for -almost all infinite histories in , there is a time after which the same node from is selected with probability 1 at the root . suppose that this is not the case . then there must be a set of histories with , two nodes and , and two terminal nodes and in and correspondingly , that recur infinitely many times in this set . therefore , for -almost all histories in , s valuation of and is and . since is generic , we may assume that . thus , for -almost all histories in , there is a time after which the conditional probability of given the history is 0 , which is a contradiction . we denote by , s average payoff at time in history . fix a subgame . histories in the game are tagged . thus , is s average payoff at time in history in . let be a history in in which recurs infinitely many times at . let . denote by s average payoff until _ at the times was reached _ , that is , . the history can be viewed as an infinite history in . moreover , for each , . by the definition of , it follows that if there exists such that for each , , then there exists such that for each , . by the induction hypothesis there is , such that for all , for -almost all histories there exists such an . thus , by claims [ thesamedistribution ] and [ thesamestrategy ] , there exists , such that for all and , for -almost all histories in in which recurs infinitely many times , there exists a time such that for each , . let be a nonempty subset of , and let be the set of all infinite histories in which the set of nodes that recurs infinitely many times is . consider a history in , with . let be the number of times is reached in until time . then , , where the inequality holds because , and for , . thus for -almost all histories in , . since this is true for all , the conclusion of the theorem follows for all infinite histories . next , we examine the case that is the first player . note that in this case , for each node , . observe , also , that for -almost all infinite histories in , each of the subgames recurs infinitely many times in . indeed , after each finite history , each of the games is selected by with probability at least . thus , the event that one of these games is played only finitely many times has probability 0 . let be a binary random variable over histories such that for histories in which the node selected by player at time satisfies , and otherwise . [ epsilon1 ] there exists such that for all and any , for -almost all infinite histories in there is time such that for all , . the inequality ( [ convergence ] ) follows from the induction hypothesis . for ( [ nextround ] ) , note that if is not reached in round then the difference in ( [ nextround ] ) is 0 . if is reached then , where is the number of times was reached in and is the payoff in round . but , goes to infinity with , and thus ( [ nextround ] ) holds for large enough . for ( [ expectation ] ) , observe that ( [ convergence ] ) implies , as . then , by ( [ nextround ] ) , for each history such that . therefore , after , player chooses , with probability at least , a node that satisfies ( [ jzero ] ) , which shows ( [ expectation ] ) .
the information about the conditional expectations in ( [ expectation ] ) has a simple implication for the averages of . to see it we use the following convergence theorem from loève ( 1963 ) , p. 387 . consider now the restriction of the random variables to the set of infinite histories with , conditioned on this space . from ( [ expectation ] ) it follows that on this space , almost surely . therefore , almost surely . this is so , because the field generated by the random variables is coarser than the field generated by histories . since condition ( [ variances ] ) holds for , it follows by the stability theorem that for -almost all infinite histories , . by the definition of , , where is the minimal payoff in . if we choose such that , then by ( [ limit ] ) , for each , for -almost all infinite histories . assume that the claim of the theorem holds for all the subgames of . we denote by the restriction of the valuation to , and by , s perfect equilibrium strategy there , which is also the restriction of to this game . then there exists such that for all , node , and player , for almost all infinite histories of there exists such that for all , for each node in , and . the equality ( [ perfect - induction ] ) is the induction hypothesis . consider a history for which ( [ perfect - induction ] ) holds . in the round that follows , the perfect equilibrium path in is played with probability at least , where is the depth of . player's payoff in this path is . thus for small enough , ( [ perfect - expectation ] ) holds . by claims [ thesamedistribution ] and [ thesamestrategy ] it follows from ( [ perfect - induction ] ) that for , for all histories in , there exists such that for all the strategies played in each of the games is the perfect equilibrium of . thus , to complete the proof it is enough to show that in addition , at the root , chooses in these rounds , with probability , the node for which . for this we need to show that s valuation of is higher than the valuation of all other nodes . to show it , let be the difference between and the second highest payoffs . by the assumption of the uniqueness of the perfect equilibrium , . note that as all players' strategies are fixed for , exists . using the stability theorem , as in theorem [ maxmin ] , we conclude that exists , and by ( [ perfect - expectation ] ) the inequality holds , where is s average payoff until round of history , in the game . as in the proof of theorem [ maxmin ] , it follows that for -almost all infinite histories in , . but then , for -almost all infinite histories there exists such that for all , is the highest valuation of all the nodes .
a valuation for a player in a game in extensive form is an assignment of numeric values to the player's moves . the valuation reflects the desirability of the moves . we assume a myopic player , who chooses a move with the highest valuation . valuations can also be revised , and hopefully improved , after each play of the game . here , a very simple valuation revision is considered , in which the moves made in a play are assigned the payoff obtained in the play . we show that by adopting such a learning process a player who has a winning strategy in a win - lose game can almost surely guarantee a win in a repeated game . when a player has more than two payoffs , a more elaborate learning procedure is required . we consider one that associates with each move the average payoff in the rounds in which this move was made . when all players adopt this learning procedure , with some perturbations , then , with probability 1 , strategies that are close to subgame perfect equilibrium are played after some time . a single player who adopts this procedure can guarantee only her individually rational payoff .
damage induced by mechanical or hydraulic perturbations influences the permeability of the rock mass , with significant effects on the pore pressure distribution . modifications in the pore pressure , in turn , affect the mechanical response of the material by poromechanical coupling . according to experimental observations at the microscopic scale , fracture evolution in rocks can be interpreted essentially as a progressive damage accumulation process , characterized by nucleation , growth and coalescence of numerous cracks following changes in the external load or in the internal pore pressure . in the particular case of hydraulic fracturing , a stimulation technique used in the petroleum industry to increase the oil / gas production in low permeability reservoirs , fractures are produced by the artificial increase of the fluid pressure in a borehole . from the theoretical point of view , it has been observed that the success of hydraulic fracturing is related to : i ) the creation of a dense system of hydraulic cracks with limited spacing ; and ii ) the prevention or mitigation of localization instabilities . models of distributed damage and permeability based on abstract damage mechanics are , of necessity , empirical in nature and the precise meaning and geometry of the damage variables often remain undefined or are associated with unrealistic microstructures such as distributions of isolated microcracks . in addition , the evolution of the damage variables and their relation to the deformation , stress and permeability of the rock mass is described by means of empirical and phenomenological laws that represent , at best , enlightened data fits . however , the permeability enhancement due to extensive fracturing of a rock mass depends sensitively on precise details of the topology , which needs to be _ connected _ , and geometry of the crack set , including the orientation and spacing of the cracks . in addition , the coupled hydro - mechanical response of the rock , especially when complex loading conditions and histories are of concern , is much too complex to yield to empirical data fitting . based on these considerations , in this paper we endeavor to develop a model of distributed fracturing of rock masses , and the attendant permeability enhancement thereof , based on an _ explicit micromechanical construction _ of connected patterns of cracks , or faults . the approach extends the multi - scale brittle damage material model introduced in , which is limited to mechanical damage . in contrast to abstract damage mechanics , the fracture patterns that form the basis of the theory are _ explicit _ and the rock mass undergoes deformations that are compatible and remain in static equilibrium down to the micromechanical level . the fracture patterns are not arbitrary : they are shown in to be optimal as regards their ability to relieve deviatoric stresses , and the inception , orientation and spacing of the fractures derive rigorously from energetic considerations . following inception , fractures can deform by frictional sliding or undergo opening . the extension of the theory presented here additionally accounts for fluid pressure by recourse to terzaghi's effective stress principle . when the fluid pressure is sufficiently high , existing fractures can open , thereby contributing to the permeability of the rock mass .
by virtue of the explicit and connected nature of the predicted fractures , the attendant permeability enhancement can be estimated using simple relations from standard lubrication theory , resulting in a fully - coupled hydro - mechanical model . the paper is organized as follows . we begin in section [ sec : hydromechanics ] with illustrating the hydromechanical framework , recalling the basic equations and terzaghi's effective stress principle . in section [ sec : brittledamage ] we recall the main features of the dry material model developed in , introducing a pressure dependent behavior at fault inception . in section [ sec : permeability ] we derive analytically the permeability associated with the presence of faults in the brittle damage material model . in section [ sec : examples ] we validate the material model by means of comparison with experimental results taken from the literature . in porous media saturated with freely moving fluids , deterioration of mechanical and hydraulic properties of rock masses and subsequent problems are closely related to changes in the stress state , formation of new cracks , and increase of permeability . in fully saturated rocks , fluid and solid phases are fully interconnected and the interaction between fluid and rock is characterized by coupled diffusion - deformation mechanisms that convey an apparent time - dependent character to the mechanical properties of the system . the two governing equations of the coupled problem are the linear momentum balance and the continuity equation ( mass conservation ) . the kinematic quantities that characterize this picture are the porous solid displacement and the rate of fluid volume per unit area . hydro - mechanical coupling arises from the influence of the mechanical variables ( stress , strain and displacement ) on the continuity equation , where the primary variable is the fluid pressure , and from the influence of the hydraulic variables ( pore pressure and seepage velocity ) on the equilibrium equations , where the primary variables are the displacements . the energy of a fluid flowing in a porous medium is traditionally measured in terms of total hydraulic head , that for slow flowing fluids reads , where is the fluid density and the gravitational acceleration . the pressure head is the equivalent gauge pressure of a column of water at the base of a piezometer . the elevation head expresses the relative potential energy . the kinetic energy contribution , , is disregarded , given the small velocity of the fluid . flow across packed porous media is generally characterized by a laminar regime ( reynolds number re < 1 ) and by a drop of the hydraulic head in the direction of the flow . analytical models of fluid flow in rocks use constitutive relations that link the average fluid velocity across the medium to the hydraulic head drop . as a representative example of a constitutive relation in material form , darcy's law states that the rate relative to the solid skeleton of the discharge per unit area of porous media , , is proportional to the hydraulic head gradient and inversely proportional to the fluid viscosity , where denotes the material permeability tensor . permeability measures the ability for fluids ( gas or liquid ) to flow through a porous solid material ; it is intrinsically related to the void topology and does not account for the properties of the fluid .
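as a small illustration of darcy-type flow , the sketch below evaluates a discharge per unit area from a permeability tensor and a pore-pressure gradient ; the pressure-gradient form q = -(1/mu) K grad(p) , with gravity neglected , and all symbol names are our own assumptions rather than the exact stripped expression .

```python
import numpy as np

def darcy_flux(K, grad_p, mu):
    """Discharge per unit area, q = -(1/mu) K . grad(p)."""
    return -np.asarray(K) @ np.asarray(grad_p) / mu

# example: isotropic permeability 1e-15 m^2, water (mu ~ 1e-3 Pa s),
# pressure gradient of 10 kPa/m along x
K = 1e-15 * np.eye(3)
q = darcy_flux(K, [1.0e4, 0.0, 0.0], mu=1.0e-3)
print(q)   # flow is opposite to the pressure gradient
```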
in anisotropic media , permeability is a symmetric ( consequence of the onsager reciprocal relations ) and positive definite ( a fluid can not flow against the pressure drop ) second order tensor . the real eigenvalues of the permeability tensor are the principal permeabilities , and the corresponding eigenvectors indicate the principal directions of flow , i. e. , the directions where flow is parallel to the pressure drop . clearly , fractures modify the permeability tensor , introducing new preferential directions for fluid flow . although affected by many factors , in non fractured materials permeability is primarily related to the rock porosity ( or void fraction ) , expressing the ratio between the volume of the voids and the total volume , which also includes the volume of solids . in finite kinematics , the porosity is naturally associated with the jacobian of the deformation gradient . by denoting the porosity of the stress - free material with , it holds ( see appendix a for details of the derivation , also cf . ) . note that , for very low values of and , eq . may provide negative values for , thus a zero lower - bound must be enforced in calculations . the rate of fluid volume is linked to the porosity through the continuity equation , which for partially saturated voids in material form reads , where is the degree of saturation ( i. e. , the fraction of the fluid volume ) , the material divergence operator , and the partial derivative with respect to time . under the rather standard assumption of fully saturated voids and incompressible fluid , the continuity equation becomes . in the absence of any occluded porosity , the solid grains forming the matrix generally undergo negligible volume changes . in keeping with standard assumptions in geomechanics , we consider the solid phase of the matrix incompressible , thus we regard the change of the volume of the matrix as a change of the volume of the voids of the matrix . this assumption is consistent with the adoption of terzaghi's theory , used here in lieu of the more sophisticated biot theory , chosen for the sake of simplicity and to limit the number of parameters . moreover , we consider fully saturated media . in finite kinematics the deformation is measured by the deformation gradient , where and denote the spatial and material coordinates , respectively . the stress measure work - conjugate to is the first piola - kirchhoff tensor . the linear momentum balance reads , where is the material body force vector . given the material traction , the material boundary condition becomes . in keeping with terzaghi's principle of effective stress we write , where is the determinant of . the effective stress and the deformation gradient define the constitutive law . the brittle damage model presented in is characterized by a homogeneous matrix where nested microstructures of different length scales are embedded . at each level ( or rank ) of the nested architecture , microstructures assume the form of families of cohesive faults , characterized by an orientation and a uniform spacing , see fig . [ fig : fpa ] .
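a compact numerical sketch of the porosity–jacobian relation and of the effective-stress split is given below ; it assumes the forms phi = 1 - ( 1 - phi0 ) / J for an incompressible solid phase and P = P_eff - p J F^{-T} for the terzaghi-type split in finite kinematics , and all symbol names are ours .

```python
import numpy as np

def porosity(F, phi0):
    """phi = 1 - (1 - phi0) / J for an incompressible solid phase,
    with the zero lower bound mentioned in the text enforced."""
    J = np.linalg.det(F)
    return max(0.0, 1.0 - (1.0 - phi0) / J)

def total_pk1_stress(P_eff, F, p):
    """Assumed terzaghi-type split in finite kinematics:
    total first Piola-Kirchhoff stress = effective part - p * J * F^{-T}."""
    J = np.linalg.det(F)
    return P_eff - p * J * np.linalg.inv(F).T

# example: mild volumetric compaction of a matrix with 20% initial porosity
F = 0.99 * np.eye(3)
print(porosity(F, phi0=0.20))                        # slightly below 0.20
print(total_pk1_stress(np.zeros((3, 3)), F, p=1.0e6))
```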
in keeping with well established mathematical procedures used to treat free discontinuity problems , the brittle damage constitutive model is derived through a thermodynamically consistent approach , by assuming the existence of a free energy density which accounts for reversible and dissipative behaviors of the material . the key to the brittle damage model is given by the kinematic assumptions . we begin by considering the particular case of a single family of fault planes of normal and spacing , and later extend the behavior to recursive nested families . the total deformation gradient of the material is assumed to decompose multiplicatively into a part pertaining to the uniform deformation of the matrix , and a part describing the discontinuous kinematics of the cohesive faults , i. e. , . the discontinuous deformation gradient is related to the kinematic activity of the faults , expressed through an opening displacement acting on each fault of the family as ( see fig . [ fig : fpb ] ) . once and are supplied , and are in one - to - one correspondence . the fractured material , in turn , may accommodate a second family of faults : this decomposition can be applied recursively for as many levels as necessary ; the innermost level will maintain a purely elastic behavior . the constitutive behavior of the brittle damage model follows from the introduction of a free energy density , sum of two contributions with full separation of variables , where is the strain - energy density per unit volume of the matrix , is the cohesive energy per unit surface of faults , suitably divided by the length to provide a specific energy per unit of volume , is the displacement jump , and is a scalar internal variable used to enforce irreversibility . note that the separation of the variables excludes strong coupling between the two energies . the operative form of the energy densities and can be selected freely according to the particular material considered . in the present model , the cohesive energy of a fault with orientation is assumed to depend on an effective scalar opening displacement defined as , where is the norm of the opening displacement and a material parameter measuring the ratio between the shear and tensile strengths of the material . it follows that the cohesive behavior is expressed in terms of an effective cohesive law , dependent on the effective opening displacement only . the effective traction is given by . in applications , we use a simple effective cohesive law , visualized in fig . [ fig : linearcohesive ] . during the first opening , the cohesive law follows a linearly decreasing envelope , i. e. , , where is the tensile resistance , the critical opening displacement corresponding to the full decohesion of the faults , and is the critical energy release rate of the material . tractions acting on the cohesive surface follow as , cf . . in the derivation of the constitutive model it is necessary to introduce the configurational force conjugate to , given by . fracture is an irreversible process , thus decohered faults permanently damage the material . the extent of damage is expressed through the maximum attained effective opening displacement . irreversibility is enforced by assuming unloading and reloading to / from the origin , see fig . [ fig : linearcohesive ] , according to the kinetic equation . damage irreversibility is a constraint of the brittle damage model , enforced in calculations through the growth condition .
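a minimal numerical sketch of such a linearly decreasing cohesive law , with unloading and reloading through the origin , is given below ; the closed form t = sigma_c ( 1 - delta / delta_c ) on the envelope and all variable names are our own reading of the law , not the paper's notation .

```python
def cohesive_traction(delta, q, sigma_c, delta_c):
    """Effective traction for a linearly decreasing cohesive law.

    delta   : current effective opening displacement
    q       : maximum opening attained so far (history / damage variable)
    sigma_c : tensile resistance (traction at zero opening)
    delta_c : critical opening at full decohesion
    Returns (traction, updated q).
    """
    q_new = max(q, delta)
    if q_new >= delta_c:                      # fully decohered fault
        return 0.0, q_new
    t_envelope = sigma_c * (1.0 - q_new / delta_c)
    if delta >= q_new:                        # loading on the envelope
        return t_envelope, q_new
    # unloading / reloading: straight line through the origin
    return t_envelope * delta / q_new, q_new

# example: load to half the critical opening, then unload to a quarter of it
t1, q = cohesive_traction(0.5, 0.0, sigma_c=10.0, delta_c=1.0)
t2, q = cohesive_traction(0.25, q, sigma_c=10.0, delta_c=1.0)
print(t1, t2)   # 5.0 on the envelope, 2.5 on the unloading branch
```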
upon fault closure the material model has to satisfy the impenetrability constraint , i. e. , the component of the opening displacement along the normal to the faults can not be negative , thus . more importantly , the model accounts for internal friction , a major dissipation mechanism in geological applications . we assume that friction operates at the faults concurrently with cohesion . clearly , friction can become the sole dissipative mechanism if the faults lose cohesion completely upon the attainment of the critical opening displacement . in considering friction , we resort to the approach proposed in pandolfi et al . and make use of a dual dissipation potential per unit area , where denotes the rate of the fault opening displacement . the behavior of irreversible materials with friction can be characterized variationally by recourse to time discretization , where a process of deformation is analyzed at distinct successive times , , , . we assume that the state of the material at time ( and ) is known and the total deformation at time is assigned . the problem is to determine the state of the material at time , accounting for material constraints and dissipation . we begin by considering a material that already contains a family of faults of spacing and orientation . following , the variational characterization of the material model requires obtaining an effective , incremental , strain - energy density by evaluating the infimum with respect to and of the extended constrained energy defined as . the subindex used in signifies the dependence on the initial state . the irreversibility and the impenetrability constraints render the effective strain - energy density dependent on the initial conditions at time , and account for all the inelastic behaviors , such as damage , hysteresis , and path dependency . the constraints of the minimum problem can be enforced by means of two lagrange multipliers and , cf . . optimization leads to a system of four equations , that provide , , , and . thus , acts as a potential for the first piola - kirchhoff stress tensor at time , i. e. , , as the stable equilibrium configurations are the minimizers of the corresponding effective energy . note that the variational formulation eq . of fault friction is non - standard in that it results in an incremental minimization problem . in particular , the tangent stiffness corresponding to the incremental equilibrium problem is symmetric , contrary to what is generally expected of non - associative materials . in calculations we assume rate independent coulomb friction and , for the linearly decreasing cohesive model , we set , where is the coefficient of friction and we denote by the symmetric second piola - kirchhoff stress tensor of the matrix , of components . the dual dissipation potential in eq . is rate - independent , i. e. , is positively homogeneous of degree in , and proportional to the contact pressure . the fault geometrical features and , which are defined by the surrounding stress state , can be determined with the aid of the time - discretized variational formulation , as described in . the solution presented in addressed pressure independent materials under an extensive stress state .
here we provide a new solution , specific for stress states characterized by overall compression and for pressure sensitive materials . suppose that the material is undamaged at time and that we are given the deformation at time . we test two end states of the material , one with faults and another without faults , and choose the end state which results in the lowest incremental energy density . the time - discretized variational formulation allows us to ascertain whether the insertion of faults is energetically favorable , and the optimal orientation of the faults in the fractured material . the orientation of the faults and the remaining state variables are obtained variationally from an extended constrained minimum problem , i. e. , . constrained optimization leads to a set of six equations , whose solution provides the optimal orientation , , , and three lagrangian multipliers . for stress states in overall extension faults simply undergo opening . thus , the frictional dissipation is null , and the resulting normal aligns with the direction of the maximum principal value of , . with reference to pressure dependent materials in overall compressive states , the two optimization equations involving the normal ( we drop the index for the sake of clarity ) become :

$$\begin{aligned}
& \frac{\partial}{\partial \delta_i}\left[ a + \frac{\Delta t}{l}\,\psi^{*} + \lambda_1\,\boldsymbol{\delta}\cdot\mathbf{n} + \lambda_3\,|\mathbf{n}|^{2} \right]
= - \frac{n_j}{l + \boldsymbol{\delta}\cdot\mathbf{n}}\, s^{\rm m}_{ji} + \frac{1}{l}\,\frac{\partial\phi}{\partial \delta_i} + \frac{\Delta t}{l}\,\frac{\partial \psi^{*}}{\partial \delta_i} + \lambda_1 n_i = 0 \, , \\
& \frac{\partial}{\partial n_i}\left[ a + \frac{\Delta t}{l}\,\psi^{*} + \lambda_1\,\boldsymbol{\delta}\cdot\mathbf{n} + \lambda_3\,|\mathbf{n}|^{2} \right]
= - \frac{\delta_j}{l + \boldsymbol{\delta}\cdot\mathbf{n}}\, s^{\rm m}_{ji} + \frac{1}{l}\,\frac{\partial \phi}{\partial n_i} + \frac{\Delta t}{l}\,\frac{\partial \psi^{*}}{\partial n_i} + \lambda_1 \delta_i + 2 \lambda_3 n_i = 0 \, .
\end{aligned}$$

under a compressive stress , incipient faults are necessarily closed , , and can deform only by sliding , i. e. , . we denote with the unit vector in the direction of . thus , the dissipation potential can be written as , being null at the inception , and eqs . - become . multiplying the first of these equations by and the second by we obtain the identities . the resulting equations imply that is a plane where the matrix shear stress satisfies the mohr - coulomb failure criterion , in the classical form , where must be intended equal to at fault inception . thus , when faults form , corresponds to the cohesion ( shear resistance at null normal stress ) and to the friction coefficient of the material . this sheds light on the meaning of the parameter that , for pressure sensitive materials , identifies with the friction coefficient . finally , eq . provides the lagrangian multiplier as . likewise , the length can be computed variationally by accounting for the misfit energy contained in the boundary layers that form at the junctions between faults and a confining boundary . in the model , the compatibility between the faults and their container is satisfied only on average , and this gives rise to boundary layers that penetrate into the faulted region to a certain depth . the addition to the energy furnishes a selection mechanism among all possible microstructures leading to a relaxed energy , cf .
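as a small illustration of the inception criterion , the sketch below scans candidate fault orientations and checks the mohr - coulomb condition |tau| >= c + mu_f * ( - sigma_n ) on each plane ( compression negative ) ; the brute-force search and all symbol names are our own simplification for illustration , not the variational update used in the model .

```python
import numpy as np

def mohr_coulomb_inception(sigma, cohesion, mu_f, n_trials=2000):
    """Return the trial plane normal maximizing |tau| - (c + mu_f * (-sigma_n)).

    sigma : (3, 3) stress tensor of the matrix, tension positive.
    A positive returned excess means the Mohr-Coulomb criterion is met
    on that plane, so fault inception is admissible there.
    """
    rng = np.random.default_rng(0)
    best_excess, best_n = -np.inf, None
    for _ in range(n_trials):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        t = sigma @ n                                   # traction on the plane
        sigma_n = float(n @ t)                          # normal component
        tau = float(np.linalg.norm(t - sigma_n * n))    # shear component
        excess = tau - (cohesion + mu_f * max(0.0, -sigma_n))
        if excess > best_excess:
            best_excess, best_n = excess, n
    return best_excess, best_n

# example: triaxial compression, 10 MPa confinement, 60 MPa axial stress
sigma = np.diag([-10e6, -10e6, -60e6])
excess, n = mohr_coulomb_inception(sigma, cohesion=5e6, mu_f=0.6)
print(excess > 0, n)    # criterion met on an inclined shear plane
```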
so far we have been considering either an intact material or a single family of parallel faults . the material with a single fault family is referred to as a rank-1 faulting pattern material . more complex microstructures can be generated effectively by applying the previous construction recursively . in the first level of recursion , we simply replace the elastic strain - energy density of the matrix by , i. e. , by the effective strain - energy density of a rank-1 faulting pattern . this substitution can now be iterated , resulting in a recursive definition of . the recursion stops when the matrix between the faults remains elastic . the level of recursion is the rank of the microstructure . the resulting microstructures consist of faults within faults and are shown in fig . [ fig : faults ] ( a ) . note that the implementation of the model in a numerical code is straightforward , and can be easily obtained by using recursive calls . according to the particular loading history , at the time and at the generic point the material is characterized by a particular microstructure with several , determined so as to respect equilibrium and compatibility conditions . the model is therefore able to account for variable opening of the faults . permeability is an overall important physical property of porous media that is very difficult to characterize theoretically . for simple and structured models of porous media , permeability can be estimated through analytical relationships that apply only under a narrow range of conditions . the class of kozeny - carman type models collects simple relations that , under the assumption of laminar flow of the pore fluid , link the permeability to the microstructural characteristics of the porous medium . the original kozeny - carman relation reads , where is a scalar permeability , an empirical geometric parameter , the ratio of the exposed surface of the channels to the volume of the solids ( also called specific internal surface area ) , and the tortuosity , related to the ratio between , average length of the channels , and , macroscopic length of the flow path . the estimation of the shape coefficients and has prompted active research . the complexity of the relationship between the permeability tensor and a scalar property such as the porosity in rocks has been clearly pointed out . the scalar nature of variables and parameters used in analytical models leads to scalar definitions , and the correct tensor nature of the permeability is disregarded . therefore , such models are not meaningful if applied to soils characterized by the presence of sedimentation layers or fissures . moreover , these models do not allow for the modification of the porous medium microstructure due to fluid - porous matrix interactions , or by the presence of a variable confining pressure . in particular , permeability depends not only on the actual stress and on the strain during the loading history , but also on the evolution of the crack patterns , which is anisotropic in nature .
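for reference , a kozeny - carman estimate is easy to evaluate numerically ; the sketch below uses the common form k = phi^3 / ( c * tau^2 * S0^2 * ( 1 - phi )^2 ) , which we take as a plausible reading of the stripped relation , with all symbol names ours .

```python
import math

def kozeny_carman(phi, S0, c=2.5, tau=math.sqrt(2.0)):
    """Assumed form: k = phi^3 / (c * tau^2 * S0^2 * (1 - phi)^2).

    phi : porosity (-), S0 : specific internal surface area [1/m],
    c   : empirical shape parameter, tau : tortuosity.
    """
    return phi**3 / (c * tau**2 * S0**2 * (1.0 - phi)**2)

# example: 20% porosity, S0 = 1e5 1/m  ->  k of the order of 1e-13 m^2
print(kozeny_carman(0.20, 1.0e5))
```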
considering the presence of a single fault family , the permeability tensor for the fractured brittle damage model due to the sole presence of the faults can be directly derived from the particular fault geometry . the permeability of a particular geometry of parallel and equidistant faults has been examined by irmay . snow and parsons obtained expressions for anisotropic permeability , similar to the one described here , by considering networks of parallel fissures . we begin by recalling that the opening displacement decomposes into a normal and a sliding component , see fig . [ fig : jump ] , computed as : . let us assume that a fluid flows within the faults , filling the open layers of constant width . the average fluid flow , in laminar regime , will take place in the plane of the layer . according to the solution of the navier - stokes equation ( poiseuille's solution ) , the average velocity along the generic direction in the plane of the fault is , where is the hydraulic head gradient in the direction . the assumption of laminar flow through a crack has been widely used in the literature , cf . , e. g. , . by considering a porous medium made of several parallel faults of equal width , the discharge in the direction of the flow is , where is a measure of the material porosity due exclusively to the presence of faults . by comparing eqs . and , we obtain the permeability of the fractured material in direction as . the directional gradient can be expressed as the scalar product of the hydraulic gradient and the flow direction , so that the magnitude of the fluid velocity reads , and the average flow velocity vector , , becomes . the hydraulic discharge can be written as , thus the permeability tensor due to the presence of the faults derives as . to account for a generic direction of the flow in the layer of normal , in eq . we must replace the unit vector with the projection , reaching the expression . as a noteworthy feature of the brittle damage model , it follows that the permeability is described by an anisotropic tensor . if fault families are present in the porous medium , each characterized by a normal , a separation , and a normal opening displacement , the equivalent permeability is given by the sum of the corresponding permeabilities : . the model does not exclude the presence of an initial porosity , see eq . , and permeability , see eq . , of the intact matrix .
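a compact numerical sketch of this construction is given below ; for each family it assembles a parallel-plate ( poiseuille ) permeability of the assumed form k_f = ( w^3 / ( 12 L ) ) ( I - n (x) n ) , with w the normal opening , L the fault spacing , and n the family normal , and sums the families . the 1/12 cubic-law factor and all names are our assumptions about the exact constants in the stripped formulas .

```python
import numpy as np

def fault_permeability(families):
    """Equivalent permeability tensor of a set of parallel fault families.

    families : iterable of (n, w, L) with
        n : (3,) unit normal of the family
        w : normal opening displacement (aperture) [m]
        L : fault spacing [m]
    Each family contributes (w^3 / (12 L)) * (I - n (x) n); contributions add.
    """
    K = np.zeros((3, 3))
    for n, w, L in families:
        n = np.asarray(n, dtype=float)
        n /= np.linalg.norm(n)
        K += (w**3 / (12.0 * L)) * (np.eye(3) - np.outer(n, n))
    return K

# example: one open family, 0.1 mm aperture, 10 cm spacing
K = fault_permeability([(np.array([0.0, 0.0, 1.0]), 1.0e-4, 0.10)])
print(np.linalg.eigvalsh(K))   # two in-plane principal values, zero across the faults
```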
in this case , the resulting porosity and permeability will be given by the sum of the terms corresponding to the intact matrix and to the faults . in practical applications we assume an isotropic matrix permeability of kozeny - carman type , with the simplified form , where the constant accounts for shape coefficients . we observe that the hydraulic behavior of the brittle damage model is dependent on fracture orientation and spacing computed on the basis of the boundary conditions , and that its permeability can vary according to the kinematics of the faults . we remark that the solid phase incompressibility assumption adopted in the present model does not substantially affect the hydraulic behavior , mostly because the porosity of the matrix plays a minor role in the hydraulic conductivity of the material . in fact , in this model the porosity , and thus the permeability , is mostly attributable to the formation of faults , reducing the relevance of the matrix porosity . numerical calculations of the dynamic multiaxial compression experiments on sintered aluminum nitride ( aln ) of chen and ravichandran were presented in by way of validation of the dry mechanical aspects of the model . the model was shown to correctly predict the general trends regarding the experimentally observed damage patterns , as well as the brittle - to - ductile transition resulting under increasing confinement . therefore , in the present work we restrict validation to the hydro - mechanical aspects of the model . we describe selected examples of application of the porous damage model , starting from the response of the fully tridimensional dry model undergoing a loading that mimics a hydraulic fracturing process , and concluding with the validation of the model , reproducing a few representative experimental results on granite and sandstone . we specialize the strain energy density to a neo - hookean material extended to the compressible range , i. e. , , where and are the lamé coefficients , and is the determinant of . we study the response of the brittle damage model to the action of external loadings mimicking the in - field conditions observed during hydraulic fracturing procedures , and analyze the corresponding variation in permeability . we assume an intact material , with no pre - existent or natural faults , and limit our attention to the constitutive behavior . the material is characterized by the constants listed in table [ table : propertiesfracking ] . ( table [ table : rockproperties ] : rock material constants adopted in the illustrative examples ; entries omitted . ) we begin with the simulation of the triaxial tests on samples of lac du bonnet and beishan granites documented in . the tests consisted of the application of a confining pressure of 10 mpa , followed by an axial compressive load up to failure . experiments included the measurement of the permeability of the samples , limited to the pre - peak phase . we simulate the triaxial test with the brittle damage model and compare our numerical results with experiments . fig . [ fig : granites ] shows the deviatoric stress , versus axial and lateral deformations , and , respectively , and the permeability versus deviatoric stress .
during the simulated axial compression , both granites develop one family of faults in shear . the failure plane of the faults corresponds to the one predicted by the mohr - coulomb criterion , inclined at an angle with respect to the direction of maximum stress ( 21.8 for lac du bonnet and for beishan ) . the peak of resistance corresponds to the experimental values , but the brittle damage model predicts a post - peak behavior which is not available in the experimental papers . experiments show an initial reduction of the permeability , due to the compression of the matrix , followed by a marked increase when the samples begin to show a reduction of stiffness . by contrast , the brittle damage model predicts a constant permeability , which does not increase even after the formation of the shear faults . however , when the load becomes too high to be balanced by friction and the axial loading reduces , faults open and the permeability increases , showing a characteristic behavior often reported in the experimental literature , cf . and the numerous references therein . the model is able to capture the dependence of the permeability on porosity and on deformation mechanisms , observed typically in low porosity rocks , where additionally dilatancy is observed when rock fails by brittle faulting . indeed , microstructural observations have clarified that dilation of the pore volume is primarily due to stress - induced microcracking , which increases permeability by widening the apertures and enhancing the connectivity of the flow paths . in high porosity rocks , such as sandstones , the effect of stress on permeability is still far from being fully clarified . literature data report apparently contradictory observations in the brittle regime . among the triaxial experiments on berea sandstone with different confinement reported in , we selected three small confinement triaxial experiments , characterized by a softening stress - strain curve . pre - and post - peak porosity and permeability data are included in the experimental paper . we simulated the experimental tests at confining pressures of 5 , 10 and 40 mpa . experimental and numerical results are shown in fig . [ fig : sandstone ] . fig . [ fig : bereastress ] shows the deviatoric stress versus the axial deformation . simulations capture nicely the peak stress for the three tests , while the softening branch is not perfectly reproduced . fig . [ fig : bereaporo ] compares numerical and experimental porosity for the two tests at lower confinement pressure . in both simulation and experiment , porosity reduces progressively until the stress peak is reached , and grows during the softening phase , in correspondence with the reduction of the deviatoric stress . simulations predict qualitatively and quantitatively the variation of porosity during the test . contrariwise , the comparison between model predictions and experimental observations in terms of permeability evolution is not satisfactory , even from the qualitative point of view . in the experiments , permeability decreased markedly after the stress peak , showing a marked negative correlation between permeability and porosity changes , see fig . [ fig : bereapermexp ] .
experimental data on different sandstones are qualitatively similar , suggesting that permeability evolution as a function of porosity does not follow any systematic trend .a possible explanation of the observed permeability reduction in particular dilating sandstone is that microcracking dramatically increases the tortuosity of the pore space .the brittle damage model predicts a post - peak increase in permeability , see fig .[ fig : bereapermnum ] , which is opposite to the berea sandstone experiments , but in line with many experimental results on low permeability geomaterials , and is also in agreement with the simulations on granites discussed here .we have developed a model of distributed fracturing of rock masses , and the attendant permeability enhancement thereof , based on an explicit micromechanical construction resulting in complex connected patterns of cracks , or faults .the approach extends the multi - scale brittle damage material model introduced in , which was limited to mechanical damage .the fracture patterns that form the basis of the theory are not implied but explicitly defined and the rock mass undergoes throughout compatible deformations and remains in static equilibrium , not just on average at the macroscopic scale , but also the micromechanical level .the sequential faulting construction used to generate the fracture patterns has been shown in to be optimal as regards the ability of the fracture patterns to relieve stress .in addition , the nucleation criterion , orientation and spacing of the faults derive rigorously from energetic considerations .following nucleation , fractures can deform by frictional sliding or undergo opening , thereby partially relieving the geostatic stresses in the rock mass .the extension of the theory presented in this paper additionally accounts for fluid pressure by recourse to terzaghi s effective stress principle .specifically , we estimate the permeability enhancement resulting from fracture enhancement using standard lubrication theory .this extension gives rise to a fully - coupled hydro - mechanical model .the formulation has been derived in finite kinematics to be consistent with the formulation of the damage model in [ 18 ] .a finite kinematics approach is able to describe both large and small strains , so that the model can be applied also to porous media different from rocks .a linear version of the model is currently under development , in view of heavy numerical applications in field problems .the dry mechanical aspects of the model were validated in by means of comparisons with the dynamic multiaxial compression experiments on sintered aluminum nitride ( aln ) of chen and ravichandran .the model was shown to correctly predict the general trends regarding the experimental observed damage patterns , as well as the brittle - to - ductile transition resulting under increasing confinement .the hydro - mechanical coupled model has been validated against three different sets of experimental data concerned with triaxial tests at different confinement pressure on granite and sandstone , including lac du bonnet and beisahn granites and berea sandstone .the ability of the model to reproduce qualitatively the experimental peak strength , post - peak stress - strain behavior , and permeability enhancement during loading and recovery during unloading is remarkable .the present coupled hydro - mechanical model has potential for use in applications , such as rocks under geostatic conditions , gravity dams , hydraulic fracture 
operations , and others , in which a solid deforms and undergoes extensive fracture under all - around confinement while simultaneously being infiltrated by a fluid .the particular case of hydraulic fracture is characterized by the injection of fluid at high pressure , which actively promotes the fracture process and the transport of fluid into the rock mass . under such conditions ,the present model is expected to predict the development of three - dimensional fracture patterns of great complexity over multiple scales .such complex fracture patterns have indeed been inferred from acoustic measurements in actual hydraulic fracture operations and are in sharp contrast to traditional models of hydraulic fracture , which posit the formation of a single mathematically - sharp crack .the present model thus represents a paradigm shift from said traditional models in its ability to account for complexity in the fracture pattern over multiple scale while simultaneously supplying macroscopic effective properties such as permeability and strength that can in turn be used , e. g. , in full - field finite element simulations .by denoting the time derivative with , we write assuming that the solid volume variation is small with respect to the void volume , and . thus the rate of porosity change becomes : this relation can be alternatively written in the form where is a constant , which can be derived by setting as initial values and , obtaining n. r. warpinski , r. c. kramm , j. r. heinze , and c. k. waltman .comparison of single- and dual - array microseismic mapping techniques in the barnett shale . in _spe annual technical conference and exhibition _ , volume spe 95568 , dallas , texas , october 2005 .society of petroleum engineers .r. wu , o. kresse , x. weng , c. e. cohen , and h. gu .modeling of interaction of hydraulic fractures in complex fracture networks . in _spe hydraulic fracturing technology conference _ , volume spe-152052-ms , the woodlands , texas , february 2012 .society of petroleum engineers .
we present a microstructural model of permeability in fractured solids , where the fractures are described in terms of recursive families of parallel , equidistant cohesive faults . faults originate upon the attainment of a tensile or shear resistance in the undamaged material . secondary faults may form in a hierarchical organization , creating a complex network of connected fractures that modify the permeability of the solid . the undamaged solid may possess initial porosity and permeability . the particular geometry of the superposed micro - faults lends itself to an explicit analytical quantification of the porosity and permeability of the damaged material . the approach is particularly appealing as a means of modeling a wide scope of engineering problems , ranging from the prevention of water or gas outburst into underground mines to the prediction of the integrity of reservoirs for co2 sequestration or hazardous waste storage . microstructured permeability ; parallel faults ; multi - scale permeability ; analytical models .
karl popper famously stated that unlimited tolerance leads to the demise of tolerance .the tag - based quantitative model of riolo , cohen , and axelrod indicates that a combination of kin selection and mutation causes times of high tolerance to be replaced by times of low tolerance towards those who are different .these tides of ( in)tolerance seemingly dismiss the role of human reasoning in the selection process as unable to stave off periods during which undesirable states of affairs prevail .in contrast , the basic negative result of evolutionary game theory that unconditional cooperators are vulnerable to rare occurrences of unconditional defectors can be avoided by appending an indirect reciprocity mechanism to the selection process , i.e. , a concern for such abstract realities as reputation , and the ability to use some form of language to spread information . it would thus seem that humans _ are _ able to draw on such collective mechanisms as democracy to adjust the course of selection in a preferable direction and sometimes they are not .humans are conditional cooperators , yet on occasion may benevolently help even those who can never repay in kind .although this cooperativeness is seen as the result of evolutionary selection , it is less clear why benevolence would permeate human societies .popper s statement , however , helps us identify how certain mechanisms may sustain benevolence .when a benevolent population is attracting an inflow from the outside , such that a society undergoes the transient dynamics , understanding the relationship between the inflow rate and human behavior is key .if the rate of inflow is low the original population may feel safe , but if the inflow is high it may be perceived as aggression and provoke in a sort of popperian twist a violent response .the very idea of these two limits suggests that benevolence is a relative category and is dependent on the inflow from the outside and the resulting state .little is known about these dependencies of benevolence .to quantify the dynamics of an open society with a benevolent population , we create a theoretical framework by combining the elements of evolutionary games and complex networks . as an idealized representation of human relationships , we place agents into a regular random network of friendships with the average degree 50a number consistent with refs .agents in the model are thus conditional cooperators in the sense that their interactions are restricted only to the nearest neighbors defined by the network . at each time , a total of donor - recipient pairs are randomly chosen among neighboring agents , whereupon a donor pays cost for the recipient to receive benefit . to this trivial scenariowe add an asymmetry in which insiders incur a higher cost of cooperation and provide more benefit to outsiders than they receive in return . in this way , benefit differential emerges between insider and outsider subpopulations , creating an incentive for outsiders to immigrate .the overall purpose is to form a mutualistic relationship in which everyone experiences a higher standard due to the extra labor provided by outsiders .when the selected donor - recipient pairs finish interacting , we calculate the fitness of both insiders and outsiders denoted and , respectively as the average per - capita benefit net of the cost of cooperation .the details on the mathematical representation of the described setup are found in supporting information ( si text ) . 
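a minimal agent - based sketch of the setup described above is given below. the average degree of 50 is taken from the text, while the population size, the initial fraction of outsiders, the cost and benefit values and the insider - to - outsider asymmetry are placeholder choices, not the parameters of table [ t1 ] or of the si.

import random
import networkx as nx

N, DEGREE = 1000, 50                      # average degree 50 as in the text
COST, BENEFIT, EXTRA = 1.0, 5.0, 0.5      # placeholder payoff parameters

graph = nx.random_regular_graph(DEGREE, N)
outsider = {v: random.random() < 0.2 for v in graph}   # arbitrary 20% outsiders
payoff = {v: 0.0 for v in graph}

for _ in range(10 * N):                   # random donor-recipient pairs among neighbours
    donor = random.randrange(N)
    recipient = random.choice(list(graph[donor]))
    cost, benefit = COST, BENEFIT
    if not outsider[donor] and outsider[recipient]:
        cost, benefit = COST + EXTRA, BENEFIT + EXTRA   # insiders give more to outsiders
    payoff[donor] -= cost
    payoff[recipient] += benefit

def mean_payoff(is_out):
    group = [v for v in graph if outsider[v] == is_out]
    return sum(payoff[v] for v in group) / len(group)

print("insider fitness :", mean_payoff(False))
print("outsider fitness:", mean_payoff(True))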
here, we just note that the quantities of interest are the cost - benefit ratio , , and the relative benefit differential , .a list of key symbols with some default parameter values is given in table [ t1 ] .we envision a society that is an open dynamic system in the sense that its size changes over time . to that endwe adopt replicator - type equations .specifically , if and denote the population sizes of insiders and outsiders , respectively , then the time - change of these two subpopulations is given by , .the fraction of outsiders in the system is defined as , yielding where is the ratio of the fitness of outsiders to the average fitness of the whole population , i.e. , ] .function defined in this way faithfully reproduces the curves in fig .[ 1 ] obtained by means of numerical simulations .more importantly , in a typical model setup whereby outsiders are attracted into the system , function r monotonically decreases until an equilibrium point is reached ( unless the model trajectory gets trapped beforehand in the absorbing state , i.e. , the state of antagonism ) . because is monotonically decreasing , there can be only one point that satisfies condition , thus indicating that the model equilibrium is globally stable .[ [ delayed - assimilation . ] ] delayed assimilation .+ + + + + + + + + + + + + + + + + + + + + in the main text , we assumed that the assimilation process is effective instantaneously , whereas in reality it may take some time for outsiders to assimilate new cultural patterns .this possibility is readily taken into account by introducing delay , such that the fraction of outsiders present in the system at time is assimilated at rate . here , we examine how introducing delayed assimilation affects the model dynamics and discuss several important implications of the obtained results .l70 mm + delay in the assimilation process brings about two noticeable changes in the model dynamics ( fig .first , if the fraction of outsiders is increasing , fewer individuals are influenced by delayed assimilation in comparison to the instantaneous assimilation .a consequence is that the fraction of outsiders increases at a higher rate when there is some delay than when there is none .second , delayed assimilation leaves the model equilibrium unaffected , but by trailing the current state of the system makes it possible that the fraction of outsiders temporarily overshoots the equilibrium point and sets off damped oscillations .these new features in the model dynamics change the critical average tolerance that marks the border between a mutualistic and an antagonistic relationship . to reach mutualism under delayed assimilation , a society must be more tolerant than under instantaneous assimilation ( fig . [ s2 ] ) .namely , the temporary accumulation of outsiders above the equilibrium point , which is impossible without a delay , may just be enough to push an otherwise mutualistic system into antagonism . because antagonism is a non - equilibrium absorbing state, it is irrelevant that the delay has no effect on the model s equilibrium . what matters is the path that leads to the equilibrium point , and that path turns less favorable for mutualism with the introduction of delayed assimilation . 
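a forward - euler sketch of these replicator - type dynamics with a delayed assimilation sink is given below. the fitness expressions and the way the assimilation term enters the equation are placeholder choices ( the actual forms are in the si ), so the sketch only illustrates the qualitative behaviour discussed above: growth of the outsider fraction driven by a fitness advantage, removal by assimilation acting on the state a delay time earlier, and damped oscillations when the delay is large.

def f_out(x): return 1.0 + 0.5 * (1.0 - x)     # hypothetical outsider fitness
def f_in(x):  return 1.0 + 0.2 * x             # hypothetical insider fitness

a, tau, dt = 0.3, 2.0, 0.01                    # assimilation rate and delay (illustrative)
lag = int(tau / dt)
history = [0.01] * (lag + 1)                   # stored values of x over the last tau

for _ in range(100_000):
    x, x_delayed = history[-1], history[0]
    f_bar = x * f_out(x) + (1.0 - x) * f_in(x)          # average fitness
    r = f_out(x) / f_bar                                # relative fitness of outsiders
    dx = x * (r - 1.0) - a * x_delayed                  # growth minus delayed assimilation
    history.append(min(max(x + dt * dx, 0.0), 1.0))
    history.pop(0)

print(history[-1])                              # long-time fraction of outsiders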
the higher the delay , the less likely the mutualistic relationship .accordingly , the critical average tolerance increases with the delay ( ) from a lower limit at to a theoretical upper limit ( see the main text ) beyond which the state of antagonism disappears altogether .this lower limit decreases with the assimilation rate ( ) , yet the theoretical upper limit is independent of .consequences are that ( i ) the described effect of delayed assimilation is stronger at high values of and ( ii ) the critical average tolerance becomes independent of at high enough .how are the overall outcomes of the social dynamics changed by delayed assimilation ?to answer this question , in fig .[ s3 ] , we provide a map of the parameter space under instantaneous assimilation ( ) overlaid with the same kind of map under delayed assimilation ( ) . as expected from point ( i ) above , there is very little change in the results if the assimilation rate is relatively low .the results , by contrast , change markedly at high assimilation rates , with mutualism giving way to antagonism .we notice that in line with point ( ii ) above , the border delineating the states of mutualism and antagonism exhibits less dependence on the assimilation rate ( ) than in the case of instantaneous assimilation . having longer delays than ( see fig .[ s2 ] ) would erase this dependence altogether because the border between mutualism and antagonism would be pushed toward its theoretical limit , which is independent of .l70 mm + the outcomes of the social dynamics under delayed assimilation have an important implication in the context of immigration policies .it turns out that having in place very efficient assimilation programs ( corresponding to a high value of ) may not mean they are effective . to achieve the effectiveness , it is also necessary that such programs produce the expected results rather quickly ( corresponding to a low value of ) .an alternative path to avoiding excessive radicalization is to introduce measures that improve the average tolerance of insiders .provided is high enough , a higher average tolerance favors the prospect of reaching a mutualistic equilibrium even if there is a considerable delay .an inevitable conclusion is that a successful immigration policy is a tough balancing act that requires people to make concessions in order to learn how to live together .we are grateful to dirk helbing , yoh iwasa , and tomislav lipic for helpful suggestions .b.p . and h.e.s .received support from the national science foundation ( nsf ) grant cmmi 1125290 .b.p . also received support from the university of rijeka .was partly supported by the japan science and technology agency ( jst ) program to disseminate tenure tracking system .z.w . was supported by the national natural science foundation of china , grant no .61201321 and 61471300 .99 popper k ( 2013 ) the open society and its enemies . with a foreword by a. ryan and an essay by e.h .( _ princeton univ . press , princeton _ ) .axelrod r ( 1984 ) the evolution of cooperation .( _ basic books , new york _ ) .riolo rl , cohen md , axelrod r ( 2001 ) evolution of cooperation without reciprocity ._ nature _ * 414*:441443 .nowak ma ( 2001 ) tides of tolerance ._ nature _ * 414*:403404 .uchida s , sigmund k ( 2010 ) the competition of assessment rules for indirect reciprocity ._ j. theor .biol . 
_ * 263*:1319 .hauser op , rand dg , peysakhovich , nowak ma ( 2014 ) cooperation with the future ._ nature _ * 511*:220213 .jusup m , matsuo t , iwasa y ( 2014 ) barriers to cooperation aid ideological rigidity and threaten societal collapse ._ plos comput .biol . _ * 10*:e1003618 .axelrod r , hamilton wd ( 1981 ) the evolution of cooperation ._ science _ * 211*:13901396 .nowak ma ( 2008 ) generosity : a winner s advice ._ nature _ * 456*:579 .nowak ma , may rm ( 1992 ) evolutionary games and spatial chaos ._ nature _ * 359*:826829 .nowak ma , sigmund k ( 1998 ) evolution of indirect reciprocity by image scoring ._ nature _ * 393*:573577. ohtsuki h , hauert c , lieberman e , nowak ma ( 2006 ) a simple rule for the evolution of cooperation on graphs and social networks . _ nature _ * 441*:502505 .bshary r , oliveira rf ( 2015 ) cooperation in animals : toward a game theory within the framework of social competence .sci . _ * 3*:3137 .huang w , hauert c , traulsen a ( 2015 ) stochastic game dynamics under demographic fluctuations .usa _ * 112*:90649069 .dunbar ri ( 1992 ) neocortex size as a constraint on group size in primates ._ * 22*:46993 .hill ra , dunbar ri ( 2003 ) social network size in humans ._ hum . nature _ * 14*:5372 . schelling tc ( 1971 ) dynamic models of segregation ._ j. math .sociol . _ * 1*:14386 .watts dj ( 2001 ) a simple model of global cascades on random networks . _ proc . natl .usa _ * 99*:57665771 .lee j - h , jusup m , podobnik b , iwasa y ( 2015 ) agent - based mapping of credit risk for sustainable microfinance . _plos one _ * 10*:e0126447 . helbing d , ed .( 2012 ) social self - organization .agent - based simulations and experiments to study emergent social behavior .( _ springer , berlin _ ) .ramos m , shao j , reis sds , anteneodo c , andrade js , havlin s , makse h ( 2015 ) a. how does public opinion become extreme ? _ sci .rep . _ * 5*:10032. hohhman m , yoeli e , nowak ma ( 2015 ) cooperate without looking : why we care what people think and not just what they do .usa _ * 112*:17271732 .traulsen a , semmann d , sommerfeld rd , krambeck hj , milinski m ( 2010 ) human strategy updating in evolutionary games .* 107*:29622966 .helbing d , wenjian y ( 2010 ) the future of social experimenting .usa _ * 107*:52655266 .betz hg ( 1993 ) the new politics of resentment : radical right - wing populist parties in western europe . _ comp .polit . _ * 25*:413427 .lim m , metzler r , bar - yam y ( 2007 ) global pattern formation and ethnic / cultural violence ._ science _ * 317*:15401544 .krueger ab , maleckova j ( 2009 ) attitudes and action : public opinion and occurrence of international terrorism ._ science _ * 325*:15341536 .dancygier rm ( 2013 ) immigration and conflict in europe .( _ cambridge univ . press ,cambridge _ ) .mcpherson m , smith - lovin l , cook jm ( 2001 ) birds of a feather : homophily in social networks .sociol . _ * 27*:415444 .barabasi a - l , albert r ( 1999 ) emergence of scaling in random networks. _ science _ * 286*:509512 .jusup m , iwami s , podobnik b , stanley , he ( 2015 ) dynamically rich , yet parameter - sparse models for spatial epidemiology : comment on `` coupled disease - behavior dynamics on complex networks : a review '' by z. 
wang et al ._ * 15*:43 - 46 .esteban j , mayoral l , ray d ( 2012 ) ethnicity and conflict : theory and facts _ science _* 336*:858865 .kiers et , rousseau ra , west sa , denison rf ( 2003 ) host sanctions and the legume rhizobium mutualism ._ nature _ * 425*:7881 .wang rw , sun bf , zheng q , shi l , zhu l ( 2011 ) asymmetric interaction and indeterminate fitness correlation between cooperative partners in the fig fig wasp mutualism ._ j. r. soc. interface _ * 8*:148796 .clutton - brock th , parker ga ( 1995 ) punishment in animal societies ._ nature _ * 373*:20916 .
mutualistic relationships among the different species are ubiquitous in nature . to prevent mutualism from slipping into antagonism , a host often invokes a `` carrot and stick '' approach towards symbionts with a stabilizing effect on their symbiosis . in open human societies , a mutualistic relationship arises when a native insider population attracts outsiders with benevolent incentives in hope that the additional labor will improve the standard of all . a lingering question , however , is the extent to which insiders are willing to tolerate outsiders before mutualism slips into antagonism . to test the assertion by karl popper that unlimited tolerance leads to the demise of tolerance , we model a society under a growing incursion from the outside . guided by their traditions of maintaining the social fabric and prizing tolerance , the insiders reduce their benevolence toward the growing subpopulation of outsiders but do not invoke punishment . this reduction of benevolence intensifies as less tolerant insiders ( e.g. , `` radicals '' ) openly renounce benevolence . although more tolerant insiders maintain some level of benevolence , they may also tacitly support radicals out of fear for the future . if radicals and their tacit supporters achieve a critical majority , herd behavior ensues and the relation between the insider and outsider subpopulations turns antagonistic . to control the risk of unwanted social dynamics , we map the parameter space within which the tolerance of insiders is in balance with the assimilation of outsiders , the tolerant insiders maintain a sustainable majority , and any reduction in benevolence occurs smoothly . we also identify the circumstances that cause the relations between insiders and outsiders to collapse or that lead to the dominance of the outsiders . [ [ keywords ] ] keywords : + + + + + + + + + game theory | complex networks | social thermodynamics | open systems | tolerance | herd behavior
the aim of this paper is to build an efficient numerical method for solving an anisotropic diffusion problem where the anisotropy is carried by a vector .this work is motivated by investigations of strongly magnetized plasmas , more specifically the study of the euler - lorentz model in a low mach number regime and in the presence of a large magnetic field .this framework is characteristic of the magnetically confined plasma fusion . in this context, the asymptotic parameter represents the gyro - period of particles as well as the square root of the mach number , the vector field being the magnetic field direction. therefore the values can be very small in some sub - regions of the computational domain where the magnetic field is large , inducing then a severe anisotropy of the medium , while being large in other sub - domains for intermediate and small strength of the magnetic field .another important property of this system is the time dependence of the magnetic field defining the anisotropy direction .these two main characteristics define the framework of the present paper whose purpose is to design a numerical scheme for anisotropy ratios ranging from to and for a time varying anisotropy direction . in order to address efficiently these requirements ,the numerical method should not rely on a coordinate system adapted to this anisotropy direction .the use of adapted coordinates would imply mesh modifications accordingly to the evolution of , an intricate and expensive procedure we wish to avoid .thus , the numerical method introduced here will carry out the anisotropic non - linear diffusion problem on a mesh independent of the anisotropy direction . +this scheme will be detailed on the following model problem in this system , is a bounded subset of ( ) , is a fixed constant parameter and , for any , stands for the unit outward normal vector . and stand for the gradient and the divergence operators with respect to the space variable .we assume that , , , are given and the unknown of the problem is the function .the tensor product of two vectors and is denoted .finally , we assume that , for any , the function is strictly increasing and can be non - linear .this equation is well suited for the plasma fusion context above depicted .it allows the computation of the plasma pressure in order to guarantee that the forces vanish in the low mach regime for strongly magnetized plasma .the function denoted defines the internal energy of the fluid with respect to the pressure .this relation may be non - linear , which motivates the investigation of non - linear anisotropic problems .however , the derivation of this equation is out of the scope of the present paper and we refer to related works ( see ) for detailed explanations .furthermore , we wish to present the numerical method in a context wider than the strict plasma context , since anisotropic diffusion problem are encountered in many applications .good examples of these applications are , for instance , image noise filtering , convection dominated diffusion equations and more generally diffusion problem with strong medium anisotropies .the model equation is representative of a large enough variety of problems , up to slight changes , and will be considered to detail the numerical method .developing an efficient numerical method to compute the solution of this diffusion problem , regardless to values , is a difficult task . 
indeed ,the limit is a singular limit for the problem ( [ elliptic_non - linear_eps_intro ] ) , the diffusion equation degenerating into the following one the system is ill - posed , its solution being non - unique .more precisely , if is a solution of ( [ elliptic_non - linear_0_intro ] ) and is a function verifying on , then defines a new solution of ( [ elliptic_non - linear_0_intro ] ) .however , the limit of solution of ( [ elliptic_non - linear_eps_intro ] ) is uniquely defined by the limit problem as demonstrated in section [ dec ] , but a direct discretization of the diffusion problem gives rise to a linear system with a conditioning number that blows up for vanishing .this property has been outlined for numerical studies of elliptic equation singular perturbations ( see ) .+ to tackle this difficulty , an _asymptotic - preserving _ ( ap ) scheme is introduced to compute the solution of the anisotropic diffusion problem for and to capture , the solution of the limit problem , for small values .this property should be provided without any limitations on the discretization parameters related to the value of .these requirements are compliant with the properties of ap - schemes originally introduced in and developed in for diffusive regimes of transport equations .these techniques have received numerous extensions to other singular perturbation problems : relaxation limits of kinetic plasma descriptions , quasi - neutral limit of fluid and kinetic plasma models , hydrodynamic low mach number limit , radiative hydrodynamics , fluid and particle flows and strongly magnetized plasmas as well as heterogeneous media .the asymptotic - preserving property of the presented method is obtained thanks to a decomposition of the solution , introduced in and also used in .it consists of the following identity , being the solution mean part , with respect to the anisotropy ( ) direction , the fluctuating part .these two components verify and , defining the functions constant along the -direction , the functions of zero mean value along .this decomposition was first developed for meshes adapted to the anisotropy direction , for which , the discretization of is straightforward .a direct discretization of the sub - space is , on the other side , much more intricate .this difficulty is overcome thanks to the introduction of a lagrangian multiplier , in order to penalize the zero mean value property of the functions belonging to .the method is extended in for computations with meshes independent of the anisotropy direction .this is achieved by introducing two more lagrangian multipliers to discretize the sub - spaces .the size of the linear system providing the problem solution is then significantly enlarged .however , this drawback may be corrected thanks to a slightly different decomposition . 
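the mean / fluctuation splitting that these decomposition approaches share is easy to visualise on a discrete field. in the sketch below the anisotropy direction is taken along one grid axis purely for illustration ( the scheme of the present paper precisely avoids such aligned coordinates ), and the test field is arbitrary.

import numpy as np

nx_cells, ny_cells = 64, 64
x = np.linspace(0.0, 1.0, nx_cells)
y = np.linspace(0.0, 1.0, ny_cells)
X, Y = np.meshgrid(x, y, indexing="ij")
p = np.sin(2.0 * np.pi * X) * (1.0 + 0.3 * np.cos(2.0 * np.pi * Y))   # arbitrary field

pi_part = p.mean(axis=1, keepdims=True)     # mean part, constant along the field lines
q_part = p - pi_part                        # fluctuation, zero mean along the field lines

print(np.abs(q_part.mean(axis=1)).max())    # ~ machine precision
print(np.abs(p - (pi_part + q_part)).max()) # exact reconstruction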
in solution is decomposed in two non - orthogonal parts which allows the definition of two sub - spaces whose direct discretization is readily obtained without any lagrangian multipliers .the size of the linear system obtained with this approach is considerably lowered compared to the previous method .this method has been extended in to non - linear diffusion equations .the path followed in the present paper still relies on the decomposition in and .however , the discretization of these sub - spaces is achieved using a differential characterization , similar to the one introduced in .this finally allows the computation of the solution thanks to a second - order problem for and a fourth - order problem for .this latter problem , in the framework of neumann boundary conditions considered in this paper , can be recast into two elliptic problems .the method proposed finally reduces to the computation of three standard elliptic problems for which very efficient solvers can be used ( for instance multi - grid solvers ) . for the former approaches , the equations providing both components are not classical elliptic equations and the resolution of the linear system requires more sophisticated solversthis complexity is resource demanding and may be challenging for realistic three - dimensional computations . finally this paper also presents an extension to non - linear reaction diffusion problems , a class of problems that has never been investigated in the previous works .the paper is organized as follows : in section 2 , the decomposition methodology is presented .the linear case , _i.e. _ with where is a given function sequence , is first investigated : more precisely , we describe the decomposition procedure in the specific case where is a strictly positive constant denoted , then we generalize this procedure to any function by using well - chosen sobolev spaces . finally the non - linear problems are addressed by invoking gummel s iterative algorithm .section 3 is devoted to presentation of the discretization .finally , the efficiency of the numerical method is demonstrated in section 4 .in this section a scale separation is introduced to ensure the asymptotic - preserving property of the scheme .this is achieved by transforming the singular perturbation problem into an equivalent system for which the limit is regular . for simplicity reasons , the linear case with constant first considered for detailing the decomposition method . in this framework , the singular nature of the limit is outlined and the limit problem , providing , is stated . a development to linear cases with variable positive functions is then presented and finally , thanks to gummel s iterative method , the non - linear case is addressed by using a sequence of linear problems .we assume here that the given sequence is of the form where is a known constant for any . then the diffusion problem ( [ elliptic_non - linear_eps_intro ] ) writes the limit solution of the singular perturbation problem verifies the limit problem this algebraic equation admits a unique solution under the assumption a requirement that must be fulfilled by the numerical method . 
to ensure this property ,the methodology consists in using a decomposition similar to that of .the solution is decomposed into its mean part with respect to the anisotropy direction and the fluctuating part , which exhibits the property to have a zero mean value along the anisotropy direction .these two functions verify and , being the kernel of the elliptic operator defined by equation [ elliptic_non - linear_0_intro ] .these properties are capitalized on , to isolate in the problem [ elliptic_linear_eps ] the macro scale ( providing ) from the micro scale ( giving ) and thereby , build the asymptotic - preserving scheme .the main difficulty of the procedure lies in the characterization of the sub - spaces associated to the different scales . in the property of the functions populating or are imposed by a penalization technique .the methodology developed in this paper operates a similar decomposition on to and , but with a different characterization of these sub - spaces . here , we shape the technique introduced in for a very specific framework , in order to discriminate the functions in and thanks to differential properties , providing thus , an easy discretization . with this aim , we introduce the following sobolev spaces : and we define as the goal is to reproduce the function decomposition into its mean and fluctuating parts .the functions of correspond to the mean part and the complementary part is demonstrated to belong to .this is the purpose of the following theorem : [ theorem_decompo_linear ] we denote by the subspace of functions such that and we equip it with the usual norm on .then : * equipped with the norm is a hilbert space , * is a closed subspace in , * is a closed subspace in .* we have the orthogonal decomposition the demonstration of this theorem will be omitted .it can be readily adapted from that of theorem 2.1 from . as a consequence of this theorem , the decomposition exists and is unique for any . therefore finding the particular solution which is exactly the limit of is equivalent to find and as the respective limits of and .then , our goal is now to find some equations for and which are well - posed for any value of , including . for this purpose ,the decomposition ( [ decompo_linear ] ) is introduced into ( [ elliptic_linear_eps ] ) , yielding the variational formulation on writes for any test function .+ in order to exhibit the equation providing , the variational formulation ( [ weak_linear_eps ] ) is tested against giving which means that for any . according to theorem [ theorem_decompo_linear ] ,there exists a function such that this equation furnishes a means of computation for .firstly , applying the differential operator onto ( [ eq ] ) leads to an equation for then , is retrieved thanks to \ , .\ ] ] note that the system ( [ problem_heps_linear])-([def_pieps_linear ] ) is well - posed and does not degenerate for any value of , including .it provides a means of computing the macro component of the solution regardless to values .+ to derive an equation for , we now assume that the test function in ( [ weak_linear_eps ] ) is in . 
according to theorem [ theorem_decompo_linear ] ,there exist two functions and in such that and as a consequence , the variational formulation of ( [ elliptic_linear_eps_decomposed ] ) can be rewritten as follows : we recognize the variational formulation of therefore , coupling this system with ( [ def_qeps_linear ] ) , we recognize a complete definition of which is well - posed for any , including .moreover this computation of is totally compliant with the condition and guarantees the asymptotic - preserving property of the scheme .+ at this point , we have established a system of equations for and which is well - posed for any but also for . then , solving the well - posed equations ( [ problem_heps_linear ] ) , ( [ problem_leps_linear ] ) , ( [ def_pieps_linear ] ) and ( [ def_qeps_linear ] ) provides and as the respective limits of and when . as a consequence , the sum is exactly the solution of ( [ eq : def : limit : problem ] ) .furthermore , we can remark that the limit is regular for the reformulated model ( [ problem_heps_linear])-([problem_leps_linear])-([def_pieps_linear])-([def_qeps_linear ] ) . in this paragraph, we extend the method we have presented to the general linear case , _i.e. _ to cases where is of the form is given for any , and is supposed to be strictly positive on .in such a case , the diffusion problem ( [ elliptic_non - linear_eps_intro ] ) writes the study of these cases is motivated by the fact that the use of gummel s algorithm on the non - linear case leads to the resolution of a sequence of linearized problems which are similar to ( [ elliptic_quasi - linear_eps ] ) .we refer to section [ gummel_algo ] for more details about the linearization procedure . + in order to solve the linear problem ( [ elliptic_quasi - linear_eps ] ) for any value of , we use the method presented in the previous paragraph .firstly , we define by then we introduce the following weighted sobolev spaces : and the set representing the functions constant along the magnetic field lines following the methodology presented in the previous paragraph and in , we deduce [ theorem_decompo_quasi - linear ] equipped with the norm is a hilbert space and is a closed space in . furthermore , is also a closed space in and we have the orthogonal decomposition from the orthogonal decomposition ( [ orthogonal_decomposition_quasi - linear ] ) , the solution of ( [ elliptic_quasi - linear_eps ] ) can be uniquely decomposed as then , if we identify the limits and of the sequences and , we will find the limit of by taking . in order to identify a set of equations satisfied by and , we follow the same procedure as in the previous paragraph : we multiply ( [ elliptic_quasi - linear_eps ] ) by a test function and we integrate over . by choosing in or in , we prove that and are respectively of the form \ , , \ ; q_{\epsilon } = \cfrac{1}{g_{\epsilon } } \ , \nabla_{\mathbf{x } } \cdot ( g_{\epsilon } \ , \mathbf{b } \ , l_{\epsilon } ) \ , , \end{split}\ ] ] where and are solutions of and \cdot \bm{\nu } \equiv\left(h_{\epsilon}\,(\mathbf{b}\otimes\mathbf{b})\,\mathbf{s}_{\epsilon}\right ) \cdot \bm{\nu } \ , , & \textnormal{on , } \\ ( g_{\epsilon } \ , \mathbf{b } \ , l_{\epsilon } ) \cdot \bm{\nu } \equiv 0 \ , , & \textnormal{on . } \end{array } \right.\ ] ] as in the previous paragraph , we observe that the equations ( [ quasi - linear_pieps_qeps])-([quasi - linear_h])-([quasi - linear_l ] ) remain well - posed for any . 
as a consequence ,the particular solution of the limit problem we are looking for is exactly the sum where and are computed by solving ( [ quasi - linear_pieps_qeps])-([quasi - linear_h])-([quasi - linear_l ] ) with .+ furthermore , the resolution of the fourth order problem ( [ quasi - linear_l ] ) can be replaced by the successive resolution of two homogeneous dirichlet type problems which are \ , , & \textnormal{in , } \\ ( h_{\epsilon } \ ,\mathbf{b } \ , l_{\epsilon } ) \cdot \bm{\nu } \equiv 0 \ , , & \textnormal{on , } \end{array } \right.\ ] ] and finally , we consider the general model ( [ elliptic_non - linear_eps_intro ] ) given in the introduction when the function is non - linear .when goes to 0 , the model becomes due to the non - linearity of the function the orthogonal decomposition method can not be used .then we choose to linearize the diffusion equation ( [ elliptic_non - linear_eps_intro ] ) by using gummel s algorithm developed in .this iterative method consists in the approximation of the solution by a sequence defined by and initialized with an arbitrary . in this method , each viewed as a small correction of in order to obtain .then , assuming that is a solution of ( [ elliptic_non - linear_eps_intro ] ) , it holds that then , neglecting second order terms in , we obtain a linear diffusion problem for which writes where , and are defined by for each value of , the problem ( [ elliptic_non - linear_linearized_eps ] ) is of the same kind as ( [ elliptic_quasi - linear_eps ] ) . so we can solve it by applying the method described in the paragraph 2.1.2 . + this sequence of linearized problemscan also be obtained from newton s iterative method to solve where the differential operator is defined as indeed , newton s method for solving ( [ newton_problem ] ) writes where is the derivative in of the differential operator and is of the form this section , we present a numerical method which allows to solve the diffusion problems ( [ elliptic_quasi - linear_eps ] ) and ( [ elliptic_non - linear_eps_intro ] ) by using the decomposition approaches we have presented .first , we introduce some notations which will be used for the construction of the scheme , then we present the scheme itself for the general linear case ( [ elliptic_quasi - linear_eps ] ) . finally , we present the discretized version of gummel s algorithm for the non - linear case .we consider a uniform mesh defined by and we assume that the simulation domain is \times [ y_{-1/2},y_{n_{y}+1/2}] ] thanks to then the expression of , as defined by is used to analytically compute and with these definitions are inserted in the numerical method described in section [ ql_fd ] to compute the numerical approximation finally compared to the exact solution .the relative errors denoted , , are defined by these quantities are displayed on figure [ cv_inh_linear : error ] as functions of the space step and for different anisotropy strengths , and . 
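the structure of gummel 's ( newton - type ) linearisation loop described above can be sketched on a generic discrete nonlinear system: each step solves a linear problem for a small correction and stops when the correction becomes negligible. the residual below is a toy algebraic system, not the discretised anisotropic diffusion operator of the paper.

import numpy as np

rhs = np.linspace(1.0, 2.0, 5)

def residual(p):
    return p**3 + p - rhs                   # toy nonlinear residual F(p)

def jacobian(p):
    return np.diag(3.0 * p**2 + 1.0)        # exact derivative of the toy residual

p = np.zeros_like(rhs)                      # arbitrary initial guess
for k in range(50):
    delta = np.linalg.solve(jacobian(p), -residual(p))   # linearised correction
    p += delta
    if np.linalg.norm(delta) <= 1e-12 * max(np.linalg.norm(p), 1.0):
        break

print(k, p)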
a linear decrease of the errors is observed with the mesh refinement , the slope being equal to , which is consistent with the definitions ( [ def_dh ] ) and ( [ def_dhstar ] ) of and as second order accurate approximations of the differential operators and .furthermore , this property holds for all considered values of , including .this demonstrates the -invariance of the numerical scheme second order accuracy with respect to the space step .the ability of the scheme to compute a solution component with no gradient in the anisotropy direction is also investigated .the numerical approximation of , provided by ( [ elliptic_quasi - linear_fd])-([elliptic_quasi - linear_fd : pi ] ) , should verify a discrete analogous of the property .this is analyzed thanks to figure [ cv_inh_linear : bdotgradpi ] , where the evolution of as a function of the space step is displayed for , , and .note that the quantity is the residual of the linear system solved to compute the solution of , and consequently characterizes the precision of the linear system solver .for these test cases , a sparse direct solver being used , the accuracy is very close to the computer arithmetic precision , at least for small linear system sizes .this precision is observed to deteriorate moderately with the increase of the system size which explains the growth of the error with vanishing mesh sizes .however this does not affect the precision of the scheme , as demonstrated by the results of figure [ cv_inh_linear : error ] . in this section ,we quantify the sensitivity of the numerical method with respect to the anisotropy direction variations .more precisely , we wish to analyze the accuracy of the method as a function of , the angle measured between the anisotropy direction and the first direction ( associated to the first coordinate ) . the anisotropy direction is assumed to be uniform and defined as .\ ] ] in order to manufacture an analytic solution for the problem , we introduce a system of coordinates which is adapted to .these coordinates are denoted and are deduced from by the relations in these coordinates , the linear diffusion problem ( [ elliptic_quasi - linear_eps ] ) writes with , , , , and .it is straightforward to verify that the function given by is the solution of ( [ eq_quasi - linear_eps_aligned ] ) provided that and satisfy and where with .this requirement is met by the following definition which ensures for any .the problem is stated in cartesian coordinates thanks to the change of variables yielding to with the other coefficients being manufactured similarly with and given by ( [ variable_gh ] ) . in the following tests , the computation domain \times [ 1,2] ] , the anisotropy direction is a function of the space variables whose expression is given by equation , and being defined as note that this choice of introduces a severe non - linearity in the problem .several tests have also been performed with other definitions of , for instance which defines an anisotropic diffusion - reaction equation similar to the steady - state allen - cahn equation ( see ) used in phase transition problems .these tests produce results almost identical to the results which are obtained when so we only consider the strongly non - linear reaction term defined in ( [ def_hepsgeps ] ) within the presentation of the numerical results in the next lines .+ the solution is constructed thanks to a cubic spline , precisely with for 0 \leq |z| < 1 $ , } \\ \end{array } \right.\ ] ] with and . 
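the second - order behaviour reported in this section can be checked from any two successive meshes with the usual observed - order formula; the error values below are illustrative numbers, not the measured data of the figures.

import math

h_coarse, h_fine = 1.0 / 64, 1.0 / 128
err_coarse, err_fine = 3.2e-4, 8.1e-5        # illustrative error norms

order = math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)
print(order)                                  # close to 2 for a second-order scheme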
to analyze the convergence with respect to the number of gummel s iterations , the sequence is initiated with , a perturbation of the non - linear problem solution , reading where and are parameters controlling the support and the magnitude of the perturbation .since gummel s method is constructed on a linearization of the problem its convergence can not be guaranteed with a poor estimation of the solution as initial guess .it means that the parameters and can not be chosen completely arbitrarily : indeed , several simulations have been performed , all with the same parameters except ranging in and ranging in and it has been observed that gummel s method does not converge as when is larger than .concerning the parameter , the simulation sequence reveals that the convergence of gummel s method is almost not affected by the amplitude of .the successive relative errors measured between the iterates of the gummel s loop and the exact solution are plotted on figure [ convergence_inn_figs : error ] .the computations are carried out on two different meshes , and with and cells , with and and for anisotropy strengths including .along with the graphical representation of the solution approximation error , the evolution of the corrector norm relative to that of the solution , namely the quantity , is also plotted in figure [ convergence_inn_figs : corrector ] . these last results being almost identical for both meshes , the plot related to the finest mesh is omitted in this figure ..relative error ( ) for the non - linear problem defined by .the computations are carried out on uniform meshes constituted of cells ( ) with several values of and after a number of iteration of gummel s loop large enough for the convergence to be effective . [ cols="^,^,^,^,^,^",options="header " , ] in spite of the large perturbation amplitude , gummel s iterative method converges in a small number of iterations , for both meshes and for all -values .the corrector term rapidly decreases to reach the computer precision threshold ( ) after 4 iterations . in the same time , the relative error also decreases but the approximation is not improved by subsequent iterations , the error remaining constant for iteration numbers greater than 4 . at this stage ,the precision of the approximation is not limited by the linearization process of the gummel s loop anymore , but by the discretization error of the linearized problem , explaining the plateau described by the error . to document this analyzis further ,we summarize in table [ convergence_inn_tab ] the values of the relative error measured between the exact solution and the approximation obtained after iterations of the gummel s loop .this quantity is referred to as ( ) and computed for large enough to ensure that the plateau above mentioned is reached . forthe investigations carried out , this requirement is met as soon as .the approximation error is observed to quadratically decrease with the space mesh : the error norms related to the computations performed on a mesh are for instance times as small as those carried out on a mesh with cells .this is a consequence of the second order accurate discretization of the spatial operator already outlined in section [ sec : convergence : linear ] .finally , the results of table [ convergence_inn_tab ] also demonstrate the independence of the numerical method precision with respect to the anisotropy intensity . these last experiments are devoted to illustrate the asymptotic - preserving property of the numerical method , _i.e. 
_ its ability to compute an accurate approximation of , the solution of the limit problem .the solution of the problem is constructed as a sequence defined by with the functions , and are defined as in the previous test sequence , the initial guess for gummel s loop being constructed following ( [ def_peps0 ] ) using the same perturbation .we now wish to evaluate the error measured between the exact solution of the limit problem and the approximation computed thanks to the ap - scheme for vanishing .this error , denoted and defined as is plotted on figure [ convergence_ineps_nonlinear_figs : exact ] as a function of .the data represented on this figure are obtained after convergence of the gummel s loop .two regimes can be identified .the first one is related to the largest values of for which a linear decrease of the error is observed .the second one is a plateau whose value depends on the mesh step , this value being lower for refined meshes . precisely we note a quadratic decrease of this value with the mesh size . to explain these features, we use the following identity this yields where represents the approximation error of , being the numerical approximation of provided by the ap - scheme with , and . the error linearly decreases with as long as the approximation error is negligible compared to ( see figure [ convergence_ineps_nonlinear_figs : approx ] ) . below a given -value , varying with the mesh size, the total error can be assimilated to and the decrease of is ineffective .the discrete operators being second order accurate is quadratically decreasing with the mesh step . as a consequence, we can conclude that converges to when converges to 0 alongwith .this is exactly the asymptotic - preserving property of the scheme we intended to validate .in this paper we have presented an asymptotic - preserving numerical method for singular perturbation of non - linear anisotropic reaction - diffusion problems .the asymptotic - preserving property of the scheme is ensured thanks to a solution decomposition explained in full details in the most simple framework of a linear problem .this method is then generalized to non - linear problems thanks to gummel s linearization method . in a second part, several two - dimensional numerical investigations of the ap - scheme are performed .these tests reveal a very weak dependence of the scheme accuracy with respect to the anisotropy direction , demonstrating the relevance of the use of non - adapted coordinates .the asymptotic - preserving property of the scheme is also validated for vanishing on linear as well as non - linear problems .the solution of the limit problem is accurately captured with no restrictions on the anisotropy strength .furthermore , the computational efficiency of the method , in terms of memory as well as cpu usage , does not depend on this anisotropy strength .several applications of the present work can be investigated : at present time , the method has been used for the resolution of linear anisotropic diffusion problems for a two - fluid euler - lorentz model ( see ) and the non - linear version of the method will be coupled to an asymptotic - preserving scheme for a one - fluid full euler - lorentz model ( see ). * acknowledgement .* this work has been supported by the french magnetic fusion programme fr - fcm , by the inria large - scale initiative fusion , by boost and iodissee anr projects and by the cea - cadarache in the frame of the contract appla ( # v3629.001 av .the authors wish to thank p. 
degond for suggesting this problem and for very fruitful discussions on the topic .n. crouseilles and m. lemou , _ an asymptotic - preserving scheme based on a micro - macro decomposition for collisional vlasov equations : diffusion and high - field scaling limits _ , kinet .models * 4 * -2 , 441477 , 2011 .p. degond , f. deluzet , j. narski and c. negulescu , _ an asymptotic - preserving method for highly anisotropic elliptic equations based on a micro - macro decomposition _ , j. comput. phys . * 231 * -7 , 27242740 , 2012 .p. degond , f. deluzet , l. navoret , a .-sun and m .- h .vignal , _ asymptotic - preserving particle - in - cell method for the vlasov - poisson system near quasi - neutrality _, j. comput. phys . * 229 * -16 , 56305652 , 2010 .
this paper is devoted to the numerical resolution of an anisotropic non - linear diffusion problem involving a small parameter , defined as the anisotropy strength reciprocal . in this work , the anisotropy is carried by a variable vector function . the equation being supplemented with neumann boundary conditions , the limit is demonstrated to be a singular perturbation of the original diffusion equation . to address efficiently this problem , an asymptotic - preserving scheme is derived . this numerical method does not require the use of coordinates adapted to the anisotropy direction and exhibits an accuracy as well as a computational cost independent of the anisotropy strength . [ [ keywords ] ] keywords + + + + + + + + anisotropic diffusion problems ; singular perturbation ; asymptotic - preserving schemes . [ [ ams - subject - classification ] ] ams subject classification + + + + + + + + + + + + + + + + + + + + + + + + + +
the inverse document frequency ( idf ) has been `` incorporated in ( probably ) all information retrieval systems '' ( , pg .attempts to theoretically explain its empirical successes abound ( , _ inter alia _ ) .our focus here is on explanations based on robertson and sprck jones s _ probabilistic - model _( rsj - pm ) paradigm of information retrieval , not because of any prejudice against other paradigms , but because a certain rsj - pm - based justification of the idf in the absence of relevance information has been promulgated by several influential authors .rsj - pm - based accounts use either an assumption due to croft and harper that is mathematically convenient but not plausible in real settings , or a complex assumption due to robertson and walker .we show that the idf can be derived within the rsj - pmframework via a new assumption that directly instantiates a highly intuitive notion , and that , while conceptually simple , solves an estimation problem deemed intractable by robertson and walker .in the ( binary - independence version of the ) rsj - pm , the term is assigned weight where , , is an indicator variable for the presence of the term , and is a relevance random variable .croft and harper proposed the use of two assumptions to estimate and in the absence of relevance information . * * , which is unobjectionable , simply states that most of the documents in the corpus are not relevant to the query .this allows us to set where is the number of documents in the corpus that contain the term , and is the number of documents in the corpus .the second assumption , * * , is that all query terms share the same probability of occurring in a relevant document . under, one sets , and thus ( [ eq : rsj ] ) becomes where is constant ( and is 0 if ) .quantity ( [ eq : chmatch ] ) is essentially the idf .is an ingenious device for pushing the derivation above through .however , intuition suggests that the occurrence probability of query terms in relevant documents should be at least somewhat correlated with their occurrence probability in arbitrary documents within the corpus , and hence not constant .for example , a very frequent term can be expected to occur in a noticeably large fraction of any particular subset of the corpus , including the relevant documents .contrariwise , a query term might be relatively infrequent overall due to having a more commonly used synonym ; such a term would still occur relatively infrequently even within the set of ( truly ) relevant documents . increasing with . ]robertson and walker ( rw ) also object to , on the grounds that for query terms with very large document frequencies , weight ( [ eq : chmatch ] ) can be negative .this anomaly , they show , arises precisely because is constant .they then propose the following alternative : where is the croft - harper constant , but reinterpreted as the estimate for just when .one can check that $ ] slopes up hyperbolically in . 
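as a concrete illustration of the weight obtained under the two croft - harper assumptions, the sketch below evaluates log( ( N - n_t ) / n_t ), with the additive constant omitted, alongside the usual idf log( N / n_t ); the corpus size and the document - frequency values are made up, and the last row shows the negative weight that motivates rw 's objection.

import math

N = 1_000_000                                 # number of documents in the corpus
for n_t in (10, 1_000, 100_000, 900_000):     # number of documents containing the term
    croft_harper = math.log((N - n_t) / n_t)  # weight under the croft-harper assumptions
    idf = math.log(N / n_t)                   # the usual idf
    print(n_t, round(croft_harper, 3), round(idf, 3))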
applying and to the term - weight scheme ( [ eq : rsj ] ) yields ( which is positive as long as ) . the estimate increases monotonically in , which is a desirable property , as we have argued above . however , its exact functional form does not seem particularly intuitive . rw motivate it simply as an approximation to a linear form ; approximation is necessary , they claim , because `` the straight - line model [ i.e. , linear in and hence by ] is actually rather intractable , and does not lead to a simple weighting formula '' ( , pg . ) . despite this claim , we show here that there exists a highly intuitive linear estimate that leads to a term weight varying inversely with document frequency . there are two main principles that motivate our new estimate . first , as already stated , any estimate of should be positively correlated with . the second and key insight is that _ query terms should have a higher occurrence probability within relevant documents than within the document collection as a whole_. thus , if the term appears in the query , we should `` lift '' its estimated occurrence probability in relevant documents above , which is its estimated occurrence probability in general documents . this leads us to the following intuitive estimate , which is reminiscent of `` add - one smoothing '' used in language modeling ( more on this below ) : here the in the numerator and in the denominator ensures that , and is a `` lift '' or `` boost '' constant whose value is set offline , that is , before the query is seen . plugging and into ( [ eq : rsj ] ) yields the term weight , which varies inversely in , as desired .
furthermore , as hinted at above , selecting s value is equivalent to selecting s value for query terms whose document frequency is 0 .that is , is directly analogous to in rw s derivation . indeed , choosing is just like choosing , which is commonly done in presentations of the croft - harper derivation in order to eliminate the leading constant in ( [ eq : chmatch ] ) ; doing so in our case yields the following term weight , which is the `` usual '' form of the idf ( , pg .184 ) : finally , note that is linear in ; we have thus contradicted the assertion quoted above that developing a `` straight - line '' model is `` intractable '' .an interesting direction for future work is to consider lift that depend on .it can be shown that different choices of allow one to model _ non - linear _ dependencies of on that occur in real data , such as the approximately logarithmic dependence observed in trec corpora by greiff .importantly , seemingly similar choices of yield strikingly different term - weighting schemes ; it would be interesting to empirically compare these new schemes against the classic idf .we thank jon kleinberg and the anonymous reviewers for helpful comments .this paper is based upon work supported in part by the national science foundation under grant no .iis-0329064 , a yahoo ! research alliance gift , and an alfred p. sloan research fellowship .any opinions , findings , and conclusions or recommendations expressed are those of the author and do not necessarily reflect the views or official policies , either expressed or implied , of any sponsoring institutions , the u.s .government , or any other entity .w. b. croft and d. j. harper . using probabilistic models of document retrieval without relevance information . , 35(4):285295 , 1979 .reprinted in karen sprck jones and peter willett , eds ., _ readings in information retrieval _ , morgan kaufmann , pp . 339344 , 1997 .d. harman . the history of idf and its influences on ir and other fields . in_ charting a new course : natural language processing and information retrieval : essays in honour of karen sprck jones _ ,pages 6979 .springer , 2005 .
there have been a number of prior attempts to theoretically justify the effectiveness of the inverse document frequency ( idf ) . those that take as their starting point robertson and spärck jones 's probabilistic model are based on strong or complex assumptions . we show that a more intuitively plausible assumption suffices . moreover , the new assumption , while conceptually very simple , provides a solution to an estimation problem that had been deemed intractable by robertson and walker ( 1997 ) . * categories and subject descriptors : * h.3.3 [ information search and retrieval ] : retrieval models * general terms : * theory , algorithms * keywords : * inverse document frequency , idf , probabilistic model , term weighting
the dynamics of discrete maps can be complicated , and various methods may be introduced to control their asymptotic behaviour . in addition , both the intrinsic dynamics and the control may involve stochasticity . we may ask the following of stochastically perturbed difference equations : 1 . if the original ( non - stochastic ) map has chaotic or unknown dynamics , can we stabilise the equation by introducing a control with a stochastic component ? 2 .if the non - stochastic equation is either stable or has known dynamics ( for example , a stable two - cycle ) , do those dynamics persist when a stochastic perturbation is introduced ? in this article , we consider both these questions in the context of prediction - based control ( pbc , or predictive control ) .ushio and yamamoto introduced pbc as a method of stabilising unstable periodic orbits of where .the method overcomes some of the limitations of delayed feedback control ( introduced by pyragas ) , and does not require the a priori approximation of periodic orbits , as does the ogy method developed by ott et al .the general form of pbc is where and is the iteration of .if , pbc becomes recently , it has been shown how pbc can be used to manage population size via population reduction by ensuring that the positive equilibrium of a class of one - dimensional maps commonly used to model population dynamics is globally asymptotically stable after the application of the control .similar effects are also possible if it is not feasible to apply the control at every timestep .this variation on the technique is referred to as pbc - based pulse stabilisation . here ,we investigate the influence of stochastic perturbations on the ability of pbc to induce global asymptotic stability of a positive point equilibrium of a class of equations of the form .it is reasonable to introduce noise in one of two ways .first , the implementation of pbc relies upon a controlling agent to change the state of the system in a way characterised by the value of the control parameter . 
in realitywe expect that such precise control is impossible , and the actual change will be characterised by a control sequence with terms that vary randomly around with some distribution .this will lead to a state - dependent , or multiplicative , stochastic perturbation .second , the system itself may be subject to extrinsic noise , which may be modelled by a state - independent , or additive , perturbation .the fact that stochastic perturbation can stabilise an unstable equilibrium has been understood since the 1950s : consider the well - known example of the pendulum of kapica .more recently , a general theory of stochastic stabilisation and destabilisation of ordinary differential equations has developed from : a comprehensive review of the literature is presented in .this theory extends to functional differential equations : for example and references therein .stochastic stabilisation and destabilisation is also possible for difference equations ; see for example .however , the qualitative behaviour of stochastic difference equations may be dramatically different from that seen in the continuous - time case , and must be investigated separately .for example , in , solutions of a nonlinear stochastic difference equation with multiplicative noise arising from an euler discretisation of an it - type sde are shown to demonstrate monotonic convergence to a point equilibrium with high probability .this behaviour is not possible in the continuous - time limit .now , consider the structure of the map .we impose the lipschitz - type assumption on the function around the unique positive equilibrium .[ as : slope ] is a continuous function , for , for , for , and there exists such that function has only a single positive point equilibrium .we will also suppose that is decreasing on an interval that includes : [ as:3 ] there is a point such that is monotone decreasing on .it is quite common for assumptions [ as : slope ] and [ as:3 ] to hold for models of population dynamics , and in particular for models characterised by a unimodal map : we illustrate this with examples [ ex : ric]-[ex : bh ] .it follows from singer that , when additionally has a negative schwarzian derivative , the equilibrium is globally asymptotically stable if and only if it is locally asymptotically stable . in each case , as the system parameter grows , a stable cycle replaces a stable equilibrium which loses its stability , there are period - doubling bifurcations and eventually chaotic behaviour .[ ex : ric ] for the ricker model assumptions [ as : slope ] and [ as:3 ] both hold with , and the global maximum is attained at for .let us note that for the positive equilibrium is globally asymptotically stable and the convergence of solutions to is monotone .however , for the equilibrium becomes unstable .[ ex : log ] the truncated logistic model with and , also satisfies assumptions [ as : slope ] and [ as:3 ] .again , for , the equilibrium is globally asymptotically stable , with monotone convergence to , while for the equilibrium is unstable .[ ex : bh ] for the modifications of the beverton - holt equation and assumption [ as : slope ] holds .also , and satisfy assumption [ as:3 ] as long as the point at which the map on the right - hand side takes its maximum value is less than that of the point equilibrium . 
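before the remark on the monotone case that follows, here is a minimal simulation sketch of the deterministic pbc iteration applied to the ricker model of the first example. it assumes the standard one-step pbc form x_{n+1} = ( 1 - alpha ) f ( x_n ) + alpha x_n and the usual parametrisation f ( x ) = x exp ( r ( 1 - x ) ) with r = 3 (chaotic without control); since the symbols are elided in the extraction above, these choices are illustrative assumptions only.

```python
import math

def ricker(x, r=3.0):
    """ricker map f(x) = x * exp(r * (1 - x)); chaotic for r around 3."""
    return x * math.exp(r * (1.0 - x))

def pbc_step(x, f, alpha):
    """one step of prediction-based control with T = 1 (assumed standard form):
    x_{n+1} = (1 - alpha) * f(x_n) + alpha * x_n."""
    return (1.0 - alpha) * f(x) + alpha * x

x_free, x_ctrl = 0.3, 0.3
alpha = 0.7                      # control intensity, assumed to lie in the stabilising range
for n in range(60):
    x_free = ricker(x_free)                  # uncontrolled orbit keeps wandering
    x_ctrl = pbc_step(x_ctrl, ricker, alpha)

print(f"after 60 steps: uncontrolled x = {x_free:.4f}, controlled x = {x_ctrl:.4f}")
print("positive equilibrium of the ricker map is x* = 1")
```

with alpha = 0.7 the controlled orbit should settle quickly at the positive equilibrium x* = 1, while the uncontrolled orbit does not.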
if assumption [ as:3 ] is not satisfied , the function is monotone increasing up to the unique positive point equilibrium , and thus all solutions converge to the positive equilibrium , and the convergence is monotone .if all , we have a monotonically decreasing sequence .if we fix in and and consider the growing , the equation loses stability and experiences transition to chaos through a series of period - doubling bifurcations .the article has the following structure . in section 2we relax the control parameter , replacing it with the variable control sequence , and yielding the equation we identify a range over which may vary deterministically while still ensuring the global asymptotic stability of the positive equilibrium .we confirm that , without imposing any constraints on the range of values over which the control sequence may vary , there exists an invariant interval , containing , under the controlled map .we then introduce constraints on terms of the sequence which ensure that all solutions will eventually enter this invariant interval . in section 3 ,we assume that the variation of around is bounded and stochastic , which results in a pbc equation with multiplicative noise of intensity . after identifying constraints on and under which a domain of local stability for exists for all trajectories ,we demonstrate that the presence of an appropriate noise perturbation in fact ensures that almost all trajectories will eventually enter this domain of local stability , hence providing global a.s .asymptotic stabilisation of .the known range of values of under which this stabilisation occurs is larger than for the deterministic pbc equation , and in this sense the stochastic perturbation improves the stabilising properties of pbc . in section 4 , we suppose that the noise is acting systemically rather than through the control parameter , which results in a pbc equation with an additive noise . in this settingit is possible to show that , under certain conditions on the noise intensity , the noise causes a `` blurring '' of the positive equilibrium in the sense that the controlled solutions will enter and remain within a neighbourhood of , and the size of that neighbourhood can be made arbitrarily small by an appropriate choice of .finally , section 5 contains some simulations that illustrate the results of the article , and a brief summary .we begin by relaxing the control variable in the deterministic pbc equation , both as a generalisation to equations of form and to support our analysis of the system with stochastically varying control in section [ sec:3 ] .deterministic pbc equation with variable control parameter may be written in the form where the following result extends ( * ? ? ? * theorem 2.2 ) to develop conditions on the magnitude of variation of for solutions of to approach the positive equilbrium at some minimum rate .[ lem : pbc ] let assumption [ as : slope ] hold and each satisfy , where let be any solution of with . then 1 .the sequence is non - increasing ; 2 .if there is for which , for any 3 .if in addition assumption [ as:3 ] holds , there exists such that for and where we address each part in turn . 
*first , we prove convergence in the case where the signs of are eventually constant : solutions eventually remain either above or below the positive equilibrium .suppose that there exists such that for .then the subsequence is monotone increasing , since by assumption [ as : slope ] and , + next , we consider the case when the terms of change signs infinitely often .note that we need to take into consideration only the indices where and or and . at any where , we have . subsequences of that do not switch in this way will approach monotonically , as proven above. we must prove that at these switches as well .+ suppose first that and . then necessarily , since otherwise it is also the case that , since + note that implies . since , it follows that , and we have from and that by similar reasoning , if and we have and .therefore thus , is a non - increasing sequence , and part ( i ) of the statement of the lemma is verified .* is a decreasing positive sequence if no terms of the sequence coincide with . if for all then .this implies in turn that the left - hand side of tends to zero , and so the right - hand side also tends to zero . from and continuity of , we have , so the limit can only be . the case where , for all is treated similarly . if then , which implies therefore , and part ( ii ) of the statement of the lemma is confirmed .* let assumption [ as:3 ] hold .by , for any there exists some such that , for , , and thus .further we consider only .also , it has been established above that under the common conditions holding for parts ( i)-(iii ) in the statement of the lemma , is decreasing .let be an index where a switch across the equilibirum occurs , i.e. .then , from the analysis above , if , then for all , so is satisfied in this situation .it remains to consider the case where , .suppose first that , for some .then ] into , and satisfies by lemma [ lem : pbc ] , part ( i ) , thus , maps the interval ] .next , let . by lemma [ lem : pbc ] , part ( iii ), for ] .note first that if assumptions [ as : slope ] and [ as:3 ] hold , the maximum of on is attained on ] to be the smallest point where the maximum of is attained : \left| f(x)= \max_{s \in [ 0,\infty ) } f(s ) \right\ } \right . ; \ ] ] 2 . to be the value of this maximum : 3 . to be the image of under : by assumption [ as:3 ] , decreases on and , thus [ lemma_mu ] suppose that assumptions [ as : slope ] and [ as:3 ] hold , and let be the pbc map defined in . for any ] . by parts ( 1 ) and ( 2 ) of definition [ def :mus ] , we have for any ] , we consider the subintervals ] in turn . if ] , due to the fact that is decreasing on this interval , .thus , \right)\subseteq [ \mu_1,\mu_2] ] , and we conclude that \right)\subseteq [ \mu_1,\mu_2] ] .we will use this approach to obtain global stochastic stability conditions later in the article .[ lemma_add2 ] suppose that assumptions [ as : slope ] and [ as:3 ] hold and there exists such that for every .then , for each , there exists such that the solution of satisfies ] .it will then follow from lemma [ lemma_mu ] that ] .suppose next that .if there is an such that , then either we revert to the previous case or ] for .let be a complete , filtered probability space , and let be a sequence of independent and identically distributed random variables with common density function .the filtration is naturally generated by this sequence : , for . among all sequences of random variables we consider those for which is -measurable for all .we use the standard abbreviation `` a.s . 
'' for the wordings `` almost sure '' or `` almost surely '' with respect to . in this section , we allow the control parameter to vary stochastically , by setting for each , where controls the intensity of the perturbation and the sequence additionally satisfies the following assumption . [ as : chi1 ] let be a sequence of independent and identically distributed continuous random variables with common density function supported on the interval ] in an -dependent number of steps and stays there forever .in part ( ii ) we prove that each trajectory then enters ] in fewer than steps .* let be the constant associated with the map in assumption [ as:3 ] , and let and be as defined by and in definition [ def : mus ] .since for any ] is an integer part of .next , we denote as in and fix some satisfying denote where was defined in and from which it follows that . additionally denote + 2.\ ] ] from , we have where can be chosen , for example , to satisfy the inequality so the conditions of lemma [ lemma_add2 ] are satisfied and we can deduce that for each trajectory , there is a finite such that ] , we have and hence at least one of is in ] and successive terms of the subsequence satisfy we have . for the proofwe assume that at least one of , is not less than , otherwise ] .it follows from and that on this trajectory satisfies , for .+ applying lemma [ lem : pbc ] with , as chosen in in place of , and defined in , we arrive at for . if ] , now we bring together all parts of the proof . in part( i ) , we verified that there exists a finite random number such that , for any , ] for all . in part ( ii ), we proved that there exists ] , then all , for . + since , then by in corollary [ new_add1 ] , assumptions [ as:3 ] and [ as : chi1 ] ( ) , and , if ] for any . +* we prove that for any ] then the following is true for all : either ] for all , denote + 1.\ ] ] by lemma [ cor : barprob ] , =1\ ] ] note that if is such that for some , ] for all , impossible .+ if then , by assumption [ as:3 ] , , so , by our choice of , together with part ( i ) , this allows us to conclude that \right]=1.\ ] ] * we may now proceed to prove . by in the statement of corollary[ new_add1 ] , .\ ] ] by parts ( i ) and ( ii ) we only need to consider trajectories belonging to the a.s .event referred to in . introducing the nonnegative sequence with , we notice that there are two possibilitiesif then and therefore all successive terms will be on the interval .2 . if then and thus is a positive decreasing sequence for as long as each .+ hence , either has a limit satisfying , by , or it eventually drops below .therefore , for any , =1,\ ] ] which immediately implies , and the statement of the lemma . 
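before the closing corollary and the numerical experiments reported below, a short simulation sketch of the stochastically perturbed pbc iteration may be useful. the precise placement of the noise is elided in the extraction, so the sketch assumes the two natural readings discussed in the text: a multiplicative perturbation acting through the control parameter (alpha replaced by alpha + ell * chi) and an additive systemic perturbation added to the state, with chi uniform on [ -1 , 1 ]. all parameter values are illustrative assumptions.

```python
import math
import random

def ricker(x, r=3.0):
    return x * math.exp(r * (1.0 - x))

def noisy_pbc(x0, alpha, ell, n_steps=2000, additive=False, seed=1):
    """pbc iteration with a stochastically perturbed control parameter
    (multiplicative case) or with systemic noise added to the state
    (additive case).  the exact noise form used in the paper is elided
    above, so this is an assumed form with chi uniform on [-1, 1]."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        chi = rng.uniform(-1.0, 1.0)
        if additive:
            x = (1.0 - alpha) * ricker(x) + alpha * x + ell * chi
        else:
            a = min(max(alpha + ell * chi, 0.0), 1.0)   # keep the perturbed control in [0, 1]
            x = (1.0 - a) * ricker(x) + a * x
        x = max(x, 0.0)                                  # population stays non-negative
    return x

print("multiplicative noise :", noisy_pbc(0.3, alpha=0.7, ell=0.1))
print("additive noise       :", noisy_pbc(0.3, alpha=0.7, ell=0.01, additive=True))
```

the first run should settle at the equilibrium x* = 1 (the fixed point is unaffected by perturbing the control), while the second should keep hovering in a small neighbourhood of it, which is the blurring effect discussed in the additive-noise section.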
finally we show that it follows from lemma [ lem : add_noise ] that the neighbourhood of into which solutions eventually settle can be made arbitrarily small by placing an additional constraint on the noise intensity .suppose that assumptions [ as : slope ] , [ as:3 ] and [ as : chi1 ] ( with ) hold , and that let be any solution of equation with .for any , there exists satisfying such that ,\ , n\geq\mathcal{n}\right]=1.\ ] ] let us choose in the statement of lemma [ lem : add_noise ] , then a reference to in lemma [ lem : add_noise ] completes the proof .our numerical experiments are mostly concerned with the stabilising effect of the multiplicative noise .first , let us illustrate stabilisation of the chaotic ricker model using pbc with multiplicative noise .[ ex_ricker_1 ] consider the chaotic ricker map with .as mentioned in example [ ex_ricker ] , inequality is satisfied with , while for and , holds . as , we can take in further computations , will be satisfied .thus , according to theorem [ multi_local ] , we should choose such that , let us take , , .then , , fig .[ figure1 ] shows fast convergence of solutions to the equilibrium .next , let us take , for example , .[ figure2 ] illustrates the dynamics of the ricker equation with deterministic pbc ( ) , and the multiplicative uniformly distributed noise with the growing perturbation amplitudes . as in ( ) with and multiplicative stochastic perturbations , where , , , and ( from left to right ) , and uniformly distributed noise.,title="fig : " ] as in ( ) with and multiplicative stochastic perturbations , where , , , and ( from left to right ) , and uniformly distributed noise.,title="fig : " ] as in ( ) with and multiplicative stochastic perturbations , where , , , and ( from left to right ) , and uniformly distributed noise.,title="fig : " ] as in ( ) with and multiplicative stochastic perturbations , where , , , and ( from left to right ) , and uniformly distributed noise.,title="fig : " ] finally , let us fix , and increase .the distribution function of is chosen to be , where is uniformly distributed on .as leads to , half of the perturbations are negative .we can observe the stabilising effect of larger in fig .[ figure3 ] . as in ( ) with and multiplicative stochastic perturbations , where , , and ( left ) , ( middle ) , ( right ) .,title="fig : " ] as in ( ) with and multiplicative stochastic perturbations , where , , and ( left ) , ( middle ) , ( right ) .,title="fig : " ] as in ( ) with and multiplicative stochastic perturbations , where , , and ( left ) , ( middle ) , ( right ) .,title="fig : " ] denote consider the function , \\\frac{f(0.99)}{x+0.01 } , & \mbox { if~~ } x \in ( 0.99 , \infty ) . \end{array } \right.\ ] ] thus , \\ \frac{100 f(0.99)}{100x+1 } , & \mbox { if~~ } x \in ( 0.99 , \infty ) . \end{array } \right .\label{singfunc}\ ] ] following , we notice that has a locally stable fixed point together with a locally stable period two orbit . here . in fig .[ figure5a ] , we illustrate the function introduced in on the segment ] perturbations is , is uniformly distributed in $ ] we observe that the stochastic perturbation can make a locally ( though not globally ) asymptotically stable equilibrium , globally asymptotically stable .an important condition of this global stability is that there is a neighbourhood of the equilibrium which is invariant for any perturbations . 
on the other hand, the occasional perturbations amplitude should be large enough to leave the stable 2-orbit .if we increase the amplitude to , the process of attraction of solution to the locally stable equilibrium is faster , see fig .[ figure5 ] , right . as in ( ) and multiplicative stochastic perturbations with ( left ) , and ( right ) , .,title="fig : " ] as in ( ) and multiplicative stochastic perturbations with ( left ) , and ( right ) , .,title="fig : " ] 1 .as expected , in the presence of either multiplicative or additive stochastic perturbations , the unique positive equilibrium can become blurred .however , for a class of maps that includes commonly occurring models of population dynamics , stochasticity can contribute to the stability of this equilibrium .first , the bounds of the control parameter for which any solution of the controlled system converges to this ( blurred ) equilibrium expand .the second relevant issue is that even in the case when the positive equilibrium of the deterministic equation is not globally attractive , its blurred version can become attractive under perturbations , as numerical examples illustrate .j. a. d. appleby , c. kelly , x. mao and a. rodkina , positivity and stabilization for nonlinear stochastic delay differential equations , _ stochastics : an international journal of probability and stochastic processes _ , * 81*:1 ( 2009 ) , 2954 .
we consider the influence of stochastic perturbations on stability of a unique positive equilibrium of a difference equation subject to prediction - based control . these perturbations may be multiplicative if they arise from stochastic variation of the control parameter , or additive if they reflect the presence of systemic noise . we begin by relaxing the control parameter in the deterministic equation , and deriving a range of values for the parameter over which all solutions eventually enter an invariant interval . then , by allowing the variation to be stochastic , we derive sufficient conditions ( less restrictive than known ones for the unperturbed equation ) under which the positive equilibrium will be globally a.s . asymptotically stable : i.e. the presence of noise improves the known effectiveness of prediction - based control . finally , we show that systemic noise has a `` blurring '' effect on the positive equilibrium , which can be made arbitrarily small by controlling the noise intensity . numerical examples illustrate our results . * ams subject classification : * 39a50 , 37h10 , 34f05 , 39a30 , 93d15 , 93c55 * keywords : * stochastic difference equations ; prediction - based control , multiplicative noise , additive noise
the efficient distribution of resources is a challenging problem relevant for many types of complex networks .the examples include social networks , power transmission grids , communication systems , and road infrastructures .physicists , in the past years , have studied their structure and considerably contributed to the understanding of processes going on these networks .the efficient immunization against the epidemic spreading of diseases and strategies for failure prevention are important topics with many practical implications in real systems .scientists have recently demonstrated the benefits of targeted and acquaintance immunization in scale - free networks , have studied the applicability of `` flooding '' dissemination strategies based only on local information , and have proposed efficient strategies for eliminating cascading effects in networks .in contrast to these works we would like focus on interdependent systems and on the spreading dynamics of disastrous events between the networked components .disastrous events are bothering mankind from the earliest days .the ability to recover the functionality of damaged infrastructures promptly is crucial for survival and determines , whether the affected areas will overcome the consequences of catastrophe or not .emergency response and recovery call for external resources , which are limited and , therefore , have to be deployed as efficiently as possible. the question how to effectively distribute resources in order to fight disasters best has already been addressed by many researchers . as examples ,we mention the redistribution of medical material , the mitigation of large - scale forest fires , and the fighting of floods .an experimental study of disasters under real world conditions is almost impossible , and therefore , mathematical and computer models are often very helpful tools to extend human knowledge. however , the complexity of systems struck by disasters does not allow one to model the interactions of all involved entities and processes in detail .therefore , we have to capture them by an appropriate generic model .disastrous events are often characterized by cascading failures propagating in the system due to the causal dependencies between system components .these casual dependencies result from structural and functional interdependencies and can be modeled by directed networks .note that there were several attempts to quantify such networks for particular cases , using interaction network approaches or fuzzy cognitive maps .loops in these networks are crucial , since the amplification of negative effects through the loops may considerably deteriorate the situation .such loops are sometimes called `` vicious circles '' .the above mentioned view of disasters has led us to the formulation of a general spreading model of failures in networks . to assess the importance of the availability of information about the network on the efficiency of disaster recovery , in this paper we will study the effect of different protection strategies .these strategies are based on different information evaluation and control the distribution of resources over the system components . 
as parameters in our model , we consider the overall quantity of resources , the recovery time delay , and the network topology .as our simulations did not give qualitatively different results for varying link weights , we will not discuss the case of heterogeneous network links in this paper .our presented simulation results rather focus on the average efficiency of the considered strategies and on the `` worst - case '' scenario , which is given by the most `` unfriendly '' realization of all random parameters .our paper is organized as follows : sec .[ sec : mod ] presents our mathematical model of disaster spreading . in sec .[ sec : mob ] , we describe the mobilization process of resources . disaster recovery modeling issues and protection strategies are discussed in sec .[ sec : cri ] , while the results of our computer simulations are presented in sec .[ sec : res ] . to conclude this paper , sec .[ sec : con ] summarizes the most important findings and outlines possible directions of future research .in this section , we briefly summarize our model of disaster spreading originally proposed in .the model is based on a graph of interconnected system components .the directed links , with , represent structural and functional dependencies between the components .the state of a node at time is described by a continuous variable , where corresponds to a normal functioning of the component .the deviation from this state , caused by disturbances , represents the level of challenge of system component . at the present stage of abstraction, we do not consider diverse functionalities of the components and we assume an additive impact of external disturbances coming from neighboring components .each real system exhibits a natural level of resistance to challenges .we reflect this tolerance by a special threshold and assume that a node tends to fail , when the sum of all disturbances acting on it exceeds this value . rather than by a discontinuous step function we describe this by the sigmoidal function },\ ] ] where is a `` gain parameter '' . the interactions between the components are quantified by the connection strengths and by the link transmission time delays .the overall dynamics of a node is then given by : where the first term on the right - hand side models the ability of component to recover from perturbations and the second term describes the superposition of all pertubative influences by adjacent nodes on node . if , the recovery term tends to drive back to zero .the recovery rate characterizes the speed of the recovery process at node .the function introduces an additional weight to reflect that the impact of highly connected neighboring nodes is smaller , because their influence is distributed among many nodes and in this way `` dissipated '' . is the out - degree of node while and are fit parameters .the disturbances , as they are transmitted over the links , can be strengthened or weakened by different factors like for instance the time delays or physical properties of the surrounding . the intensity of this process can , in our model , be controlled by the parameter . in the experimentswe have used , which corresponds to relatively weak damping of disturbances on links .our simulation studies were performed for four types of directed networks representing different systems .specifically , we have studied networks such as regular ( grid ) networks , random networks , scale - free networks , and small - world networks . 
only regular ( grid ) networks were specified with bidirectional links .the directed scale - free networks were generated using the algorithm by bollobs , borgs , chayes , and riordan , where the attachment of new node is controlled by probabilities , , with and by non - negative parameters and .these parameters have been set to , , , and .small - world networks have been generated using the procedure described in ref .this procedure slightly generalized the generation of _ undirected _ small - world graphs proposed by watts and strogatz : in contrast to their original algorithm , we have randomly assigned directions to links , with probabilities for clockwise and counter - clockwise direction of 0.3 each , while a bidirectional link has been assumed with probability 0.4 . finally , a random rewiring procedure with rewiring probability has been applied .in addition , we have generated random networks of the erds - rnyi type .all networks have been generated in a way that the resulting average node degree was approximately 3.6 .the grid network was organized in 25 rows each containing 20 nodes . throughout this paper , all computer - generated networks are composed of 500 nodes . moreover, our homogeneous parameter settings assume that all and all , where a link from node to exists , otherwise .the time delays are -distributed , where we have chosen for the number of degrees freedom of the -function .however , the distribution was stretched multiplying by factor 0.05 and shifted by adding the value 1.2 in order to get an average delay of .let us assume that the emergency forces and all material flows are entering the affected area continuously in time .this process can be modeled by a continuous function , which defines , how much resources have reached an affected area at time .the shape of this function is an essential point of our model , because the prompt mobilization of resources a has strong influence on the efficiency of countermeasures . despite the frequent occurrence of disasters, we found only a few publications that provide a detailed information about the progress of mobilization in time .for example , fig .[ fig : data ] shows the manpower and vehicles , which were involved in the recovery activities to fight the elbe river flooding in germany in august 2002 .both curves are quantitatively similar and can be well approximated using the function , where , and are fit parameters .the mobilization itself is represented by the growing part of the curve . to reflect the progress of mobilization of external resources in our simulations, we have used the approximate fit curve for manpower .besides time progress of the mobilization , further important parameters are the overall quantity of external resources and the response time .the response time is the time interval between the occurrence of the initial disastrous event and the first provision of resources .the resources used for the recovery are assumed to be distributed in time according to the manpower data presented in fig .1 . we have normalized the magnitude of this curve according the total amount of resources , keeping its shape .the time period during which the distribution of resources takes place was set to half of the simulation time horizon , i.e. time steps . 
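to make the model of the two preceding sections concrete, the sketch below integrates a drastically simplified instance of the spreading dynamics together with a damage-based reinforcement rule in the spirit of the uniform reinforcement of challenged nodes described in the next section. the precise functional forms (the sigmoid, the weight function, the link delays and the mobilization curve) are elided in the extraction, so the code uses plausible stand-ins: a logistic threshold, a random directed graph, no transmission delays, and a recovery rate that is simply raised on challenged nodes. everything here is an illustrative assumption rather than the paper's calibration.

```python
import math
import random

def sigmoid(x, theta=0.5, gain=10.0):
    """logistic threshold: close to 0 below theta, close to 1 above it
    (the paper's exact form is elided; this is a stand-in)."""
    return 1.0 / (1.0 + math.exp(-gain * (x - theta)))

def simulate(n=100, k=4, q0=0.2, q_boost=1.0, coupling=0.6,
             dt=0.1, steps=3000, reinforce_challenged=False, seed=7):
    """euler integration of a simplified spreading model:
    dx_i/dt = -q_i * x_i + coupling * mean over in-neighbours j of sigmoid(x_j).
    q_i is raised from q0 to q0 + q_boost on challenged nodes while the
    reinforcement flag is on.  all values are illustrative assumptions."""
    rng = random.Random(seed)
    in_nbrs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]  # random directed graph
    x = [0.0] * n
    x[0] = 5.0                                        # initial local disturbance (illustrative)
    for _ in range(steps):
        q = [q0 + (q_boost if (reinforce_challenged and x[i] > 0.1) else 0.0)
             for i in range(n)]
        new_x = []
        for i in range(n):
            inflow = coupling * sum(sigmoid(x[j]) for j in in_nbrs[i]) / k
            new_x.append(max(0.0, x[i] + dt * (-q[i] * x[i] + inflow)))
        x = new_x
    return sum(1 for v in x if v > 0.5)               # nodes still seriously challenged

print("challenged nodes without reinforcement :", simulate())
print("challenged nodes with reinforcement    :", simulate(reinforce_challenged=True))
```

in this toy setting the run without reinforcement typically ends with essentially the whole network challenged, while the reinforced run contains the cascade; the actual numbers depend on the random graph and are not meant to reproduce the paper's results.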
.the best fit parameters for manpower ( top ) are , , , while for vehicles ( bottom ) they are , , .,title="fig:",scaledwidth=38.0% ] .the best fit parameters for manpower ( top ) are , , , while for vehicles ( bottom ) they are , , .,title="fig:",scaledwidth=38.0% ]disasters come mostly unexpected , and the first moments after their occurrence are characterized by a high uncertainty in the estimation of the overall impact .crisis management coordinates the work of all emergency units and often has to take decisions based on scarce information .this requires a reliable organization in term of information flows , their evaluation and the choice of appropriate respose strategies . to uncover what information is most important for efficient disaster response , we study here the properties of several recovery strategies , allocating the resources to components based on different information . as first kind of information ,let us consider the knowledge of the component s connectivity , i.e. the out - degrees and in - degrees of the nodes .this information allows one to uncover those components , which influence most other components and those which are easily vulnerable , because they have many in - going links . as second kind of informationwe assume that the locations and seriousness of malfunctions in the network are well - known .this information reflects the current level of node damage and allows one to prioritize the nodes which are more seriously damaged .considering these two kinds of information , we have formulated the following recovery strategies : * _ uniform dissemination _ , i.e. each node gets the same amount of resources , * _ out - degree based dissemination _ ,i.e. the resources are distributed over nodes proportionally to their out - degrees , * _ uniform reinforcement of challenged nodes _ , i.e. all nodes with are equally provided with resources , * _ simple targeted reinforcement of destroyed nodes _ , i.e. damaged nodes ( ) are equally provided with resources with priority , while challenged nodes ( ) are uniformly reinforced if no damaged nodes exist , * _ simple targeted reinforcement of highly connected nodes _ , i.e. a fraction of highly connected nodes is uniformly provided with resources by using the fraction of all resources , while the remaining resources are applied according to strategy , * _ out - degree based targeted reinforcement of destroyed nodes _ , i.e. application of strategy , but with a distribution of resources proportional to the out - degrees of nodes rather than a uniform distribution .equation ( [ eq : node ] ) represents the mitigation activities in the nodes by the recovery rates .it models a situation without additional external forces sent to challenged system components to perform mitigation actions .thus , at the beginning it is assumed that the mitigation activities are weak ( ) , because they are based only on internal resources . if these internal resources are not sufficient to cope with the evolving disaster , external resources have to be mobilized .the assignment of external resources to a node is assumed to increase the recovery rate of a node according to our model assumes that , once resources have been assigned to a node , they will remain at the selected node and are not reassigned again . 
in eq .( [ eq : healing ] ) , the cumulative amount of resources assigned to node is denoted by .the formula reflects the fact that each new unit of resources has a smaller effect than the previous one , which is due to the decreasing efficiency of recovery activities , when the concentration of forces grows .these effects are well - known and may be explained by increasing efforts for communication and the coordination of forces .the influence of this effect is represented by the fit parameter .the parameter defines an upper bound of the recovery rate . when developing formula ( [ eq : healing ] ) , we have required the following : 1 .resources have only positive influence on the state of the node . in other words ,the function should grow monotonously with the parameter .2 . when there are no resources applied in node i.e. , , then .3 . finally , we expect a limited speed of recovery process .in fact , for we have .formula ( [ eq : healing ] ) obeys all three conditions , and we expect qualitatively similar results for all continuous functions satisfying these conditions .we have extensively studied the properties of protection strategies by means of computer simulations . due to the existence of random parameters , such as , the results of the simulation experiments varied with the realizations of the random variables .experiments started at time , when the variable of one randomly selected node was set to the value for 10 time units .figure [ fig : damage ] shows an example how the average number of damaged nodes than develops in the course of time .the existence of hubs causes that the perturbation propagates much faster in scale - free networks than in grids , but on the other hand , the protection strategies work more efficiently , when they can focus on these highly connected nodes . to assess the behavior of our model ,we have evaluated the most unpleasant scenario , which occurs when we consider the most unfriendly realization of the random parameters .one possible characteristics , which reflects this `` worst - case '' scenario , is the dependence of the minimal quantity of resources required to recover the network on the response time delay .it defines a success threshold for each considered strategy . except for this, we have evaluated the average damage of the respective network .therefore , all experiments have been performed with the same simulation time horizon ( ) . ) for scale - free networks and a regular grid networks , applying different protection strategies .dashed - dotted line : no disposition of resources for recovery .solid line : strategy .long - dashed line : strategy .short - dashed line : strategy .dotted line : strategy .the value of the response time was set to and the overall disposition of resources to ( apart from the dashed - dotted line , where r = 0).,title="fig:",scaledwidth=38.0% ] ) for scale - free networks and a regular grid networks , applying different protection strategies .dashed - dotted line : no disposition of resources for recovery .solid line : strategy .long - dashed line : strategy .short - dashed line : strategy .dotted line : strategy .the value of the response time was set to and the overall disposition of resources to ( apart from the dashed - dotted line , where r = 0).,title="fig:",scaledwidth=38.0% ] in this subsection we determine the minimum required resources as a function of response strategy and the network topology , and we study how changes when the response time delay increases . 
is the minimum quantity of resources which guarantees the complete recovery of the network for each particular scenario .we estimate this quantity by performing a huge amount of numerical calculations separately for each studied network . in each simulation run , the location of the initial disturbance and the time delays are randomly varied . to obtain , we use the bisection method ..[tab : tab1]values of obtained for strategies and .the rows correspond to the different network types : square grid ( gr ) , small - world networks ( sw ) , erds - rnyi networks ( er ) , and scale - free networks ( sf ) .the variance in data was obtained by moving over values .[ cols="<,^,^,^,^,^ , > " , ] as the simplest strategies and do not take into account the current level of damage , the failures propagate over the whole network , and the minimum required resources are independent of the response time delay .the values are listed in table [ tab : tab1 ] .strategy demands the highest disposition of resources in scale - free structures .this adverse behavior of scale - free networks arises due to the difficulties in the recovery of hubs and can be eliminated by preferential reinforcement of nodes with high out - degrees ( compare the values of strategies and ) . for the damage - based strategies and ( see fig .[ fig : rmin ] ) we observe two basic types of behavior : within the studied range of response time delays , the values of are either growing , or they stay approximately constant . if they are growing with , the resources are sufficient to repair the network before the failures affected the whole network . in the region where does not change significantly with increasing ,damage spreads all over the network .therefore , the resources required to restore the failure - free state of the network are always the same .our data show the highest spreading velocity for scale - free networks and the slowest spreading for regular grids .the erds - rnyi and small - world networks are somewhere in between and the transition point between the growing and the constant part of represents the critical value of beyond which failures paralyze the complete network .small - world networks and , to some degree , scale - free networks as well show a decrease of for large values of the response delay time , which is surprising ( see fig . [fig : rmin ] , strategy ) .this decrease indicates the unbalanced distribution of resources , where there is a surplus of resources in some nodes and a deficit elsewhere .the relationship between the velocity of failure propagation and resources mobilization is crucial for damage - based protection strategies .the spreading velocity is increased by the existence of a small - world effect , which is based on the existence of long - range links ( shortcuts ) . over these shortcuts , failures spread very fast to distant parts of the network .consequently , the resources must be distributed over a large area .however , if is small , they are deployed less uniformly , because the majority of resources is deployed during the time when only a small part of the network is affected by failures . in such situations , we can find groups of interconnected nodes , which have been less provided with resources .later on , these nodes require an additional effort to be repaired .in contrast , when is large , the resources are distributed more uniformly and the overall demanded quantity of resources is smaller . 
in practice , this calls for a precise assessment of the propagation velocity and mobilization rates , which is possible only when the eventually occurring damages can be identified in advance .taking into account information about the network structure , which determines the possible sequence of failure occurrence , this problem can be significantly reduced ( see fig .[ fig : rmin ] , strategy ) . needed to recover a challenged network as a function of the response time delay .squares correspond to bidirectional grid networks , plus signs to scale - free networks , multiplication signs to erds - rnyi networks and circles to small - world networks .the inset shows obtained for scale - free networks after the applying of strategy ( , ),title="fig:",scaledwidth=38.0% ] needed to recover a challenged network as a function of the response time delay .squares correspond to bidirectional grid networks , plus signs to scale - free networks , multiplication signs to erds - rnyi networks and circles to small - world networks .the inset shows obtained for scale - free networks after the applying of strategy ( , ),title="fig:",scaledwidth=38.0% ] .[ fig : rmin ] in order to decrease the spreading velocity in scale - free networks , we suggest to apply strategy , which stresses the protection of highly connected nodes . employing a simple heuristic algorithm ,we have found values of the parameters and , which minimize .the reduction is highest for and ( see fig . [fig : rmin ] ) .although strategy utilizes the detailed information about the current damage and network structure , the values of for scale - free networks are larger for small values of compared to other networks treated by strategies and . on the other hand , for long response time delays , the smallest disposition of resources is sufficient to recover scale - free networks . between the application of the efficient strategy and the inefficient strategy .the dashed line corresponds to parameter combinations for which the difference between the strategies is 20% , while the solid line corresponds to a difference of 80% .the curves have been obtained by simulations using the bisection method.,scaledwidth=48.0% ] before we compare the efficiency of the different disaster response strategies , we will shortly discuss the influence of the strategy parameters on the efficiency of the recovery strategies and take a look at the probabilistic distribution of damage . a shortage of resources or a large response time delay hardly be compensated for , even by sophisticated protection strategies . in the fig .[ fig : data2 ] , we compare the typical damage when applying strategy or strategy .strategy was found to be the most efficient one in simulation experiments , while strategy was the most inefficient one ( see below ) .the damage related to strategy , was quantified by the time integral over the number of destroyed nodes .all results in this subsection are expressed through the average damage , where we varied the initially disturbed node .our results show only small differences between the strategies , when is large or is small. 
however the overall damage of strategies and declines , when r grows and decreases .the superiority of strategy over the strategy is most significant in the region of large resources and short response delays .thus , improvements in the protection strategy have the highest effect when the response time delay and the disposition of resources for recovery are within reasonable limits , while late response can not be compensated for even by the best strategies .similar results have been found for smallpox outbreaks in social networks . for a sample of numerical experiments for erds - rnyi networks with a fixed disposition of resources and different values of .the dashed line corresponds to , the dashed - dotted line to , and the solid line to .,scaledwidth=38.0% ] a growing response time delay has a strong impact on the distribution of damage .when we fix the amount of resources and vary , the damage is typically distributed in the way shown in fig .[ fig : dist ] . for small values of ,the recovery process is able to repair the network in a very short time ( dashed line ) . for intermediate values of two distinct situationsare observed ( dashed - dotted ) : depending on the initial disturbance and on the random parameters , the spreading is either quickly stopped and the network is recovered . or , the recovery process is not able to interrupt cascade failure over the entire network , when the number of infected nodes exceeds a certain quantity . for , the systemis still repaired , but much later , than for small values of .thus , for intermediate response time delays we can expect a big discrepancy between the damage in the best and the worst case scenario .this behavior strongly reminds of the initial phase of real disasters , where an apparently irrelevant event like a small social conflict , a thrown cigarette or a delayed disposal of waste can , under similar conditions , either vanish without any significant impact or trigger riots , forest fires or epidemics . in order to answer the question which strategies are more proper for which kinds of networks , we have compared the average damage for a matrix of parameter combinations ( with and ) . asthe behavior of small - world networks is very similar to erds - rnyi networks , we omitted them in fig .[ fig : comp ] .the strategy has been particularly suited for scale - free networks to reach the minimum disposition of resources required for network recovery .this strategy is most efficient for values of close to . for erds - rnyi , small - world and grid networks ,the success of this strategy depends on the respective values of .strategy is relatively effective , when is small and is large ( note , that for this combination of parameters the differences between the strategies are very small , see fig .[ fig : data2 ] ) .however , when is large , strategy performs poorly , due to the excessive provision of resources to a small group of nodes regardless of the damage .the most universal and also most effective of all investigated strategies is strategy . on the other hand ,( together with strategy ) , it also requires the most detailed information .the overall results of our comparison can be summarized as follows : if we have the option to choose whether to orient the disaster recovery strategy at the network structure or at the current damage , then , regular grids with a small spreading velocity are protected best by strategies reacting to the level of damage . 
in contrast , for scale - free networks it is more effective to take the network structure into account .the choice of the proper strategy for erds- rnyi and small - world structures depends on the response time delay . for short time delays, there is a good chance to reduce the spreading by preferential protection of damaged nodes , but when the time delay is large and many nodes have already been affected , the damage is minimized by protection of nodes with high out - degrees .disaster recovery and the operation of inter - connected infrastructures involve an intricate decision making where each action can invoke a variety of hardly predictable reactions .here the network type plays an important role , and the theory of complex systems and the statistical physics of networks offer powerful methods .these allow one to gain a better understanding of the dynamics of disaster spreading and to derive valuable results how to fight them best . in this paper, we have specifically studied the efficiency of several strategies to distribute resources for the recovery of disaster - struck networks .these strategies use information about the network structure and knowledge about the current damage . as main parameters ,we have considered the overall quantity of resources and the response time delay . by means of simulations ,we have determined the minimum disposition of resources , which is necessary to stop disaster spreading and recover from it .the behavior of scale - free networks was found to be ambiguous . in comparison with other network structures ,the highest quantity of resources for recovery is needed in case of small response time delays , while the required disposition of resources is smallest for large time delays .when the response time delay and disposition of resources are within reasonable limits , the optimization of protection strategies has the largest effect .furthermore , strategies oriented at the network structure are efficient for scale - free networks , while strategies based on the damage are more appropriate for regular grid networks .the suitable strategy for erds - rnyi and small - world networks depends on the response time delay . in case of short time delays ,the damage reduction is higher for damage - based strategies , whereas strategies oriented at information about the network structure are better for large response time delays .therefore , we expect that the properties of response strategies could be further improved by switching between different strategies in time . this will be a subject of our forthcoming investigations .the authors are grateful for partial financial support by the german research foundation ( dfg project he 2789/6 - 1 ) and the eu projects irriis and mmcomnet .10 j. davidsen , h. ebel , and s. bornholdt , phys .lett . * 88 * , 128701 ( 2002 ) .a.e . motter and y.c .lai , phys .e * 66 * , 065102 ( 2002 ) .v. rosato and f. tiriticco , europhys. lett . * 66 * , 471 , ( 2004 ) .m. newman , s. forrest , and j. balthorp , phys .e * 66 * , 035101 ( 2002 ) .v. kalapala , v. sanwalani , a. clauset , and ch .moore , phys.rev .e * 73 * 026130 ( 2006 ) .z. dezso and a.l .barabasi , phys .e * 65 * , 055103 ( 2002 ) .satorras and a. vespignani , phys .e * 65 * , 036104 ( 2002 ) .j. goldenberg , y. shavitt , e. shir , and s. solomon , nature physics * 1 * , 184 ( 2005 ) .r. cohen , s. havlin , and d. avraham , phys .lett . * 91 * , 247901 ( 2003 ) .a. o. stauffer and v. barbosa , phys.rev .e * 74 * 056105 ( 2006 ) .motter , phys .lett . 
* 93 * , 098701 ( 2004 ) .m. schfer , j. scholz , and m. greiner , phys .lett . * 96 * , 108701 ( 2006 ) .tuson , r. wheeler , and p. ross , in _ proceedings of the second international conference on genetic algorithms in engineering systems : innovations and applications ( galesia 97 ) _( ieee , 1997 ) , p. 245 .p. fiorucci , f. gaetani , r. minciardi , r. sacil , and e. trasforini , in _ proceedings of 15th international workshop on database and expert systems applications _( ieee computer society , washington , 2004 ) , p. 603 .e.g. altmann , s. hallerberg , and h. kantz , physica a * 364 * , 435 ( 2006 ) .d. helbing , h. ammoser , and c. khnert , in _ the unimaginable and unpredictable : extreme events in nature and society _ , edited by s. albeverio , v. jentsch , and h. kantz ( springer , berlin , 2005 ) .d. helbing and c. khnert , physica a * 328 * , 584 ( 2003 ) .papageorgiou , e.p .konstantinos , s.s .chrysostomos , p.p .groumpos , and m.n .vrahatis , j. intell .syst . * 25 * , 95 ( 2005 ) .l. buzna , k. peters , and d. helbing , physica a * 363 * , 132 ( 2006 ) .the function is motivated by the assumption that the impact of highly connected nodes is distributed among many neighboring nodes and , therefore , may decrease with the outdegree . a simple linear dependence would be more appropriate in cases , when the spreading depends on the transmission of some conserved quantity like , for example , in electrical circuits or in road traffic .however , in some cases the spreading does not obey any conservation law , as for the spreading of forest conflagrations or epidemics .since we do not consider any concrete scenario in this paper , we decided to use a formula that is more general than a linear function .other decaying functions in are expected to have qualitatively similar effects .b. bollobas , c. borgs , j .chayes , and o. riordan , in _ proceedings of the 14th acm - siam symposium on discrete algorithms ( soda ) _( soc . for industrial & applied math . ,baltimore , 2003 ) , p. 132 .t. murai , master thesis , aoyama gakuin university ( japan ) , 2003 .watts and s.h .strogatz , nature ( london ) * 393 * , 440 ( 1998 ) .s. eubank , h. guclu , v.s.a .kumar , m.v .marathe , a. srinivasan , z. toroczkai , and n. wang , nature ( london ) * 429 * , 180 ( 2004 ) .bericht der unabhngigen kommission der schsischen staatregierung flutkatastrophe 2002 .d. helbing , h. ammoser , and c. khnert , physica a * 363 * , 141 ( 2006 ) .d. stauffer and p.m. oliveira , int .c * 17 * , 09 , 1367 , ( 2006 ) .
we study the effectiveness of recovery strategies for a dynamic model of failure spreading in networks . these strategies control the distribution of resources based on information about the current network state and network topology . in order to assess their success , we have performed a series of simulation experiments . the considered parameters of these experiments are the network topology , the response time delay and the overall disposition of resources . our investigations are focused on the comparison of strategies for different scenarios and the determination of the most appropriate strategy . the importance of prompt response and the minimum sufficient quantity of resources are discussed as well .
broadcasting a message from a source node to every other node of a network is one of the most basic communication primitives .since this operation should be performed by making use of a both sparse and fast infrastructure , the natural solution is to root at the source node a _shortest - path tree _ ( spt ) of the underlying graph. however , the spt , as any tree - based network topology , is highly sensitive to a link / node malfunctioning , which will unavoidably cause the disconnection of a subset of nodes from the source . to be readily prepared to react to any possible ( transient ) failure in a spt , one has then to enrich the tree by adding to it a set of edges selected from the underlying graph , so that the resulting structure will be 2-edge / vertex - connected w.r.t .the source .thus , after an edge / vertex failure , these edges will be used to build up the alternative paths emanating from the root , each one of them in replacement of a corresponding original shortest path which was affected by the failure . however , if these paths are constrained to be _, then it can be easily seen that for a non - negatively real weighted and undirected graph of nodes and edges , this may require as much as additional edges , also in the case in which . in other words , the set - up costs of the strengthened network may become unaffordable .thus , a reasonable compromise is that of building a _ sparse _ and _ fault - tolerant _ structure which _ accurately approximates _ the shortest paths from the source , i.e. , that contains paths which are longer than the corresponding shortest paths by at most a multiplicative _ stretch _ factor , for any possible edge / vertex failure .the aim of this paper is to show that very efficient structures of this sort do actually exist .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + let denote a distinguished source vertex of a non - negatively real weighted and undirected graph .we say that a spanning subgraph of is an _ edge - fault - tolerant -approximate spt _ ( in short , -easpt ) , with , if it satisfies the following condition : for each edge , all the distances from in the subgraph are -stretched w.r.t .the corresponding distances in .vertex failures _ are considered , then the easpt is correspondingly called vaspt .our work is inspired by the paper of parter and peleg , which were concerned with the same problem but on _ unweighted _ graphs ( and so they were focusing on the construction of an _ edge - fault - tolerant -approximate breadth - first search tree _ ( in short , -eabfs ) . in that paperthe authors present a -eabfs having at most edges .rooted at the source node , but , as we will point out in more detail later , a -easpt of size at most ( and then , _ a fortiori _ , a -eabfs of the same size ) , can actually be obtained as a by - product of the results given in .] moreover , the authors also present a set of lower and upper bounds to the size of -eabfs , i.e. , edge - fault - tolerant structures for which the length of a path is stretched by at most a factor of plus an additive term of . 
finally , assuming at most edge failurescan take place , they show the existence of a -eabfs of size .on the other hand , if one wants to have an _ exact _ edge - fault - tolerant spt ( say espt ) , then as we said before this may require edges .this is now in contrast with the unweighted case , where it can be shown the existence ( see ) of an _ edge / vertex - fault - tolerant bfs _( say ebfs / vbfs ) of size , where denotes the eccentricity of in . in the same paper, the authors also exhibit a corresponding lower bound of for the size of a ebfs .moreover , they also treat the _ multisource _case , i.e. , that in which we look for a structure which incorporates an ebfs rooted at each vertex of a set . for this , they show the existence of a solution of size , which is tight . finally , the authors provide an -approximation algorithm for constructing an optimal ( in terms of size ) ebfs ( also for the multisource case ) , and they show this is tight .as far as the vertex - failure problem is concerned , in the authors study the related problem of computing _ distance sensitivity oracles _ ( dso ) structures .designing an efficient dso means to compute , with a _ low _ preprocessing time , a _compact _ data structure which is functional to _ quickly _ answer to some distance query following a component failure .classically , dso cope with single edge / vertex failures , and they have to answer to a point - to - point post - failure ( approximate ) distance query , or they have to report a point - to - point replacement short(est ) path .in particular , in the vertex - failure case w.r.t . a spt is analyzed , and the authors compute in time a dso of size , that returns a 3-stretched replacement path in time proportional to the path s size . as the authors specify in the paper , this dso can be used to build a -vaspt of size , and a -vabfs of size .actually , we point out that the latter structure can be easily sparsified so as to obtain a -eabfs of size : in fact , its size term is associated with an auxiliary substructure that , in the case of edge failures , can be made of linear size .this result is of independent interest , since it qualifies itself as the best current solution for the eabfs problem .[ [ our - results . ] ] our results .+ + + + + + + + + + + + our main result is the construction in polynomial time of a -vaspt of size , for any .this substantially improves on the -vaspt of size given in . to obtain our result, we perform a careful selection of edges that will be added to an initial spt .the somewhat surprising outcome of our approach is that if we accept to have slightly stretched fault - tolerant paths , then we can drastically reduce the size of the structure that we would have to pay for having fault - tolerant _ shortest _ paths ! actually , the analysis of the stretch factor and of the structure s size induced by our algorithm is quite involved .thus , for clarity of presentation , we give our result in two steps : first , we show an approach to build a -easpt of size , then we outline how this approach can be extended to the vertex - failure case .furthermore , we also focus on the unweighted case , and we exhibit an interesting connection between a fault - tolerant bfs and an _ -spanner_. an -spanner of a graph is a spanning subgraph of such that _ all _ the intra - node distances in are stretched by at most a multiplicative factor of and an additive term of w.r.t . 
the corresponding distances in .we show how an ordinary -spanner of size can be used to build in polynomial time an -eabfs and an -vabfs of size and , respectively . as a consequence ,the eabfs problem is easier than the corresponding ( non fault - tolerant ) spanner problem , and we regard this as an interesting hardness characterization .notice also that for all the significant values of and , the size of an -spanner is , which essentially means that the vabfs problem is easier than the corresponding spanner problem as well .this bridge between the two problems is useful for building sparse -vabfs structures by making use of the vast literature on additive -spanners .for instance , the -spanner of size given in , and the -spanner of size given in , can be used to build corresponding vertex - fault - tolerant structures . another interesting implication arises for the multisource eabfs problem . indeed , given a set of multiple sources , the -spanner of size can be used to build a multisource -eabfs of size .this allows to improve , for , the multisource -eabfs of size given in : indeed , it suffices to plug - in in our method the -spanner of size given in .[ [ other - related - results . ] ] other related results .+ + + + + + + + + + + + + + + + + + + + + + besides fault - tolerant ( approximate ) spt and bfs , there is a large body of literature on fault - tolerant short(est ) paths in graphs .a natural counterpart of the structures considered in this paper , as we have seen before , are the dso . for recent achievements on dso, we refer the reader to , and more in particular to , where single - source distances are considered .another setting which is very close in spirit to ours is that of _ fault - tolerant spanners_. in , for weighted graphs and any integer , the authors present a -spanner resilient to vertex ( resp . , edge ) failures of size ( resp . , ) .this was later improved through a randomized construction in . on the other hand , for the unweighted case ,in the authors present a general result for building a -spanner resilient to edge failures , by unioning an ordinary -spanner with a fault - tolerant -spanner resilient against up to edge faults .finally , we mention that in it was introduced the resembling concept of _ resilient spanners _ , i.e. , spanners such that whenever any edge in fails , then the relative distance increases in the spanner are very close to those in , and it was shown how to build a resilient spanner by augmenting an ordinary spanner .we start by introducing our notation . for the sake of brevity , we give it for the case of edge failures , but it can be naturally extended to the node failure case . given a non - negatively real weighted , undirected , and 2-edge - connected graph , we will denote by or the weight of the edge .we also define .given an edge , we denote by or ( resp . , or ) the graph obtained from by removing ( resp ., adding ) the edge .similarly , for a set of edges , ( resp . , ) will denote the graph obtained from by removing ( resp . , adding ) the edges in .we will call a shortest path between two vertices , its ( weighted ) length , and a spt of rooted at .whenever the graph and/or the vertex are clear from the context , we might omit them , i.e. , we will write and instead of and , respectively . 
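as a concrete rendering of this notation , the sketch below ( the toy graph and its weights are made up for illustration ) computes the distances from the source , extracts a shortest - path tree rooted at it , removes one tree edge , and reports the replacement distances and the resulting stretches in the surviving graph .

....
# notation sketch: d_G(s,v), the SPT, and distances in G - e after a tree-edge failure
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("s", "a", 1.0), ("s", "b", 2.0), ("a", "b", 1.5),
    ("a", "c", 2.0), ("b", "c", 2.5), ("c", "d", 1.0), ("b", "d", 3.0),
])

dist = nx.single_source_dijkstra_path_length(G, "s")          # d_G(s, v)
paths = nx.single_source_dijkstra_path(G, "s")
T = nx.Graph([(p[i], p[i + 1]) for p in paths.values() for i in range(len(p) - 1)])

e = ("a", "c")                               # a tree edge; "c" is the far endpoint
assert T.has_edge(*e)
H = G.copy()
H.remove_edge(*e)                            # the graph G - e
dist_after = nx.single_source_dijkstra_path_length(H, "s")    # d_{G-e}(s, v)

for v in sorted(G):
    stretch = dist_after[v] / dist[v] if dist[v] > 0 else 1.0
    print(v, dist[v], dist_after[v], round(stretch, 2))
....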
when considering an edge of an spt we will assume and to be the closest and the furthest endpoints from , respectively .given an edge , we define , and to be , respectively , a shortest path between and , its length , and a spt in the graph .moreover , if is a path from to and is a path from to , with , we will denote by the path from to obtained by concatenating and .given , a vertex , and an edge , we denote by and the partition of induced by the two connected components of , such that contains and , and contains .then , will denote the _ cutset _ of , i.e. , the set of edges crossing the cut . for the sake of simplicity we consider only edge weights that are strictly positive .however our entire analysis also extends to non - negative weights . throughout the rest of the paperwe will assume that , when multiple shortest paths exist , ties will be broken in a consistent manner .in particular we fix a spt of and , given a graph and , whenever we compute the path and ties arise , we will prefer the edges in .we will also assume that if we are considering a shortest path between and passing through vertices and , then . compute a 3-`easpt ` of size using the algorithm in sect .3.1.1 of . first , we give a high - level description of our algorithm for computing a -`easpt ` ( see algorithm [ alg:1_epsilon_ftspt ] ) .we build our structure , say , by starting from an spt rooted at which is suitably augmented with at most edges in order to make it become a -`easpt ` .then , we enrich incrementally by considering the tree edge failures in preorder , and by checking the disconnected vertices . when an edge fails and a vertex happens to be too stretched in w.r.t .its distance from in , we add a suitable subset of edges to , selected from the new shortest path to .this is done so that we not only adjust the distance of , but we also improve the stretch factor of a _ subset _ of its predecessors .this is exactly the key for the efficiency of our method , since altogether , up to a logarithmic factor , we maintain constant in an amortized sense the ratio between the size of the set of added edges and the overall distance improvement .let us now provide a detailed description of our algorithm . to build the initial -`easpt` , it augments by making use of a _ swap algorithm _ devised in .more precisely , in that paper the authors were concerned with the problem of reconnecting in a best possible way ( w.r.t . to a set of distance criteria ) the two subtrees of an spt undergoing an edge failure , through a careful selection of a _ swap edge _ ,i.e. , an edge with an endvertex in each of the two subtrees . in particular, they show that if we select as a swap edge for with closer to the source than the edge that lies on a shortest path in from to , then the distances from the source towards all the disconnected vertices is stretched at most by a factor of 3 .therefore , a -easpt of size at most can be obtained by simply adding to a spt rooted at a such swap edge for each corresponding tree edge , and interestingly this improves the -easpt of size at most provided in . then , our algorithm works in _ phases _ , where each phase considers an edge of w.r.t . to a fixed preorder of the edges ,say . in the -th phase ,the algorithm considers the failure of , and when a vertex happens to be too stretched in w.r.t . , then we say that is _ bad _ for and we add a suitable subset of edges to .these edges are selected from and they always include the last edge of . 
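the initial swap - edge augmentation described above can be sketched as follows ; this is only an illustration of the stated rule ( one swap edge per tree edge , taken from a shortest path to the detached endpoint in the graph without that edge ) , not the authors' implementation , and tie - breaking is left to the library .

....
# sketch: SPT plus one swap edge per tree edge (at most 2(n-1) edges overall)
import networkx as nx

def swap_augment(G, s):
    dist, paths = nx.single_source_dijkstra(G, s)
    T = nx.Graph()
    for p in paths.values():
        T.add_edges_from(zip(p, p[1:]))
    out = T.copy()
    for (u, v) in list(T.edges()):
        if dist[u] > dist[v]:
            u, v = v, u                                  # v = endpoint farther from s
        Tcut = T.copy(); Tcut.remove_edge(u, v)
        below = nx.node_connected_component(Tcut, v)     # subtree detached by the failure
        H = G.copy(); H.remove_edge(u, v)                # the graph G - e
        try:
            P = nx.dijkstra_path(H, s, v)                # replacement shortest path
        except nx.NetworkXNoPath:
            continue                                     # e is a bridge of G
        for a, b in zip(P, P[1:]):
            if a not in below and b in below:            # first edge crossing the cut
                out.add_edge(a, b)
                break
    return out

if __name__ == "__main__":
    G = nx.cycle_graph(6)                                # small 2-edge-connected example
    print(swap_augment(G, 0).number_of_edges())          # bounded by 2*(n-1)
....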
we now show that this suffices to prove the correctness of the algorithm : [ lemma:1_epsilon_ftspt_correctness ] the structure returned by the algorithm is a -`easpt ` .let be the structure built by the algorithm just before a bad vertex for an edge is considered .assume by induction that , for every vertex in already considered in phase , we have .let be the last edge of and recall that is always added to .hence we have : it remains to describe the edge selection process and to analyze the size of our final structure .let be the initial -`easpt ` structure .let us fix the failed edge and a single bad vertex for .we call the structure built by the algorithm just before is considered .let be the unique edge in .consider the subpath of going from to and let be its vertices , in order .we consider the set , we name its vertices with , in order and we let ( see figure [ fig : tree_edge ] ) . we define .it follows from the definitions and from lemma [ lemma:1_epsilon_ftspt_correctness ] that we have , for and .think of the edges in as being directed towards for a moment . in the followingwe will describe how to select the set of edges used by the algorithm .in particular , we will select edges entering into the last vertices in .this choice of will ensure that the overall decrease of the values in will be at least where denotes the -th_ harmonic number_. when a bad vertex for the failing edge is considered .bold edges belong to while the black path is . ]we exploit the fact that , after adding the set , each `` new value '' with , will not be larger than as we will show in the following .consider the sequence where .notice that the sequence is monotonically increasing from to .let be the largest index such that .notice that always exists as and that .we set so that the set is defined accordingly .let be the set of vertices for which an incoming edges has been added in .for every vertex we define the following path in : .notice that is entirely contained in .we define , and note that is an upper bound to the stretch of in .[ lemma : alpha_order ] for , . by definition of , we have .now we prove : we now lower - bound the overall decrease of the values s w.r.t . the corresponding s by using the following inequalities : where in the last but one step we used the well - known equality that for every , .the above selection procedure is repeated by the algorithm for every failed edge and for every corresponding bad vertex .we now focus on the -th phase of the algorithm .let be the union of all the sets used when considering the bad vertices of the phase .moreover let and notice that . for a vertex ,let be the _path built by the algorithm , as defined above .let ( resp . , ) be the structure built by the algorithm at the end ( resp . ,start ) of the phase and let be the number of new edges added during the phase . by summing over all the bad vertices for edge , we have : [ lemma : stretch_decrease ] .now , let us define a function for every : the proofs of next three lemmas are postponed to the full version of the paper .[ lemma : stretch_bound ] for every we have .[ lemma : phi_ub ] for , .[ lemma : path_improves ] for , .we now prove the following : [ lemma : stretch_delta ] .by lemmas [ lemma : stretch_decrease][lemma : path_improves ] , and since the initial structure is a -`easpt ` , we have : we now define a global potential function : for . 
notice that we trivially have .the structure returned by the algorithm is a -`easpt ` of size .the fact that is a -`easpt ` follows from lemma [ lemma:1_epsilon_ftspt_correctness ] . concerning the size of ,since contains edges , we only focus on bounding the number of edges in . using lemma [ lemma : stretch_delta ], we can write : unfolding the previous recurrence relation we obtain : which we finally solve for to get .in this section we extend our previous -`easpt ` structure to deal with vertex failures . in order to doso we will build a different subgraph having suitable properties that we will describe .then we will use the natural extension of algorithm [ alg:1_epsilon_ftspt ] where we consider ( in preorder ) vertex failures instead of edge failures .we now describe the construction of and then argue how the previous analysis can be adapted to show the same bound on the size of .the structure is initially equal to and it is augmented by using a technique similar to the one shown in : the spt of is suitably decomposed into ancestor - leaf vertex - disjoint paths .then , for each path , an approximate structure is built .this structure will provide approximate distances towards any vertex of the graph when any vertex along the path fails .the union of with all those structures will form . fix a path of the previous decomposition starting from a vertex , andlet be the subtree of rooted at .moreover , let be a failing vertex , and let be the next vertex in .is not a leaf , as otherwise is already a spt of . ]we partition the vertices of the forest into three sets : ( i ) the _ up set _ containing all the vertices of the tree rooted at , ( ii ) the _ down set _ containing all the vertices of the tree rooted at , and ( iii ) the _ others set _ containing all the remaining vertices ( see figure [ fig : tree_vertex ] ) . when a bad vertex for the failing vertex is considered .bold edges belong to while the black path is .notice that all belong to the down set . ]we want to select a set of edges to add to . in order to do so ,we construct a spt of and we imagine that its edges are directed towards the leaves .we select all the edges of that do not lead to a vertex in , plus the unique edge of that crosses the cut induced by the sets and .notice that contains all the paths in towards the vertices in , and that each vertex has at most one incoming edge in .this implies that the number of selected edges is at most .the above procedure is repeated for all the failing vertices of , in order . as the sets associated with the different vertices are disjoint we have that , while processing , at most edges are selected .we use the path decomposition described in that can be recursively defined as follows : given a tree , we select a path from the root to a leaf such that the removal of splits the tree into a forest where the size of each subtree is at most half the size of the original tree . we than proceed recursively on each subtree . using this approach ,the size of the entire structure can be shown to be .we now prove some useful properties of the structure .first of all , observe that , by construction and similarly to the edge - failure case , we immediately have : [ lemma : vertex_stretch ] consider a failed vertex and another vertex .we have : ( i ) , and ( ii ) for , it holds .moreover , we also have the following ( proof postponed to the full version of the paper ) : [ lemma : bad_vertices_down ] consider a failed vertex . 
during the execution of the vertex - version of algorithm [ alg:1_epsilon_ftspt ] , every bad vertex for will be in . at this point ,the same analysis given for the case of edge failures can be retraced for vertex failures as well .we point out that lemma [ lemma : bad_vertices_down ] ensures that every bad every for is in the same subtree as . also notice that all the vertices s are , by definition , in the same subtree as well ( see figure [ fig : tree_vertex ] ) .the above , combined with lemma [ lemma : vertex_stretch ] ( i ) , is needed by the proof of lemma [ lemma : stretch_bound ] , while lemma [ lemma : vertex_stretch ] ( ii ) is used in the proof of lemma [ lemma : stretch_delta ] .hence we have : the vertex - version of algorithm [ alg:1_epsilon_ftspt ] computes a -`vaspt ` of size .in this section we turn our attention to the unweighted case , and we provide two polynomial - time algorithms that augment an -spanner of so to obtain an -`eabfs`/`vabfs ` .we present the algorithm for the vertex - failure case and show how it can be adapted to the edge - failure case .the algorithm first augments the structure computed so as explained in section [ section : vaspt ] and then adds its edges to the -spanner of .the structure is augmented as follows .the vertices of the bfs of rooted at are visited in preorder .let be the vertex visited by the algorithm and let be the set of vertices of the tree defined so as explained in section [ section : vaspt ] w.r.t the path decomposition computed for .for every , the algorithm checks whether contains no vertex of and .if this is the case , then the algorithm augments with the edge of incident to .[ fact : node_failure_bfs ] for every vertex and every vertex such that contains a vertex in , let and be the first and last vertex of that belong to , respectively .we have and .[ th : from_spanner_towards_vabfs ] given an unweighted graph with vertices and edges , a source vertex , and an -spanner for of size , the algorithm computes an -`vabfs ` w.r.t . of size .now , we adapt the algorithm to prove a similar result for the -`eabfs ` .the algorithm first augments a bfs tree of rooted at and then adds its edges to the -spanner of .the tree is augmented by visiting its edges in preorder .let be the edge visited by the algorithm .for every , the algorithm checks whether contains no vertex of and .if this is the case , then the algorithm augments with the edge of incident to . in the full version of the paperit will be shown that the proof of theorem [ th : from_spanner_towards_vabfs ] can be adapted to prove the following : given an unweighted graph with vertices and edges , a source vertex , and an -spanner for of size , the algorithm computes an -`eabfs ` w.r.t . of size less than or equal to .g. ausiello , p.g .franciosa , g.f .italiano , and a. ribichini , on resilient graph spanners , _ proc . of the 21st european symp . on algorithms ( esa13 )_ , vol . 8125 of lecture notes in computer science , springer , 8596 , 2013 .g. braunschvig , s. chechik , and d. peleg , fault tolerant additive spanners , _ proc .of the 38th workshop on graph - theoretic concepts in computer science ( wg12 ) _ , vol .7551 of lecture notes in computer science , springer , 206214 , 2012 .s. chechik , m. langberg , d. peleg , and l. roditty , -sensitivity distance oracles and routing schemes , _ proc .of the 18th european symp . on algorithms ( esa10 )6942 of lecture notes in computer science , springer , 8496 , 2010 .f. grandoni and v. 
vassilevska williams , improved distance sensitivity oracles via fast single - source replacement paths , _ proc . of the 53rd annual ieee symp . on foundations of computer science ( focs12 ) _ , 748 - 757 , 2012 .
the resiliency of a network is its ability to remain _ effectively _ functional even when any of its nodes or links fails . however , to reduce operational and set - up costs , a network should be small in size , and this conflicts with the requirement of being resilient . in this paper we address this trade - off for the prominent case of the _ broadcasting _ routing scheme , and we build efficient ( i.e. , sparse and fast ) _ fault - tolerant approximate shortest - path trees _ , for both the edge and vertex _ single - failure _ cases . in particular , for an -vertex non - negatively weighted graph , and for any constant , we design two structures of size which guarantee -stretched paths from the selected source even in the presence of an edge / vertex failure . this compares favorably with the currently best known solutions , which are for the edge - failure case of size and stretch factor 3 , and for the vertex - failure case of size and stretch factor 3 . moreover , we also focus on the unweighted case , and we prove that an ordinary -spanner can be slightly augmented in order to build efficient fault - tolerant approximate _ breadth - first - search trees_.
the reliability of storage systems is usually a foremost concern to implementers and users .data loss events can be extremely costly , consider for example the value of data in systems storing financial or medical records .for this reason , storage systems must be engineered such that the chance of an irrecoverable data loss is extremely low , perhaps on the order of one chance in million per year of operation .these immensely high reliabilities , however , prohibit real - world testing due to the fact that it would require a huge number of these systems to be evaluated for an extremely long time to empirically measure with any degree of accuracy .therefore , implementers of storage systems must rely on mathematical models for gauging the reliability of their designs .it is important that the model used neither over- nor under - estimate reliability .if the model overestimates reliability , then the system will be more prone to data loss than expected .if the model underestimates reliability , the system will be designed with an excessive level of fault - tolerance and therefore be overly expensive .we evaluated two models used in reliability analysis , one presented by chen et al. and another given by angus .we found that while the chen and angus methods agree for systems with a fault - tolerance of zero or one , beyond that , the models diverge by a factorial of the system s fault - tolerance .this paper is organized as follows : in the _ background _ section , we introduce basic concepts in reliability analysis and define the meaning of notation used throughout this paper .next we introduce two models used in reliability analysis , one presented by chen et al . which is commonly used in the analysis of data storage systems and a more general model presented by angus for analyzing the reliability of -of- systems . to judge the applicability of these models we show simulations of how long systems take to fail and discuss the resultsfollowing that , we derive a new model which the simulation shows to have superior accuracy to both the chen and angus models .lastly , we mention some areas for further improvement to the model we give .the term _ reliability _ , as used in this paper , refers to the probability of correct operation over a given period of time , where correct operation is defined as the absence of an irrecoverable data loss .gibson showed that for systems with a constant failure rate , the following exponential function may be used to estimate reliability over time given the _mean - time - to - data - loss _ ( ) of the system : . knowing this function , the main difficulty in estimating reliability becomes accurately estimating the system s . in this analysis , it is important to distinguish between a data loss and an _ irrecoverable _ data loss .hard drives and tapes inevitably fail and these are instances of data loss . however ,storage systems are capable of recovering from such failures so long as the number of failures does not exceed the system s fault - tolerance . when the number of failures exceeds a system s fault - tolerance , data can no longer be read and therefore lost data can no longer be recovered .the rate at which individual components ( hard drives , tapes , etc . 
) fail is denoted as while the rate at which those components are repaired is .alternatively , these two rates might instead be expressed as times : _ mean - time - to - failure _ ( ) and _ mean - time - to - repair _ ( ) respectively .when and are constant over time , meaning exponentially distributed , and . throughout this paper and will only be used to refer to the and of components , never the system . to increase a system s the of components should be as long as possible while the should be as short as possible .while implementers of storage systems can choose the components to use , once selected , they have little control over the or . is usually defined by the manufacturer of the component and little to nothing can be done to increase it . , on the other hand , may be improved to an extent . is the sum of the service time and rebuild time . by using hot spares , the service timemay be reduced to near zero and by prioritizing rebuild i / o over normal i / o requests , rebuild time may be minimized . for storage devices , however , rebuild time has an ultimate floor defined by the i / o rate of the device .consider that if a 1 tb disk fails , rebuilding it requires writing 1 tb of data . if the i / o rate of the disk is 100 mb / s it will take a minimum of 2.9 hours to recover .given the limited control implementers have over and the most important consideration for reaching a target is choosing an appropriate level of fault - tolerance for the system .the simplest method for achieving fault - tolerance in a storage system is by using replication .that is , create some number ( ) of copies of the data and store each copy to a different component .a replicated data storage system can tolerate the failure of components , so long as one copy remains , the failed copies can be remade . in this respect , implementers have complete control over the fault - tolerance , and by extension , the reliability of the systems they create .achieving fault - tolerance by making replicated copies , however , is very inefficient .other more advanced methods are known for achieving fault - tolerance , such as raid 5 , raid 6 , and erasure codes . unlike copy - based systems which require 1 of the components to remain operational ,these systems require of components to remain operational , where with being fault - tolerance . with an accurate model for estimating , implementers may engineer systems to meet the reliability requirements using a minimum level of fault - tolerance . by minimizing fault - tolerancethe storage system will be more efficient as less redundant information need be stored or calculated .all reliability models presented in this paper are capable of estimating from the four metrics : , , and .one might ask , what level of fault - tolerance is sufficient for any practical purpose ? unfortunately , no answer remains true indefinitely .consider that as disk capacities have grown , their performance has not kept pace . this has resulted in a very large increase in disk repair times . whereas it took 57 seconds to read an entire 40 mb disk in 1991, it takes 3.3 hours on a modern 750 gb drive . 
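two of the quantities just discussed are easy to check numerically ; the short sketch below ( round numbers , illustration only ) evaluates the rebuild - time floor set by capacity divided by i / o rate , and the exponential reliability estimate for a given mttdl and mission time .

....
# rebuild-time floor and exponential reliability estimate (illustrative numbers)
import math

capacity_bytes = 1e12            # 1 TB drive
io_rate = 100e6                  # 100 MB/s rebuild throughput
floor_hours = capacity_bytes / io_rate / 3600
print(f"minimum rebuild time: {floor_hours:.1f} h")   # about 2.8 h with these round numbers

def reliability(mttdl_hours, t_hours):
    """Probability of surviving t_hours without irrecoverable data loss."""
    return math.exp(-t_hours / mttdl_hours)

print(f"R(1 year) = {reliability(1e7, 24 * 365):.6f}")   # e.g. MTTDL of 10^7 hours
....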
to cope with these longer repair times ,systems have had to increase their level of fault - tolerance .raid 5 was sufficient when disks could be rebuilt in minutes ; now raid 6 is required to keep an acceptable reliability as rebuilds take hours or days .another factor causing the fault - tolerance of storage systems to increase is the sheer size of storage systems being built today .the resulting decrease in reliability as storage capacity increases is linear ; with all else being equal , a system storing 2 pb of data on 2,000 disks has twice the chance of experiencing data loss as a system storing 1 pb on 1,000 disks . storing more datainherently carries a higher risk of loss , and with storage requirements doubling every 24 months the reliability of all that data is halved every 24 months .lastly , disk failures do not always manifest as complete operational failures .elerath showed that a more common path to data lass is via latent failures , caused by improper writes or degradation of the media over time .latent failures , more commonly known as _unrecoverable read errors _ ( ures ) result when a drive is unable to correctly read some sector .the rate at which ures manifest is generally reported to be between and per bit read .this means that even if the ure rate remains constant , as disk sizes grow the likelihood of encountering a ure will increase .consider a raid 6 ( 8 + 2 ) array composed of 1 tb disks .after two disk failures all eight of the remaining disks must be read perfectly without error . with a ure rate of , the chance of being able to read this amount of data without error is given by : this means that about half the time , the system will encounter a ure during rebuild and therefore experience data loss .even though raid 6 can supposedly recover from double disk failures , factoring in ures one finds that half the time it can not .therefore the true reliability of this raid 6 array is only marginally better than a system with a fault - tolerance of one .the net result is that when considering increasing disk capacities , larger storage systems , and the growing risk of ures , one may conclude that increasing levels of fault - tolerance will be required in the future to simply maintain the same level of reliability .therefore , it is important that the reliability model used by storage system implementers be able to accurately model highly - fault tolerant systems .chen et al . presented models for estimating the mttdl for various raid configurations , including raid 0 ( no parity ) , raid 5 ( single parity ) andraid 6 ( dual parity ) . as founders in the field of raid ,their model has seen wide adoption by those in the storage industry . in the paper ,zero redundancy raid 0 systems are said to have a mttdl equal to the mttf of individual disks divided by the number of disks : they further presented models for raid 5 , and raid 6 arrays : a clear pattern emerges in the progression of increased fault - tolerance .looking at the above formulas , one sees that the mttr term is taken to the power of the fault - tolerance ( ) while the mttf is taken to the power of . to account for the multiplication of ... we may use the factorial operator to find : , recalling that . therefore we may obtain a generalization of the chen model which works for any arbitrary and : it is straightforward to see that when , one obtains the raid 0 formula . if one sets or , one derives the raid 5 or raid 6 formulas respectively .j. e. 
angus published a paper titled `` on computing mtbf for a k - out - of - n : g repairable system'' .his model is more general than those given by chen et al .but nonetheless each may be used to estimate time to failure for data storage system .there is , however , a difference between what the angus and chen models attempt to calculate .the chen model calculates mttdl which if we stated more generally , is the _ mean - time - to - first - failure _ ( mttff ) .this is an important distinction because after the first failure in a data storage system , the system can not be repaired because data is lost .the angus model does not assume this , and therefore allows repair from cases where more than ( ) devices have failed .this is why angus defines the result as _ mean - time - between - failures _ ( mtbf ) rather than mttff , his model finds the average amount of time between failures over an infinite amount of time . in many situations , the mttff will be very close to the mtbf , and in those cases the angus model may be used to accurately model mttdl . later in this paper , we will explore the conditions under which this assumption is not valid .below is the formula angus gave in his paper .note that this is as it appeared in the original notation , where he used and instead of mttf and mttr : if we substitue and with the notation used by chen , the angus formula becomes : note that for cases when , as is normally the case for disk drives , the summation component of the formula rapidly converges to zero .given that for most cases , mttf will be a time in years and the mttr a time in hours , the ratio of will be in the thousands for typical cases and therefore , if iterations beyond are ignored , the result will only deviate by a few thousandths .this level of accuracy is acceptable for most purposes , and therefore when one may simplify the angus formula as : this formula looks very similar to the one given by chen .each has in the numerator , and in the denominator .where they differ is in their treatment of the and terms .chen gives : while angus gives : by decomposing the term used in the angus model one obtains : the only difference between the simplified angus model and the chen model is that there is an extra term in the numerator of . therefore the mttdl predicted by chen s model will be a factorial of the fault - tolerance times less than the mttdl predicted by angus s model . for raid 0 , and raid 5 systems , and are both , so no difference is observed between their predictions . however , for raid 6 systems with a fault tolerance of 2 , chen s model will yield a mttdl one half of what angus will give .this difference is arguably minor , but what about for highly fault - tolerant systems that are now becoming possible via erasure codes ?it was shown in the introduction that increased levels of fault - tolerance will be required for very large storage systems .systems have already been developed which can support much higher levels of fault - tolerance .one such system has a standard configuration of 10-of-16 . with a fault tolerance of 6, the chen and angus predictions will differ by a factor of ( 720 ) when estimating this system s mttdl .why is this so , and which prediction is right ? to see why these models give different predictions , it helps to look at how the models were derived . 
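the displayed formulas were lost from this copy , so the sketch below reconstructs them from the surrounding description and should be read as an illustration of the claimed factorial gap rather than a verbatim transcription : both expressions share a numerator of mttf raised to one more than the fault - tolerance and a denominator containing the product n(n-1)...(n - f) times mttr raised to the fault - tolerance , with the simplified angus form carrying an extra factor of f! .

....
# reconstructed (hedged) Chen vs. simplified Angus MTTDL estimates
from math import factorial, prod

def mttdl_chen(n, f, mttf, mttr):
    # MTTF**(f+1) / (n*(n-1)*...*(n-f) * MTTR**f)
    return mttf ** (f + 1) / (prod(n - i for i in range(f + 1)) * mttr ** f)

def mttdl_angus_simplified(n, f, mttf, mttr):
    # differs from the Chen estimate by a factor of f! (as argued above)
    return factorial(f) * mttdl_chen(n, f, mttf, mttr)

mttf, mttr = 1.0e6, 10.0        # hours
for f in range(7):
    ratio = mttdl_angus_simplified(16, f, mttf, mttr) / mttdl_chen(16, f, mttf, mttr)
    print(f"f={f}: angus/chen = {ratio:.0f}")   # 1, 1, 2, 6, 24, 120, 720
....

for a fault - tolerance of 6 the ratio evaluates to 720 , matching the figure quoted above .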
_ _ _ _ the angus model explicitly states the assumption that there are unlimited repairmen .this means that whether 1 device or 100 fail simultaneously , each failed device will be repaired at a constant rate .the chen model , on the other hand , appears to assume the per - device repair rate is inversely proportional to the number of failed disks. there is , however , no fundamental reason why this should be the case , as each drive has its own independent i / o resources . if two disks are simultaneously being rebuilt , the rebuild process may write to both of the disks at twice the rate it could write to a single failed disk .another possible consideration made in the chen model is that because each disk has a fixed repair time , the first disk to be repaired will necessarily be the first disk to have failed .the speed at which the first disk is repaired does not increase according to the number of failed disks , so why should the repair rate increase ? if anything is clear , it is that these models can not both be correct . to attempt to verify the validity of one of these two models , a monte carlo simulation of system failure time was createdthe goal of the simulation is to model the failures and repairs of independent devices until such time that more than devices are simultaneously in a failed state .failures are random and exponentially distributed over time according to the mttf . repairs for each device take a constant amount of time equal to mttr , such that mttr time after a device s failure it will be operational . .... random_ttf ( ) return mttf * -ln(random(0,1 ) ) initialize ( ) fail_times : = fail_time[n ] for each fail_time in fail_times : fail_time : = random_ttf ( ) count_failures(start , end ) count : = 0 for each fail_time in fail_times : if ( start < = fail_time < = end ) : count : = count + 1 return count simulate_time_to_data_loss ( ) while true : nf : = min(fail_times ) count : = count_failures(nf , nf+mttr ) if count > n - k : dl : = time of ( n - k+1)-th failure break nf : = nf + mttr + random_ttf ( ) return dl .... the first method , _ random_ttf ( ) _ generates a random failure time for a device given the device mttf . because the simulation assumes failures are exponentially distributed , the negative of the natural logarithm of a random value between 0 and 1 produces a multiplier for the device mttf .this function is used by the _ initialize ( ) _ method to assign random failure times to each of the devices .what the main simulation loop does is first check the time of the next failure , then it counts the number of failures occurring in the range from that failure time until the time that device is repaired .if the number of failures exceeds the fault - tolerance of the system , the loop breaks , and the time of the failure that pushes the system above its fault - tolerance is returned .otherwise , the failure time for the device is advanced by adding the mttr and another random time to failure .when the loop continues , the same process will be run for the next failing device .it is important to note that a full run of this simulation returns only one random time to data loss . 
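a directly runnable rendering of this pseudocode is sketched below ; it follows the listed logic as closely as it can be read here and is not the authors' original code . the parameter values in the demonstration are deliberately small so that it finishes quickly , and the final helper already averages many independent runs , anticipating the next paragraph .

....
# runnable rendering of the Monte Carlo pseudocode above (sketch, not original code)
import math
import random

def random_ttf(mttf):
    # exponentially distributed time to failure with mean mttf
    return mttf * -math.log(1.0 - random.random())

def simulate_time_to_data_loss(n, k, mttf, mttr):
    """One random time until more than n-k devices are simultaneously failed."""
    fail_times = [random_ttf(mttf) for _ in range(n)]
    while True:
        nf = min(fail_times)
        window = sorted(t for t in fail_times if nf <= t <= nf + mttr)
        if len(window) > n - k:
            return window[n - k]             # time of the (n-k+1)-th failure
        i = fail_times.index(nf)             # repair the earliest failure ...
        fail_times[i] = nf + mttr + random_ttf(mttf)   # ... then it fails again later

def estimate_mttdl(n, k, mttf, mttr, runs=2000):
    return sum(simulate_time_to_data_loss(n, k, mttf, mttr) for _ in range(runs)) / runs

if __name__ == "__main__":
    random.seed(0)
    # small MTTF/MTTR ratio so the demo completes quickly (see the remark in the text)
    print(estimate_mttdl(n=8, k=6, mttf=1.0, mttr=0.1))
....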
to derive an accurate estimation of the _ mean _ time to data loss, this simulation must be run over many thousands of iterations to find the arithmetic average of all the results .the average of the results should give a close approximation of the system s true mttdl .we conducted various runs of the simulation , using different values of , , , and .it was found that the magnitude of was immaterial to the result of the simulation , only the ratio of to is important .therefore , for each of the results reported in the tables below , is assumed to have a value of 1 . each observed result is the average of at least 2,000 iterations of the above simulation code .note that for the highly fault - tolerant configurations , the mttf had to be reduced for the simulation to complete within a reasonable period of time . in this table , _ predicted _ refers to the chen model : [ cols=">,>,>,>,>,>",options="header " , ] therefore we find that this model yields results which are much closer to those of the simulation . in the instances where the angus model was off by a factor of 3.77 ,this model was within 1% .one may wonder how common it is for the ratio to be so low .pris and long presented a method for predicting reliability in the face of batch - correlated failures . in it , they suggest that to model the manifestation of a batch - correlated failure , one should reduce the of disks in the system to something much lower than it would be normally , suggesting a time between one week and one month might be reasonable .in such a case , the ratio could be as low as 3.5 .another reason to expect the ratio to drop is that in the past 15 years , the time it takes to read an entire hard drive has increased by over 200 times .this means the lower bound on rebuild time , and therefore the minimum mttr has likewise increased by this amount .a similar decrease over the next 15 years would see the ratio drop from the thousands to around 10 to 20 .with this new formula may be simplified by keeping only the biggest contributors to the summation .the biggest contributor occurs for . evaluating only this case, the formula reduces to : through some simple transformations this formula may be reweritten as : which is identical to the simplified angus model .therefore when these two models are expected to produce very similar results when , because in that case the less significant contributors to the summation converge rapidly .we have demonstrated that a common model for estimating due to disk failures grossly underestimates the true for systems that have a high degree of fault - tolerance .furthermore , we showed that a model presented by angus provides accurate estimations of mttdl for systems with a high degree of fault - tolerance so long as . while the angus model is more complicated than the one presented by chen et al ., a simplified version of angus can be used which deviates by only a few percent for reasonable ratios . formore less constrained situations , we presented a model derived through markov theory which exhibits a high degree of accuracy for cases of small ratios and showed the common relationship holds with the angus model .our main result is that while the chen model is adequate for systems with a fault tolerance of 0 or 1 , it should not be used for systems with a fault - tolerance beyond that .therefore to accurately model raid 6 , triple- ( or higher ) replication , or erasure code systems , the angus formula , or its simplified version ought to be used . 
when modeling a system whose ratio is less than a few hundred , one should use the model presented in this paper over the angus method .there is much room for further investigating and improvement to the method presented in this paper .some candidates for further research include : modeling of correlated failures , using non - exponentially distributed failures for system components , and investigating the true likelihood of unrecoverable read errors in light of our findings .the models presented in this paper all assume failures are statistically independent events , but much research has been done to refute this assumption in practice . in the paper by chen et al . , a simple method for modeling correlated disk failures was presented . what their model prescribed was to assume the mttf for the second disk failure was 1/10th what it was for the first failure , and further assume that every subsequent disk failure is 10 times more likely than the last .this provides reasonable results for a raid 5 or raid 6 system which can only tolerate one or two failures .for raid 6 , at worst the third disk failure will only be 100 times more likely to fail than the first .however , consider a system that could tolerate 5 disk failures .the 6th disk failure would be modeled to have a mttf 1/10,000th what it would be normally . if the ratio is less than 10,000 , adding increasing levels of fault - tolerance actually decreases the mttdl predicted through this method .while it is usually better to underestimate mttdl than overestimate it , clearly this method reaches a breaking point if adding additional fault - tolerance causes the estimated mttdl to decrease rather than increase . to blindly follow this method s predictions , one would design a system that is less reliable than what he or she might otherwise choose .therefore , developing a method for modeling correlated failures in highly fault - tolerant systems would be quite beneficial .the models in this paper assume constant failure rates over time for the underlying components .gibson and schroeder showed that in practice , disk failure rates follow a bathtub curve with higher levels of failure initially , stabilization during normal operating life , and slowly increasing with age .they further found that the weibull distribution could be used to provide a reasonable approximation of failure rates of hard drives over their useful lives .it remains an open question how our reliability model might be amended to accommodate for hard drives with a non - exponentially distributed failure rate . our conjecture for why the chen method was off for higher levels of fault - tolerance was that it fails to account for progress made in the rebuild of the first disk to have failed .this consideration should also alter the expectation of encountering a ure .since on average , the first portion of the disks will have already been read in rebuilding the first failed disk by the time the system experiences failures .a ure causing irrecoverable data loss , however , must happen on the part of the disks that remains to be read to rebuild the first failed disk .therefore , this consideration should reduce the expected likelihood of a ure causing irecoverable data loss . 
exploringexactly how the estimated mttdl is affected remains to be explored , but we expected it to have a non - negligible effect for systems with a high degree of fault - tolerance .we owe a special thanks to yura volvovskiy who offered methodology for the calculation of _ mean - time - to - first - failure _ and solving it in general case. this result would not be possible without his contribution .we would like to take this opportunity to thank our fellow colleagues at cleversafe for their insightful feedback and advice regarding this paper , and in particular andrew baptist who offered invaluable advice regarding the simulation methodology and sanjaya kumar for his insightful advice and feedback regarding this paper .pris , j. and long , d. e. , `` using device diversity to protect data against batch - correlated disk failures '' .proceedings of the second acm workshop on storage security and survivability , alexandria , virgina ( 2006 ) , p. 47- 52 .plank , j.s ., xu , l. , `` optimizing cauchy reed - solomon codes for fault - tolerant network storage applications '' , _ nca-06 : 5th ieee international symposium on network computing applications _ cambridge , ma , ( 2006 ) .b. schroeder , and g. a. gibson , `` disk failures in the real world : what doesan mttf of 1,000,000 hours mean to you ? '' .proceedings of the 2007 usenix technical conference , san jose , ca , feb 14 - 16 , 2007 .
we found that a reliability model commonly used to estimate _ mean - time - to - data - loss _ ( ) , while suitable for modeling raid 0 and raid 5 , fails to accurately model systems having a fault - tolerance greater than 1 . therefore , modeling the reliability of raid 6 , triple - replication , or -of- systems requires an alternate technique . in this paper , we explore some alternatives and evaluate their efficacy by comparing their predictions to simulations . our main result is a new formula which more accurately models storage system reliability .
since the advent of quantum mechanics , practitioners have struggled with an inherent conceptual dualism in its formalism . on one hand , time evolution of a quantum state is a continuous , deterministic , and reversible process well described by a wave equation . on the other hand, there is irreducible stochasticity present in the measurement process that leads to discontinuous and generally irreversible state evolution in the form of so - called `` quantum jumps '' or `` state collapse . ''to cope with the necessary introduction of the stochastic element of the theory while still preserving ties with the deterministic classical mechanics , traditional quantum mechanics emphasizes the role of hermitian observable operators that are analogous to classical observables .indeed , we find that observables underlie most of the core concepts in the quantum theory : commutation relations of observables , complete sets of commuting observables , spectral expansions of observables , conjugate pairs of observables , expectation values of observables , uncertainty relations between observables , and time evolution generated by a hamiltonian observable . even the quantum state is introduced as a superposition of observable eigenvectors .the stochasticity of the theory manifests itself as a single prescription for how to average the omnipresent observables under a deterministically evolving quantum state : the implicit projective quantum jumps corresponding to laboratory measurements are largely hidden by the formalism .experimental control of quantum systems has improved since the early days of quantum mechanics , however , so the discontinuous evolution present in the measurement process can now be more carefully investigated .modern optical and condensed matter systems , for example , can condition the evolution of a state on the outcomes of weakly coupled measurement devices ( e.g. ) , resulting in _ nonprojective _ quantum jumps that alter the state more gently , or even resulting in continuous controlled evolution of the state .since observables are defined in terms of projective jumps that strongly affect the state , it becomes unclear how to correctly apply a formalism based on observables to such nonprojective measurements .a refinement of the traditional formalism must be employed to correctly describe the general case . to address this need ,the theory of _ quantum operations _ , or generalized measurement , was introduced in the early 1970 s by davies and kraus , and has been developed over the past forty years to become a comprehensive and mathematically rigorous theory .the formalism of quantum operations has seen the most use in quantum optics , quantum computation , and quantum information communities , where it is indispensable and well - supported by experiment .however , it has not yet seen wide adoption outside of those communities .unlike the traditional observable formalism , the formalism of quantum operations emphasizes the _states_. observables are mentioned infrequently in the quantum operations literature , usually appearing only in the context of projective measurements where they are well - understood .some references ( e.g. ) define `` generalized observables '' in terms of the generalized measurements and detector outcome labels , but give no indication about their relationship to traditional observables , if any . 
as a result , there is a conceptual gap between the traditional quantum mechanics of observables and the modern treatment of quantum operations that encompasses a much larger class of possible measurements than the traditional observables seemingly allow .a possible response to this conceptual gap is to declare that traditional observables are meaningless outside the context of projective measurements .this argument is supported by the fact that any generalized measurement can be understood as a part of a projective measurement being made on a larger joint system that can be associated with a traditional observable in the usual way ( i.e. ) .however , there has been parallel research into the `` weak measurement '' of observables that suggests that linking generalized measurements to traditional observables may not be such an outlandish idea .weak measurements were introduced as a consequence of the von neumann measurement protocol that uses an interaction hamiltonian with variable coupling strength to correlate an observable of interest to the generator of translations for a continuous meter observable . the resulting shift in the meter observableis then used to infer information about the observable of interest in a nonprojective manner .the technique has been used to great effect in the laboratory to measure physical quantities like pulse delays , beam deflections , phase shifts , polarization , and averaged trajectories .therefore , we conclude that there must be some meaningful way to reconcile nonprojective measurements with traditional observables more formally .the primary purpose of the present work is to detail a synthesis between generalized measurements and observables that is powerful enough to encompass projective measurements , weak measurements , and any strength of measurement in between .the formalism of _ contextual values _, which we explicitly introduced in and further developed in , forms a bridge between the traditional notion of an observable and the modern theory of quantum operations. for a concise introduction to the topic in the context of the quantum theory , we recommend reading our letter .the central idea of the contextual - value formalism is that an observable can be completely measured indirectly using an imperfectly correlated detector by assigning an appropriate set of values to the detector outcomes .the assigned set of values generally differs from the set of eigenvalues for the observable , and forms a _ generalized spectrum _ that is associated with the operations of the generalized measurement , rather than the spectral projections for the observable .thus , the spectrum that one associates with an observable will depend on the _ context _ of how the measurement is being performed ; such an inability to completely discuss observables without specifying the full measurement context is reminiscent of bell - kochen - specker contextuality and motivates the name `` contextual values . 
''the secondary purpose of the present work is to demonstrate that the contextual values formalism for generalized observable measurement is essentially classical in nature .hence , it has potential applications outside the usual scope of the quantum theory .indeed , we will show that any system that can be described by bayesian probability theory can benefit from the contextual - value formalism .extending contextual values to the quantum theory from the classical theory clarifies which features of the quantum theory are novel .the quantum theory can be seen as an extension of a classical probability space to a continuous manifold of incompatible frameworks , where each framework is a copy of the original probability space . hence , intrinsically quantum features arise not from the observables defined in any particular framework , but instead from the relative orientations of the incompatible frameworks .as we shall see , the differences manifest in sequential measurements and conditional measurements due to the probabilistic structure of the incompatible frameworks , rather than the observables or contextual values themselves . to keep the paper self - contained with these aims in mind , we first develop both the operational approach to measurement and the contextual values formalism completely within the confines of classical probability theory , giving illustrative examples to cement the ideas .we then port the formalism to the quantum theory and identify the essential differences that arise .our analysis therefore doubles as a pedagogical introduction to the operational approaches for both classical and quantum probability theory that should be accessible to a wide audience .the paper is organized as follows : in sec .[ sec : colorblind ] , we provide a simple intuitive example to introduce the concept of contextual values . in secs .[ sec : csample ] through [ sec : cdetector ] , we develop the classical version of the operational approach to measurement . in sec .[ sec : ccv ] , we introduce the contextual values formalism classically and then give several examples similar to the initial example . in secs .[ sec : qsample ] through [ sec : qdetector ] , we generalize the classical operations to quantum operations and highlight the key differences with explicit examples . in sec . [sec : qcv ] , we apply the contextual values formalism to the quantum case and show that it is unchanged .we also specifically address how to treat weak measurements as a special case of our more general formalism and provide a derivation of the quantum weak value in sec .[ sec : wv ] . finally , we give our conclusions in sec . 
[sec : conclusion ] .the idea of the contextual values formalism is deceptively simple .its essence can be distilled from the following classical example of an _ ambiguous detector _ : suppose we wish to measure a marble that may be colored either red or green .a person with normal vision can distinguish the colors unambiguously and so would represent an ideal detector for the color state of the marble .a partially colorblind person , however , may only estimate the color correctly some percentage of the time and so would represent an ambiguous detector of the color state of the marble .if the person is only mildly colorblind , then the estimations will be strongly correlated to the actual color of the marble .the ambiguity would then be perturbative and could be interpreted as _ noise _ introduced into the measurement .however , if the person is strongly colorblind , then the estimations may be only mildly correlated to the actual color of the marble .the ambiguity becomes _ nonperturbative _ , so the noise dominates the signal in the measurement .we can design an experimental protocol where an experimenter holds up a marble and the colorblind person gives a thumbs - up if he thinks the marble is green or a thumbs - down if he thinks the marble is red .suppose , after testing a large number of known marbles , the experimenter determines that a green marble correlates with a thumbs - up 51% of the time , while a red marble correlates with a thumbs - down 53% of the time .the experimental outcomes of thumbs - up and thumbs - down are thus only weakly correlated with the actual color of the marble .having characterized the detector in this manner , the experimenter provides the colorblind person with a very large bag of an unknown distribution of colored marbles .the colorblind person examines every marble , and for each one records a thumbs - up or a thumbs - down on a sheet of paper , which he then returns to the experimenter .the experimenter then wishes to reconstruct what the average distribution of marble colors in the bag must be , given only the ambiguous output of his colorblind detector . for simplicity, the clever experimenter decides to associate the colors with numerical values : for green ( g ) and for red ( r ) . in order to compare the ambiguous outputs with the colors, he also assigns them _ different _ numerical values : for thumbs - up ( u ) , and for thumbs - down ( d ) .he then writes down the following probability constraint equations for obtaining the average marble color , , based on what he has observed , which he can rewrite as a matrix equation in the basis of the color probabilities and , after solving this equation , he finds that he must assign the amplified values and to the outcomes of thumbs - up and thumbs - down , respectively , in order to compensate for the detector ambiguity . after doing so, he can confidently calculate the average color of the marbles in the large unknown bag using the identity .the classical color observable has eigenvalues of and that correspond to an ideal measurement .the amplified values of and that must be assigned to the ambiguous detector outcomes are _ contextual values _ for the same color observable .the _ context _ of the measurement is the characterization of the colorblind detector , which accounts for the degree of colorblindness .the expansion relates the spectrum of the observable to its generalized spectrum of contextual values . 
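To make the inversion explicit, the following minimal sketch (Python; the response percentages are those quoted above, while the variable names and the assumed 70/30 bag of marbles are ours for illustration) solves the two constraint equations and recovers the same average color from the ambiguous record that an ideal detector would report:

```python
import numpy as np

# detector characterization: rows are marble colors (G, R), columns are responses (U, D)
# P(U|G) = 0.51, P(D|G) = 0.49, P(U|R) = 0.47, P(D|R) = 0.53
F = np.array([[0.51, 0.49],
              [0.47, 0.53]])
eigenvalues = np.array([+1.0, -1.0])      # ideal values assigned to green and red

# contextual values solve F @ alpha = eigenvalues
alpha = np.linalg.solve(F, eigenvalues)
print(alpha)                              # -> [ 25.5 -24.5], amplified well beyond +/- 1

# an assumed (hidden) bag of marbles: 70% green, 30% red
p_color = np.array([0.7, 0.3])
p_response = p_color @ F                  # thumbs-up / thumbs-down frequencies he records

# averaging the contextual values over the recorded responses recovers the color average
print(p_response @ alpha)                 # -> 0.4
print(p_color @ eigenvalues)              # -> 0.4, the ideal-detector result
```

The amplification of the assigned values, far outside the ideal range of plus and minus one, is what compensates for the weak correlation of the colorblind detector.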
with this identity, both an ideal detector and a colorblind detector can measure the same observable ; however , the assigned values must change depending on the context of the detector being used .to define contextual values more formally , we shall define generalized measurements within the classical theory of probability using the same language as quantum operations .in particular , rather than representing the observables of classical probability theory in the traditional way as functions , we shall adopt a more calculationally flexible , yet equivalent , _ algebraic _ representation that closely resembles the operator algebra for quantum observables .we also briefly comment that the relevant subset of probability theory that is summarized here may slightly differ in emphasis from incarnations that the reader may have encountered previously .our treatment acknowledges that probability theory , in its most general incarnation , is a system of formal reasoning about boolean logic propositions ; specifically , our treatment emphasizes logical inference rather than the traditional frequency analysis of concrete random variable realizations .however , the `` frequentist '' approach of random variables is not displaced by the logical approach , but is rather subsumed as an important special case pertaining to repeatable experiments with logically independent outcomes . due to its clarity and generality , the logical approach has been widely adopted in diverse disciplines under the distinct name `` bayesian probability theory . '' several physicists , including ( but certainly not limited to ) jaynes , caves , fuchs , spekkens , harrigan , wiseman , and leifer , have also extolled its virtues in recent years . we follow suit to emphasize the generality of the contextual - value concept . in what follows, we shall consider the stage on which classical probability theory unfolds namely its space of observables to be a commutative algebra over the reals that we denote .this choice of notation is motivated by the fact that the observable algebra is built from and contains two related spaces , and , that are conceptually distinct and equally important to the theory .the three are illustrated in fig .[ fig : venndiagram ] to orient the discussion . to avoid distracting technical detail, we will briefly describe finite - dimensional versions of these three spaces here , and note straightforward generalizations to the continuous case when needed . , the boolean algebra of propositions , and the algebra of observables .the probability state is a measure from to the interval ] such that .such a state assigns a numerical value to each proposition that quantifies its degree of _ plausibility _ ; that is , formally indicates how likely it is that the question would be answered `` yes '' were it to be answered , with indicating a certain `` yes '' and indicating a certain `` no . ''the value is called the _ probability _ for the proposition to be true .normalizing ensures that exactly one proposition in the sample space must be true . for continuous spaces, the state becomes an integral . _frequencies_.empirically , one can check probabilities by repeatedly asking a proposition in to identically prepared systems and collecting statistics regarding the answers . 
for a particular proposition ,the ratio of yes - answers to the number of trials will converge to the probability as the number of trials becomes infinite .however , the probability has a well - defined meaning as a plausibility prediction even without actually performing such a repeatable experiment . indeed , designing good quality repeatable experiments to check the probabilities assigned by a predictive stateis the primary goal of experimental science , and is generally quite difficult to achieve ._ expectation functionals_.the linear extension of a state to the whole observable algebra is an _ expectation functional _ that averages the observables , and is traditionally notated with angled brackets .specifically , for an observable , then , is the _ expectation value _ , or average value , of under the functional that extends the probability state .since is linear , it passes through the sum and the constant factors of to apply directly to the propositions .the restriction of to is , so as written in .that is , the expectation value of a pure proposition is the probability of that proposition .the probability state and its linear extension are illustrated in fig .[ fig : venndiagram ] . for continuous spacesthe sum becomes an integral of the measurable function , . _moments_.the ^th^ statistical moment of is and empirically corresponds to measuring the observable times in a row per trial on identical systems and averaging the repeated results .hence , the moments quantify the fluctuations of the observable measurements that stem from uncertainty in the state .for continuous spaces , the higher moments also become integrals ._ densities_.states can often be represented as _ densities _ with respect to some _ reference measure _ from to , which can be convenient for calculational purposes . just as the state can be linearly extended to an expectation functional , any reference measure can be linearly extended to a functional . for continuous spaces ,such a reference functional takes the form of an integral .the representation of a state as a density follows from changing the integration measure for the state to the reference measure .the jacobian conversion factor from the integral over to the integral over a different measure is the _ probability density _ for with respect to , if it exists .we can then define a _state density observable _ that relates the expectation functional to the reference functional directly according to the relation . for continuous spaces ,the standard integral is most frequently used as a reference .hence , the probability density with respect to the standard integral is given the simple notation such that .importantly , the probability for is not the density , but is the ( generally infinitesimal ) integral of the density over a single point , commonly notated . 
in discrete spaces we apply the same idea by defining a state density observable directly in terms of measure ratios , by definition and linearity , f(x ) \mu(x ) = \sum_{x\in x }f(x ) p(x ) = { \big\langle f \big\rangle} ] , which tends to have noninfinitesimal densities ._ correlated states_.in addition to product states , the joint space admits a much larger class of _ correlated _ states where the detector and system questions are dependent on one another .with such a correlated state a measurement on the detector can not be decoupled in general from a measurement on the system .information gathered from a measurement on a detector under a correlated state will also indirectly provide information about the system , thus motivating the term `` detector . ''_ reduced states_.for a pure system observable or a pure detector observable , the average under a joint state will be equivalent to the average under a state restricted to either the system or the detector space , known as a _ reduced state _ , or a _marginalized state_. we can define such a reduced state by using the joint state density under any reference _ product _ measure , such as the trace .it then follows that , the quantities and are the _ reduced state densities _ that define the reduced states and with expectation functionals , by definition , and . however , in general , , and unless is a product state .the resulting reduced expectations and are independent of the choice of reference product functional . _probability observables_.any correlation between the system and detector in the joint state allows us to directly relate propositions on the detector to _ observables _ on the system .we can compute the relationship directly by using a closure relation and rearranging the conditioning procedure to find , the resulting set of system observables exactly correspond to the detector outcomes .analogously to a set of independent probability observables , they form a partition of the system identity , but are indexed by detector propositions rather than by system propositions , .such a set has the common mathematical name _ positive operator - valued measure _ ( povm ) , since it forms a measure over the detector sample space consisting of positive operators .however , we shall make an effort to refer to them as general _ probability observables _ to emphasize their physical significance .as long as the detector outcomes are not mutually exclusive with the system , the probability observables will be a faithful representation of the reduced state of the detector in the observable space of the system . _process tomography_.the probability observables are completely specified by the _ conditional likelihoods _ for a detector proposition to be true given that a system proposition is true .such conditional likelihoods are more commonly known as _ response functions _ for the detector and can be determined via independent _ detector characterization _ using known reduced system states ; such characterization is also known as _ detector tomography _ , or _process tomography_. 
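As a concrete illustration of this characterization step and of the resulting probability observables, here is a minimal numerical sketch; the three-outcome response matrix and the "unknown" system state are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed "true" detector response P(y|x): 2 system propositions, 3 detector outcomes
F_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.3, 0.5]])

# detector (process) tomography: probe each known system state repeatedly, histogram outcomes
def characterize(n_trials=200_000):
    F_est = np.zeros_like(F_true)
    for x in range(F_true.shape[0]):
        outcomes = rng.choice(3, size=n_trials, p=F_true[x])
        F_est[x] = np.bincount(outcomes, minlength=3) / n_trials
    return F_est

F = characterize()

# the columns of F are the probability observables E_y expanded in the system propositions;
# together they partition the system identity: sum_y P(y|x) = 1 for every x
assert np.allclose(F.sum(axis=1), 1.0)

# with an unknown reduced system state p(x), detector probabilities follow immediately
p_unknown = np.array([0.25, 0.75])
print(p_unknown @ F)                      # p(y) = sum_x p(x) P(y|x)
```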
any good detector will then maintain its characterization with any _ unknown _ reduced system state .that is , a noninvasive coupling of such a good detector to an unknown system produces a correlated joint state according to , where is the unknown reduced system state prior to the interaction with the detector ._ generalized state collapse_.in addition to allowing the computation of detector probabilities , , probability observables also have the dual role of updating the reduced system state following a measurement on the detector . to see this, we apply the general rule for state collapse for a detector proposition on the joint state to find , which can be seen as a generalization of the bayesian conditioning rule to account for the effect of an imperfectly correlated detector , and can also be understood as a form of _ jeffrey s conditioning _ . for this reason , probability observables are commonly called _ effects _ of the _ generalized measurement_. a reduced state density for the system updates as .generalized measurement _ is nonprojective , so is not constrained to the disjoint questions on the sample space of the system . as a result , it answers questions on the system space _ ambiguously _ or _noisily_. _ weak measurement_.the extreme case of such an ambiguous measurement is a _ weak measurement _ , which is a measurement that does not ( appreciably ) collapse the system state .such a measurement is inherently ambiguous to the extent that only a minuscule amount of information is learned about the system with each detection .formally , the probability observables for a weak measurement are all nearly proportional to the identity on the system space .typically , an experimenter has access to some control parameter ( such as the correlation strength ) that can alter the weakness of the measurement such that , where is the nonzero probability of obtaining the detector outcome in the absence of any interaction with the system .then for small values of the measurement leaves the system state nearly unperturbed , .the limit as such a control parameter is known as the _ weak measurement limit _ and is a formal idealization not strictly achievable in an experiment . _strong measurement_.the opposite extreme case is a _ strong measurement _ or projective measurement , which is a measurement for which all outcomes are independent , as in . in other words , the probability observables are independent for a strong measurement .the projective collapse rule can therefore be seen as a special case of the general collapse rule from this point of view ._ measurement sequences_.a further benefit of the probability observable representation of a detector is that it becomes straightforward to discuss sequences of generalized measurements performed on the same system .for example , consider two detectors that successively couple to a system and have the outcomes and measured , respectively . 
to describe the full joint state of the system and both detectorsrequires a considerably enlarged sample space .however , if the detectors are characterized by two sets of probability observables and we can immediately write down the probability of both outcomes to occur as well as the resulting final collapsed system state without using the enlarged sample space , similarly , a conditioned density takes the form .the detectors have been _ abstracted _ away to leave only their effect upon the system of interest ._ generalized invasive measurement_.the preceding discussion holds provided that the detector can be noninvasively coupled to a reduced system state to produce a joint state .however , more generally the process of coupling a reduced detector state to the reduced system state will _ disturb _ both states as discussed for .the disturbance produces a joint state from the original product state of the system and detector according to , where are states specifying the joint transition probabilities for the disturbance .the noninvasive coupling is a special case of this where the reduced system state is unchanged by the coupling . as a result, we must slightly modify the derivation of the probability observables to properly include the disturbance , [ eq : cpovmdisturb ] the modified probability observable includes both the initial detector state and the disturbance from the measurement . detector tomography will therefore find the effective characterization probabilities . the generalized collapse rule similarly must be modified to include the disturbance , surprisingly , we can no longer write the conditioning in terms of just the probability observables ; instead we must use an _ operation _ that takes into account both the coupling of the detector and the disturbance of the measurement in an active way .the measurement operation is related to the effective probability observable according to , .the change from observables to operations when the disturbance is included becomes particularly important for a sequence of invasive measurements .consider an initial system state that is first coupled to a detector state via a disturbance , then conditioned on the detector proposition , then coupled to a second detector state via a disturbance , and finally conditioned on the detector proposition .the joint probability for obtaining the ordered sequence can be written as the effective probability observable for the ordered measurement sequence is no longer a simple product of the probability observables and as in , but is instead an ordered _ composition of operations_. 
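A minimal classical sketch of this ordered composition is given below; the sub-stochastic transition matrices are invented for illustration, with each invasive, conditioned measurement represented as a map on state densities, so that sequence probabilities come from composing maps rather than from multiplying probability observables:

```python
import numpy as np

# two-state classical system; the state density is a probability vector
rho = np.array([0.5, 0.5])

# invasive measurement operations: entry O[y][i, j] is the (invented) probability that the
# system jumps from state j to state i while the detector registers outcome y
O = {
    'y0': np.array([[0.60, 0.10],
                    [0.05, 0.20]]),
    'y1': np.array([[0.20, 0.05],
                    [0.15, 0.65]]),
}
# the nonselective operation must conserve probability: columns of the sum must add to 1
assert np.allclose(sum(O.values()).sum(axis=0), 1.0)

# a single outcome: probability and collapsed (renormalized) state density
out = O['y0'] @ rho
print(out.sum(), out / out.sum())

# an ordered sequence (y0 then y1): composition of the two operations
p_forward  = (O['y1'] @ (O['y0'] @ rho)).sum()
p_backward = (O['y0'] @ (O['y1'] @ rho)).sum()
print(p_forward, p_backward)              # differ, since the measurements are invasive
```

Reversing the order of the operations changes the sequence probability, which is precisely the signature of disturbance that the probability observables alone cannot capture.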
the ordering of operations also leads to a new form of _ postselected _ conditioning .specifically , if we condition only on the second measurement of in an invasive sequence , we obtain , {{{\big\langle \widetilde { y } \big\rangle } } } { } } & = \frac{{\big\langle \mathcal{e}_y(\tilde{e}'_z ) \big\rangle}_x}{\sum_{y'\in y } { \big\langle \mathcal{e}_{y'}(\tilde{e}'_z ) \big\rangle}_x } = \frac{{\big\langle \mathcal{e}_y(\tilde{e}'_z ) \big\rangle}_x}{{\big\langle \mathcal{e}(\tilde{e}'_z ) \big\rangle}_x } , \\\mathcal{e}(\tilde{e}'_z ) & = \sum_{y'\in y } \mathcal{e}_{y'}(\tilde{e}'_z ) = { \big\langle \mathcal{d}(\tilde{e}'_z ) \big\rangle}_y.\end{aligned}\ ] ] the different position of the subscript serves to distinguish the postselected probability {{{\big\langle \widetilde { y } \big\rangle}}}{}} ] reduces to , reduces to , and we correctly recover the noninvasive bayes rule ._ observable correspondence_.with the preliminaries about generalized state conditioning out of the way , we are now in a position to discuss the measurement of observables in more detail .first we observe an important corollary of the observable representation of the detector probabilities from : _ detector _ observables can be mapped into equivalent _ system _ observables , note that the eigenvalues of the equivalent system observable are not the same as the eigenvalues of the original detector observable , but are instead their average under the detector response .if the system propositions were accessible then the system observable would allow nontrivial inference about the detector observable , provided that the probability observables were nonzero for all in the support of ._ contextual values_.a more useful corollary of the expansion is that any _ system _ observable that can be expressed as a combination of probability observables may be equivalently expressed as a _ detector _ observable , which is the classical form of our main result . using this equivalence ,_ we can indirectly measure such system observables using only the detector_. we dub the eigenvalues of the detector observable the * contextual values * ( cvs ) of the system observable under the _ context of the specific detector _ characterized by a specific set of probability observables . the cvs forma _ generalized spectrum _ for the observable since they are associated with general probability observables for a generalized measurement and not independent probability observables for a projective measurement ; the eigenvalues are a special case when the probability observables are the spectral projections of the observable being measured . with this point of view, we can understand an observable as an _ equivalence class _ of possible measurement strategies for the same average information .that is , using appropriate pairings of probability observables and cvs , one can measure the same observable average in many different ways , .each such expansion corresponds to a different experimental setup . 
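The following sketch illustrates the equivalence-class idea with a hypothetical three-outcome detector for a two-valued observable with eigenvalues of plus and minus one; the contextual values are computed with the pseudoinverse (our preferred prescription, discussed below), and the last lines preview why simply squaring them does not yield the second moment:

```python
import numpy as np

# invented response matrix P(y|x): 2 system states (eigenvalues +1, -1), 3 detector outcomes
F = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.3, 0.6]])
eigenvalues = np.array([+1.0, -1.0])

# F @ alpha = eigenvalues is underdetermined; the pseudoinverse picks the minimum-norm
# generalized spectrum out of the equivalence class of valid contextual values
alpha = np.linalg.pinv(F) @ eigenvalues
print(np.round(alpha, 4))

# averaging the contextual values over detector outcomes equals the ideal observable average
p_x = np.array([0.35, 0.65])
print(p_x @ F @ alpha, p_x @ eigenvalues)    # equal (up to rounding)

# caution: the raw second moment of the detector record is NOT the observable's second moment
print(p_x @ F @ alpha**2)                    # detector-observable moment, not equal to ...
print(p_x @ eigenvalues**2)                  # ... the true second moment (= 1 here)
```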
_moments_.similarly , the ^th^ statistical moment of an observable can be measured in many different , yet equivalent , ways .for instance , the ^th^ moment of an observable can be found from the expansion as , by examining the general collapse rule for measurement sequences we observe that the quantity must be the joint probability for a sequence of _ noninvasive _ measurements that couple the same detector to the system times in succession .furthermore , the average in is explicitly different from the ^th^ statistical moment of the raw detector results , .we conclude that , _ for imperfectly correlated noninvasive detectors , one must perform measurement sequences to obtain the correct statistical moments of an observable using a particular set of cvs_. only for unambiguous measurements with independent probability observables do such measurement sequences reduce to simple powers of the eigenvalues being averaged with single measurement probabilities . if a single measurement by the detector is done per trial , then only the statistical moments of the _ detector _observable can be inferred from that set of cvs , as opposed to the true statistical moments of the inferred system observable .we can , however , change the cvs to define new observables that correspond to powers of the original observable , such as .these new observables can then be measured indirectly using the same experimental setup without the need for measurement sequences .the cvs for the ^th^ power of will not be simple powers of the cvs for unless the measurement is unambiguous ._ invasive measurements_.if the measurement is invasive , then the disturbance forces us to associate the cvs with the measurement _ operations _ and not solely with their associated probability operators in order to properly handle measurement sequences as in .specifically , we must define the _ observable operation _ , which produces the identity similar to . correlated sequences of invasive observable measurements can be obtained by composing the observable operations , such an -measurement sequence reduces to the ^th^ moment when the disturbance vanishes .if time evolution disturbance is inserted between different invasive observable measurements , then we obtain an invasive _ correlation function _ instead , when the observable measurements become noninvasive , then this correctly reduces to the noninvasive correlation function .similarly , -time invasive correlations can be defined with time - evolution disturbances between the invasive observable measurements ._ conditioned averages_.in addition to statistical moments of the observable , we can also use the cvs to construct principled _ conditioned averages _ of the observable . recall that in the general case of an invasive measurement sequence we can condition the observable measurement in two distinct ways . if we condition on an outcome before the measurement of we obtain the _ preselected conditioned average _ defined in . 
on the other hand ,if the invasive conditioning measurement of happens after the invasive observable measurement then we must use the postselected conditional probabilities to construct a _ postselected conditioned average _, {{{\big\langle \widetilde { f_x } \big\rangle } } } { } } & = \sum_{y\in y } f_y(y)\ , { \tensor[_{z}]{{{\big\langle \widetilde { y } \big\rangle } } } { } } , \\ & = \frac{\sum_{y\in y } f_y(y ) { \big\langle \mathcal{e}_y(\tilde{e}'_z ) \big\rangle}_x}{\sum_{y\in y } { \big\langle \mathcal{e}_y(\tilde{e}'_z ) \big\rangle}_x } = \frac{{\big\langle \mathcal{f}_x(\tilde{e}'_z ) \big\rangle}_x}{{\big\langle \mathcal{e}(\tilde{e}'_z ) \big\rangle}}. \nonumber\end{aligned}\ ] ] the observable operation and the nonselective measurement encode the relevant details from the first measurement .when the disturbance to the reduced system state vanishes , both the preselected and the postselected conditioned averages simplify to the pure conditioned average defined in that depends only on the system observable .while the pure conditioned average is independent of the order of conditioning and is always constrained to the eigenvalue range of the observable , the postselected invasive conditioned average {{{\big\langle \widetilde { f_x } \big\rangle}}}{}} ] for the observable .however , they remain within the cv range ] , which depends solely on the amplification factor in the denominator . if the measurement is strong , such that , then the variance bound reduces to the ideal variance bound of , as expected , leading to a maximum rms error of .any additional ambiguity of the measurement stemming from distribution overlap or distributed autocorrelation amplifies the maximum rms error by a factor of } ] corresponding to different measurement orderings . for a pure state , this postselected conditioning is known as the _ aharonov - bergmann - lebowitz ( abl ) rule _ , and has the form {{{\big\langle \widetilde { y } \big\rangle}}}{_{x } } } = |{\langle z | y \rangle}|^2|{\langle y | x \rangle}|^2 / \sum_{y'\in y}|{\langle z | y ' \rangle}|^2|{\langle y ' | x \rangle}|^2 ] is an interference factor that depends only on relative orientation between the state framework and the observable framework .if the frameworks coincide , then and the classical result is recovered . _ joint observable space_.as with the classical case , we can couple a system to a detector by enlarging the sample space to the product space of a particular pair of frameworks .we can then perform _ local _ unitary rotations on each space independently to form a joint quantum sample space from the classical joint observables .however , the quantum observable space also admits _ global _ unitary rotations on the classical joint observables to form a larger joint quantum sample space .just as with a single sample space , any two propositions in can be continuously connected with some global unitary rotation .the full quantum observable space is constructed from in the usual way .product observables will maintain their product form under local unitary rotations , .however , global unitary rotations can create unfactorable correlated joint observables in even from product observables . _ joint states_.similarly , joint _ states _ on a classical product framework extend to joint quantum states on the quantum product observable space . under local unitary rotations ,product states remain product states and classically correlated states between two specific frameworks remain classically correlated . 
however , _global _ unitary rotations performed on any state can also form _ entangled _ states that have no analog in the classical theory .entangled states have some degree of _ local - rotation - independent _ correlation between frameworks , so display a stronger degree of correlation than can even be defined with a classically correlated state that is restricted to a single pair of frameworks . as an extreme example , maximally entangled states are completely local - rotation - independent and perfectly correlated with respect to any pair of frameworks ._ quantum operations_.the specifics of entanglement do not concern us here , since any type of correlation is sufficient to represent detector probabilities within the reduced system space .for the purposes of measurement , we only assume that the correlated state with density is connected to some initial product state with density via a unitary rotation .since all quantum states can be continuously connected with some global unitary rotation that acts as a disturbance , this is always possible .physically , the unitary rotation couples the known detector state to an unknown system state .furthermore , we assume that the initial state of the detector has some ( not necessarily unique ) pure - state expansion that is meaningful with respect to the preparation procedure .it then follows that the numerator for the conditioning rules and becomes , \\ & = { \big\langle \mathcal{e}_y(f_x ) \big\rangle}_x = \text{tr}_x(\mathcal{e}^\dagger_y(\rho_x )f_x ) , \nonumber \displaybreak[0]\end{aligned}\ ] ] with the _ operations _ and defined as , [ eq : qmeasoper ] \\ \label{eq : qoperad } \mathcal{e}^\dagger_y(\rho_x ) & = \text{tr}_y(yu\rho_x\rho_yu^\dagger y ) , \\ & = \sum_{y'\in y ' } p'(y ' ) \text{tr}_y(y u \rho_x y ' u^\dagger y ) , \nonumber \\ & = \sum_{y'\in y ' } m_{y , y ' } \rho_x m^\dagger_{y , y ' } , \nonumber \displaybreak[0]\\ \label{eq : qmeasops } m_{y , y ' } & = e^{i\phi_{y , y ' } } \sqrt{p'(y ' ) } { \langle y|}u{|y'\rangle } , \\m_{y , y'}^\dagger & = e^{-i\phi_{y , y ' } } \sqrt{p'(y ' ) } { \langle y'|}u^\dagger{|y\rangle}.\end{aligned}\ ] ] here , the hilbert space representations of the _ kraus operators _ have the form of partial matrix elements and are only well - defined up to the arbitrary phase factors .we also stress that depend not only on the measured detector outcome , but also on a particular detector _ preparation _ . as a result, we find the quantum versions of the probability observables , and the general invasive measurement , similarly to the invasive classical case , the measurement of on the detector must be described by a _ quantum operation _ in , which is a completely positive map that performs a _ generalized measurement _ on the system state corresponding to the detector outcome .the operation acting on the identity in produces a positive operator known as a _ quantum effect _ , . by construction ,the set of operations preserves the identity , ; hence , the effects form a partition of the identity , , making them probability observables over a particular detector framework exactly as in .sequences of measurements emphasize the temporal ordering of operations , just as in the invasive classical case . 
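Before composing such measurements in sequence, a minimal numerical sketch shows how Kraus operators and effects arise as partial matrix elements of the coupling unitary; the qubit detector, its preparation in |0>, and the controlled-rotation coupling of strength theta are assumptions made purely for illustration:

```python
import numpy as np

theta = 0.6                                          # assumed coupling (measurement) strength

# assumed coupling: rotate the detector qubit about y by theta only when the system is in |1>
ry = lambda a: np.array([[np.cos(a / 2), -np.sin(a / 2)],
                         [np.sin(a / 2),  np.cos(a / 2)]])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])    # system projectors
U = np.kron(P0, np.eye(2)) + np.kron(P1, ry(theta))  # acts on system (x) detector

# kraus operators M_y = <y|_det U |0>_det : partial matrix elements over the detector factor
Ufull = U.reshape(2, 2, 2, 2)                        # (sys_out, det_out, sys_in, det_in)
det_in = np.array([1.0, 0.0])                        # detector prepared in |0>
M = [np.einsum('j,ijkl,l->ik', np.eye(2)[y], Ufull, det_in) for y in (0, 1)]

# effects (probability observables) form a partition of the system identity
E = [m.conj().T @ m for m in M]
assert np.allclose(E[0] + E[1], np.eye(2))

rho = np.full((2, 2), 0.5)                           # system prepared in |+><+|
for y, m in enumerate(M):
    p = np.trace(m @ rho @ m.conj().T).real          # detector outcome probability
    post = m @ rho @ m.conj().T / p                  # generalized state collapse
    print(y, round(p, 4), np.round(post, 3))
# with this assumed coupling, outcome 1 projects the system onto |1>, while outcome 0
# only weakly disturbs |+> for small theta
```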
given two sets of quantum operations that define the sequential interaction of two detectors with the system and their subsequent conditioning , and , the joint probability of the ordered sequence of detector outcomes is , where .the proper sequential probability observable is not a simple product of the individual probability observables and .these sequence probabilities then give us the full generalization of the abl rule , {{{\big\langle \widetilde { y } \big\rangle } } } { } } & = \frac{{\big\langle \mathcal{e}_y(e'_z ) \big\rangle}_x}{{\big\langle \mathcal{e}(e'_z ) \big\rangle}_x } = \frac{{\big\langle \mathcal{e}_y(e'_z ) \big\rangle}_x}{\sum_{y''\in y}{\big\langle \mathcal{e}_{y''}(e'_z ) \big\rangle}_x } , \\ & = \frac{\sum_{y'\in y ' } \text{tr}_x(\rho_x m^\dagger_{y , y'}e'_z m_{y , y'})}{\sum_{y''\in y}\sum_{y'\in y ' } \text{tr}_x(\rho_x m^\dagger_{y'',y'}e'_z m_{y'',y ' } ) } , \nonumber\end{aligned}\ ] ] and the most general version of the invasive quantum bayes rule , {{{\big\langle \widetilde { y } \big\rangle } } } { } } & = { { \big\langle \widetilde { e'_z } \big\rangle}}_y \frac{{\big\langle e_y \big\rangle}_x}{{\big\langle \mathcal{e}(e'_z ) \big\rangle}_x } , \end{aligned}\ ] ] as with and , the postselected conditioning depends on the entire disturbance of the first measurement via the _ nonselective measurement _ in the denominator .the noncommutativity of the detection operations emphasizes the fact that measurement is an active _ process _ : an experimenter alters the quantum state by coupling it to a detector and then conditioning on acquired information from the detector . without some filtering process that completes the disturbance implied by , there is no measurement .the nonselective measurement also includes the active disturbance of the measurement process , but does not condition on a particular outcome .furthermore , measuring a quantum state in a different order generally disturbs it differently. the state may also in certain conditions be probabilistically `` uncollapsed '' back to where it started by using the correct conditioning sequence . in this sense ,sequential quantum conditioning is analogous to a stochastic control process that guides the progressive disturbance of a state along some trajectory in the state space ._ measurement operators_.since the quantum operation performs a measurement , we will refer to its kraus operators as _ measurement operators_. however , a quantum operation generally has many equivalent double - sided product expansions like in terms of measurement operators .each such set of measurement operators corresponds to a specific choice of framework for the preparation of the detector state .given a specific set of measurement operators , the substitution with unitary will produce the same effect according to but will correspond to a different operation .hence , we conclude that many measurement operations can produce the same probability observables on the system space .therefore , _ probability observables are not sufficient to completely specify a quantum measurement _ : one needs to specify the full operations as in the classically invasive case . _ quantum process tomography_.just as classical probability observables can be characterized via process tomography , operations can be characterized by _ quantum process tomography_. 
one performs quantum process tomography by sending known states into a detector , measuring the detector , then measuring the resulting states to see how the state was changed by the detector .since quantum operations contain information about disturbance as well as conditioning , quantum process tomography generally requires more characterization measurements than pure classical process tomography ._ pure operations_.an initially pure detector state with density produces a _ pure operation _ with a single associated measurement operator that is unique up to the arbitrary phase factor .most laboratory preparation procedures for the detector are designed to produce a pure initial state , so pure operations will be the typical case .a pure operation has the additional property of partially collapsing a pure state to another pure state .it is also most directly related to the probability observable , since the single measurement operator has a polar decomposition in terms of the positive root of the probability observable ._ weak measurement_.if we wish for such a conditioning process to leave the state approximately unchanged , we must make a _ weak measurement _ , just as in the classical case .however , a quantum weak measurement requires a strict condition regarding the measurement operations and not just the probability observables due to the additional disturbance in the measurement .formally , the measurement operations typically depend on a measurement strength parameter such that , where is the identity operation and is the probability for obtaining the detector outcome in the absence of interaction . as with the classical case , the limit as is an idealization known as the _ weak measurement limit _ and is not strictly achievable in the laboratory .the definition implies that subsequent measurements will be unaffected , , and that the probability observables are proportional to the identity in the weak limit , , just as in the classical case .it also follows that any set of measurement operators that characterize must also be proportional to the identity in the weak limit .weak measurements are more interesting in the quantum case than in the classical case due to the existence of incompatible frameworks . since a weak measurement of an observable does not appreciably affect the quantum state , subsequent measurements on incompatible observables can be made that will probe approximately the same state .this technique allows ( noisy ) information about two incompatible frameworks to be gleaned from nearly the same quantum state in a single experiment , which is strictly impossible using strong measurements that collapse the state to a pure state in a particular framework after each measurement .the penalty for using weak measurements is that many more measurements are needed than in the strong measurement case to overcome the ambiguity of the measurement , as discussed in the classical case . to cement these ideas, we consider the task of indirectly measuring polarization in a particular framework . 
for specificity , we will consider the passage of a laser beam with unknown polarization through a glass microscope coverslip , as shown in fig .[ fig : coverslip ] .fresnel reflection off the coverslip leads to a disparity between transmission and reflection of the polarizations , so comparing transmitted to reflected light allows a generalized measurement of polarization , as we demonstrated experimentally in .the system sample space we wish to measure is the polarization with respect to the table ( ) and ( ) , which could in principle be measured ideally with a polarizing beam splitter .the detector sample space is the spatial degree of freedom of the transmitted ( ) and reflected ( ) ports of a coverslip rotated to some fixed angle with respect to the incident beam around an axis perpendicular to the table .the initial state of the detector is the pure state indicating that the beam enters a single incident port ( ) of the coverslip with certainty .the rotation that couples the system to the detector describes the interaction of the beam with the coverslip and has a unitary rotor corresponding to the polarization - dependent scattering matrix of the coverslip . assuming that the scattering preserves beams of pure polarization ,so remains and remains , the rotor decouples into a direct sum of rotors that are specific to each polarization , meaning that has a block - diagonal structure when represented as a matrix . selecting each output port ofthe coverslip produces the two _ measurement operators _ according to , [ eq : polmeasops ] which characterize the _ pure measurement operations _ that modify observables according to , [ eq : polpmo ] and their adjoints that modify the state density according to , [ eq : polpmoa ] the pure measurement operations in turn produce _probability observables _ according to , [ eq : polpo ] in the same framework as and .these probability observables are therefore equivalent to classical probability observables specified by the effective characterization probabilities , , , and .the measurement operators have a polar decomposition in terms of the roots of the probability observables and an extra unitary phase contribution , [ eq : polmeasops2 ] any nonzero relative phase , such as , will affect the framework orientation for subsequent measurements ; however , it will not contribute to the acquisition of information from the measurement since it does not contribute to the probability observables .such relative phase is therefore part of the _ disturbance _ of the measurement process .specifically , the initial state of polarization will be conditioned by a selection of a particular port on the detector according to , although the probabilities in each denominator only depend on the probability observables , the altered states in each numerator depend on the measurement operations and will include effects from the relative phase in the measurement operators . _operation correspondence_.the introduction of contextual values in the quantum case proceeds identically to the classical case of invasive measurements . 
since we must generally represent detector probabilities by _ operations _ within the reduced system space according to and , we must also generally represent detector observables by _ weighted operations _ within the reduced system space , if we are concerned with only a single measurement , or are working within a single framework as in the classical formalism , then for all practical purposes the operation reduces to its associated system observable as in the classical definition ._ contextual values_.we observe a corollary exactly as in the classical case : if we can expand a _ system _ observable in terms of the probability observables generated by a particular measurement operation , then that observable can also be expressed as an equivalent _ detector _observable , which is the quantum form of our main result originally introduced in . as in the classical case, we dub the required detector labels the * contextual values * ( cv ) of the quantum observable with respect to the _ context _ of a specific detection scheme as represented in the system space by the measurement operations .since many measurement operations produce the same probability observables , many detection schemes can use the same cvs to reproduce an observable average . _moments_.as with classically invasive measurements , higher statistical moments of the observable require more care to measure .for instance , we require the following equality in order to accurately reproduce the ^th^ moment of an observable indirectly using the same cv , however , as indicated in , performing a sequence of measurements produces the measurable probability .indeed , will not generally be a well - formed probability . to obtain the equality with a particular choice of cv, we need the additional constraint that _ all the measurement operators must commute with each other_. as a result , they must be part of the same framework as the system observable and hence commute with that observable as well .we will call any detector with commuting measurement operators with respect to a particular observable a _ fully compatible detector _ for that observable .evidently , this is a strict requirement for a detector . alternatively , as with the classical case, we can change the cvs to define new observables that correspond to powers of the original observable , such as .these new observables can then be measured indirectly using the same experimental setup without the need for measurement sequences .the cvs for the ^th^ power of will not be a simple power of the cvs for unless the measurement is unambiguous . _correlation functions_.if a time - evolution unitary rotation is inserted between different observable measurements , then we obtain a quantum _ correlation function _ instead , which should be compared to the classical case .similarly , -time correlations can be defined with time - evolutions between the observable measurements ._ inversion_.since the cvs depend only on the probability observables , which commute with the measured observable for a fully compatible detector , the procedure for determining the cvs will be identical to the classical case .that is , _ the contextual values of a quantum observable exactly correspond to the detector labels for a classically ambiguous detector_. we shall refer the reader back to the classical inversion for discussion on how to solve the relation . 
as a reminder, we advocate the pseudoinverse as a principled approach for picking the cvs in the event of redundancy or course - graining ._ conditioned averages_.we can construct a general _ postselected conditioned average _ from the cvs and the fully generalized abl rule analogously to the classical case , {{{\big\langle \widetilde { f_x } \big\rangle } } } { } } & = \sum_y f_y(y)\ , { \tensor[_{z}]{{{\big\langle \widetilde { y } \big\rangle } } } { } } = \frac{{\big\langle \mathcal{f}_x(e'_z ) \big\rangle}_x}{{\big\langle \mathcal{e}(e'_z ) \big\rangle}_x } , \\ & = \frac{\sum_{y\in y}\sum_{y'\in y ' } f_y(y ) { \text{tr}(\rho_x m^\dagger_{y , y'}e'_z m_{y , y'})}}{\sum_{y\in y}\sum_{y'\in y ' } { \text{tr}(\rho_x m^\dagger_{y , y'}e'_z m_{y , y'})}}. \nonumber\end{aligned}\ ] ] we introduced this type of conditioned average in for the typical case of pure operations with single associated measurement operators .if the postselection is defined in the same framework as the measurement operation , then the nonselective measurement in the denominator will reduce to unity , leaving a classical conditioned average , of the same form as .similarly , the preselected conditioning will also reduce to for such a case .this special case can not exceed the eigenvalue range of the observable : the observable will always reduce to its eigenvalues since either the state or the postselection commute with it . more generally , however , the combination of amplified cvs and the context - dependent probabilities in the general postselected average can send it outside the eigenvalue range of the observable . as we discussed in , having such a conditioned average stray outside the eigenvalue range of the observableis equivalent to a violation of a leggett - garg inequality that tests the assumptions of macrorealism under noninvasive detection . as a result, an eigenvalue range violation gives a direct indication of either _ nonclassicality _ present in a measurement sequence , or intrinsic measurement _ disturbance _ beyond that of noninvasive classical conditioning as we saw in the example in [ sec : cmarbledisturb ] .we refer the reader to for more detail on this matter ._ strong - conditioned average_.there are two other important special cases of the conditioned average worth mentioning : strong measurement and weak measurement .the strong measurement case is distinguished by being constrained exclusively to the eigenvalue range of the observable .specifically , reduces to the form , {{{\big\langle \widetilde { f_x } \big\rangle } } } { } } & = \frac{\sum_{x\in x } f_x(x ) p(x)d_x(z)}{\sum_{x\in x } p(x)d_x(z ) } , \\ & = \frac{\sum_{x\in x } f_x(x ) { \langle x|}\rho{|x\rangle}|{\langle x | z \rangle}|^2}{\sum_{x\in x } { \langle x|}\rho{|x\rangle}|{\langle x | z \rangle}|^2 } , \nonumber\end{aligned}\ ] ] which contains only the eigenvalues of the observable and factored probability products .however , it can not be expressed solely in terms of the observable and a conditioned state as in the classical case due to the disturbances .only when the state or postselection commutes with the observable does reduce to a special case of and become free from disturbance . 
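A minimal numerical sketch of such an eigenvalue-range violation is given below; it assumes an ambiguous qubit measurement of sigma_z with effects (1 ± g sigma_z)/2, pure measurement operators equal to their positive square roots, and contextual values ±1/g, together with nearly orthogonal pre- and postselected states. The strong measurement stays inside [-1, 1], while weaker measurements exceed it and approach the weak value discussed next:

```python
import numpy as np

# assumed ambiguous measurement of sigma_z: effects E_pm = (1 +/- g sigma_z)/2, M_pm = sqrt(E_pm),
# contextual values +/- 1/g (the unique solution of the expansion for this two-outcome detector)
def conditioned_average(g, psi_i, psi_f):
    M = [np.diag([np.sqrt((1 + s * g) / 2), np.sqrt((1 - s * g) / 2)]) for s in (+1, -1)]
    cv = [+1 / g, -1 / g]
    weights = [abs(psi_f.conj() @ m @ psi_i) ** 2 for m in M]   # postselected sequence probs
    return sum(a * w for a, w in zip(cv, weights)) / sum(weights)

a, b = np.pi / 4, np.pi / 4 - 0.1
psi_i = np.array([np.cos(a), np.sin(a)])      # preselected state
psi_f = np.array([np.cos(b), -np.sin(b)])     # nearly orthogonal postselection

for g in (1.0, 0.5, 0.1, 0.01):
    print(g, round(conditioned_average(g, psi_i, psi_f), 3))
print(np.cos(a - b) / np.cos(a + b))          # weak-value limit, here about 9.97
```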
_weak values_.the weak measurement case is distinguished by being the only case of the quantum postselected conditioned average that can become _ context independent _ for any state and postselection ( under certain conditions ) .the context - independent weak limit of the conditioned average is the _ weak value _ , {{{\big\langle \widetilde { f_x } \big\rangle}}}{}}^w & = \frac{{\big\langle e'_z f_x + f_x e'_z \big\rangle}_x}{2 { \big\langle e'_z \big\rangle}_x},\end{aligned}\ ] ] and is expressed entirely in terms of the system expectation functional , the postselection probability observable , and the observable .written in this form it is clear that it is a symmetrized version of the context - independent commuting case ; however , unlike the weak value is not constrained to the eigenvalue range and can even diverge . for a pure initial state with trace - density and pure postselection , the weak value takes the traditional form , {{{\big\langle \widetilde { f_x } \big\rangle}}}{^w_{x } } } & \to \text{re}\frac{{\langle z|}f_x{|x\rangle}}{{\langle z | x \rangle}}.\end{aligned}\ ] ] we will consider under what conditions one can obtain such a weak value in sec .[ sec : wv ] . continuing the example from sec .[ sec : qpbscover ] and fig .[ fig : coverslip ] , observables defined in the same framework as the probability observables may be expressed in terms of the probability observables according to using _ contextual values _ ( cvs ) , exactly as in the classical example , inverting this relation according to produces the unique cvs , [ eq : polcv ] the denominator is unity when the output ports of the coverslip are perfectly correlated with the polarization . otherwise , the denominator is less than one and serves to _ amplify _ the cvs to compensate for the ambiguity of the detection .the numerator contains cross - compensation factors that correct bias in the detector ; that is , the eigenvalue for in the contextual value for is weighted by the conditional probability corresponding to the complementary quantities of and , and so forth .the cvs define the detector observable that is actually being measured in the laboratory , this detector observable corresponds to a detection _operation _ on the system space according to , which fully describes the interaction with the detector , subsequent conditioning , and experimental convention for defining the observable .when no subsequent conditioning is performed on the system , this operation constructs the system observable , as desired .since the pure measurement operations all belong to the same framework and commute with , the operation is also _ fully compatible _ with the observable , meaning it can measure any moment of that observable using the same cvs according to , the quantity indicates a sequence of consecutive measurements made by the same coverslip on the beam to construct the observable for the ^th^ moment of .that is , the output from each port of the coverslip is fed back into the coverslip to be measured again .there are possible outcome sequences for traversals through the coverslip , each with probability of occurring .these probabilities are weighted with appropriate products of corresponding cvs and summed to correctly construct the ^th^ moment of .alternatively , one can change the cvs to directly measure the observable from one traversal of the coverslip . 
the required cvs for , [ eq : polcvnthmom ] are not simple powers of the cvs for unless the measurement is unambiguous .in addition to moments of , we can obtain postselected _ conditioned averages _ of by conditioning on a second measurement outcome characterized by a probability observable after the measurement by the coverslip according to , {{{\big\langle \widetilde { f_x } \big\rangle } } } { } } & = \frac{{\big\langle \mathcal{f}_x(e'_z ) \big\rangle}_x}{{\big\langle \mathcal{e}(e'_z ) \big\rangle}_x},\end{aligned}\ ] ] where is the nonselective measurement by the coverslip .the second measurement could be a polarizer , another coverslip , or any other method for measuring polarization a second time .if the initial state is pure with a density and the final postselection is also pure , then simplifies to a pre- and postselected conditioned average , {{{\big\langle \widetilde { f_x } \big\rangle}}}{_{x } } } & = \frac{f_y(t ) |{\langle z|}m_t{|x\rangle}|^2 + f_y(r ) |{\langlez|}m_r{|x\rangle}|^2}{|{\langle z|}m_t{|x\rangle}|^2 + |{\langle z|}m_r{|x\rangle}|^2}.\end{aligned}\ ] ] if we relate both pure states to the reference state via unitary rotations as defined in , and , then the probabilities take the form , [ eq : polprobs ] we see that each probability possesses an interference term that stems from the relative orientations of the incompatible frameworks for the preparation , measurement , and postselection . in addition , the relative phases in the measurement operators will affect the orientations of the frameworks and further disturb the measurement , as mentioned . for the classical case ,the frameworks coincide , so ; the interference term vanishes ; and , the probabilities reduce to the conditional probabilities that characterize the probability observables .the combination of the expanded range of the cvs and the interference term in the probabilities can make the postselected conditioned averages counter - intuitively exceed the eigenvalue range of the observable .such a violation of the eigenvalue range can not occur from classical conditioning without disturbance as in sec .[ sec : cmarbledisturb ] .we can also measure polarization using a von neumann measurement that uses a detector with a continuous sample space detector , such as position .for example , passing a beam of polarized light through a calcite crystal will continuously separate the polarizations and along a particular position axis . measuring the position profile of the resulting split beam along that axisallows information to be gained about the polarization .for such a setup , measuring the position with a linear scale corresponds to measuring a detector observable for a continuous sample space of distinguishable positions .the observable has a conjugate that satisfies = i 1_y ] . hence , we can model the calcite crystal as a rotation governed by a unitary rotor of the form which will translate polarization by some amount while simultaneously translating polarization by some amount in the opposing direction . 
the parameters and will depend on the geometry of the crystal with respect to the incident beam .suppose the light beam has an initially pure beam profile state described by a density .the probability for obtaining a particular pure position in the profile would then be .each complex factor is the `` wave function '' of the transverse beam profile , whose complex square is the probability density with respect to the integral .if we then pass the beam through the crystal described by the rotor and measure its position in a pure position state , we will have enacted a pure operation on the polarization of the beam that is characterized by a single measurement operator , with components equal to the initial wave function of the detector profile shifted in position by an appropriate .the pure measurement operations define a continuous set of probability observables , with components equal to the initial transverse beam profile shifted in position by an appropriate .unless the shifts become degenerate with then these probability observables can be used to indirectly measure any observable in the framework of and .since the observable appears as a generator for the rotation , it could be tempting to assert that the detector must specifically measure this observable .however , only the _ framework _ in which the generating observable is defined determines which observables can be measured . the choice of cv , which can be made in postprocessing , will calibrate the detector to measure specific observables in that framework .we considered a classical version of similar probability observables in [ sec : ccontinuousexample ] . generalizing that derivation only slightly, we can find the preferred contextual values ( cvs ) for an arbitrary polarization observable , \\v_+(y ) & = \frac{p_y(y - \epsilon_h ) + p_y(y + \epsilon_v)}{a + b(\epsilon_h,\epsilon_v)},\displaybreak[0 ] \\ \label{eq : contpolcvgen } v_-(y ) & = \frac{p_y(y - \epsilon_h ) - p_y(y + \epsilon_v)}{a - b(\epsilon_h,\epsilon_v ) } , \displaybreak[0 ] \\ a & = \int_y p_y^2(y ) \ , dy , \displaybreak[0 ] \\b(\epsilon_h,\epsilon_v ) & = \int_y p_y(y-\epsilon_h)\,p_y(y+\epsilon_v ) \ , dy.\end{aligned}\ ] ] in particular , one can measure the orthogonal observables and using the expansions , for the specific case of an initial gaussian beam centered at zero , we have , [ eq : contpolgauss ] \\ \epsilon & = ( \epsilon_h + \epsilon_v)/2,\displaybreak[0 ] \\\delta & = ( \epsilon_h - \epsilon_v)/2,\displaybreak[0 ] \\ a & = \frac{1}{2\sigma\sqrt{\pi}},\displaybreak[0 ] \\b(\epsilon ) & = a \exp(- ( \epsilon/\sigma)^2 ) , \displaybreak[0 ] \\\label{eq : contpolcv } v_-(y ) & = \sqrt{2}\ , \frac{\exp(-\frac{(y-\delta)^2}{2\sigma^2 } ) \sinh(\frac{\epsilon(y-\delta)}{\sigma^2})}{\sinh ( \frac{\epsilon^2}{2 \sigma^2})},\displaybreak[0 ] \\v_+(y ) & = \sqrt{2}\ , \frac{\exp(-\frac{(y-\delta)^2}{2\sigma^2 } ) \cosh(\frac{\epsilon(y-\delta)}{\sigma^2})}{\cosh ( \frac{\epsilon^2}{2\sigma^2})},\end{aligned}\ ] ] what matters for the measurement is the average translation away from the midpoint .the amplification of the cvs is controlled by the parameter , which serves as an indicator for the ambiguity of the measurement .when the shift is large compared to the width of the gaussian , then ; the shifted gaussians for and are distinguishable ; the cvs approach the eigenvalues of the measurement ; and , the measurement is unambiguous . 
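In either regime, the defining property of these contextual values can be checked numerically. The following sketch uses assumed values for the beam width and shifts and takes v_- to target the h/v polarization observable with eigenvalues ±1 and v_+ the identity; integrating each against the two shifted profiles returns the corresponding eigenvalues:

```python
import numpy as np

sigma, eps_h, eps_v = 1.0, 0.3, 0.5       # assumed beam width and polarization shifts
eps, delta = (eps_h + eps_v) / 2, (eps_h - eps_v) / 2

profile = lambda y: np.exp(-y**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# closed-form gaussian contextual values quoted above
def v_minus(y):
    u = y - delta
    return (np.sqrt(2) * np.exp(-u**2 / (2 * sigma**2))
            * np.sinh(eps * u / sigma**2) / np.sinh(eps**2 / (2 * sigma**2)))

def v_plus(y):
    u = y - delta
    return (np.sqrt(2) * np.exp(-u**2 / (2 * sigma**2))
            * np.cosh(eps * u / sigma**2) / np.cosh(eps**2 / (2 * sigma**2)))

y, dy = np.linspace(-15, 15, 300001, retstep=True)
ph, pv = profile(y - eps_h), profile(y + eps_v)   # probability observable densities for h, v

print(np.sum(v_minus(y) * ph) * dy, np.sum(v_minus(y) * pv) * dy)   # ~ +1, -1
print(np.sum(v_plus(y) * ph) * dy, np.sum(v_plus(y) * pv) * dy)     # ~ +1, +1
```

The check passes whether or not the two shifted profiles overlap appreciably; only the spread of the contextual values changes.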
when the shift is small compared to the width of the gaussian , then , the gaussians for and largely overlap , the cvs diverge , and the measurement is ambiguous .[ fig : continuouscv ] shows the cvs for the gaussian initial beam profile , as well as for a laplace and top - hat profile for comparison .this sort of detection protocol was used in the original paper on weak values in the form of a stern - gerlach apparatus that measures spin analogously to polarization using a continuous momentum displacement generated by a magnetic field .the initial gaussian beam profile shifted an amount away from the midpoint of the initial beam profile in a direction corresponding to the value of the spin .since the beam profile was symmetric about its mean , the generic cvs were implicitly assigned as a linear calibration of the detector , which targets a specific observable analogous to . motivating this implicit choice was the fact that when is sufficiently small , the two overlapping gaussians produce to a good approximation a single resulting gaussian with a shifted mean consistent with such a linear scaling , as shown in fig .[ fig : continuousprobs ] . that such a choice was being madewas later pointed out explicitly in before we identified the role of the cvs in and derived the preferred form .the proposed spin measurement protocol was adapted to a polarization measurement using a calcite crystal , as we have developed in this section , and then verified experimentally . to produce the weak value from the polarization measurement , we postselect on a second measurement to form a conditioned average .if the initial polarization state is pure with a density and the final postselection is also pure , then we have the form , {{{\big\langle \widetilde { f_x } \big\rangle}}}{_{x } } } & = \frac{\int_y f_y(y ) |{\langle z|}m(y){|x\rangle}|^2dy}{\int_y |{\langle z|}m(y){|x\rangle}|^2 dy}.\end{aligned}\ ] ] if we choose the symmetric gaussian case with and take the form of without additional unitary disturbance , and relate both pure states to the reference state via unitary rotations as defined in , and , then the postselected probability density {\tilde{p}}{_x}(y) ] given sufficiently small .no negative probabilities are required to obtain the negative limit point since negative cvs are being averaged in the weak limit and not eigenvalues .all operationally accessible probabilities are positive and well - behaved : the negative cvs are assigned by the experimenter and highlighted by the disturbance in the well - behaved probabilities .we leave the reader to ponder how to interpret the operationally accessible negative conditioned average .however , we note that with at least this measurement context the conditioned averages do obey the equality , {{{\big\langle \widetilde { a } \big\rangle}}}{_{x } } } + { \tensor[_{z}]{{{\big\langle \widetilde { b } \big\rangle}}}{_{x } } } + { \tensor[_{z}]{{{\big\langle \widetilde { c } \big\rangle}}}{_{x } } } = 1,\end{aligned}\ ] ] for all values of .the three sets of cvs sum to unity for each detector outcome , leaving only the normalized sum of detector probabilities {{{\big\langle \widetilde { 1 } \big\rangle}}}{_{x } } } + { \tensor[_{z}]{{{\big\langle \widetilde { 2 } \big\rangle}}}{_{x } } } + { \tensor[_{z}]{{{\big\langle \widetilde { 3 } \big\rangle}}}{_{x } } } = 1 ] is the commutator , and is a unitary rotation of the postselection . 
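The approach of this conditioned average to the weak value as the pointer separation vanishes can be checked numerically before turning to the general argument. The sketch below uses the symmetric Gaussian case with a diagonal preselection, a nearly orthogonal postselection, and illustrative values of the shift; it is a numerical illustration of the limit, not a reproduction of any experimental parameters.

```python
import numpy as np

sigma = 1.0
alpha = np.deg2rad(40.0)                               # postselection angle (arbitrary)
x_state = np.array([1.0, 1.0]) / np.sqrt(2.0)          # preselected polarization
z_state = np.array([np.cos(alpha), -np.sin(alpha)])    # postselected polarization
F = np.diag([1.0, -1.0])                               # polarization observable

weak_value = (z_state @ F @ x_state) / (z_state @ x_state)   # real in this example

y = np.linspace(-40.0, 40.0, 40001)
p = lambda u: np.exp(-u**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
psi = lambda u: np.sqrt(p(u))                          # real Gaussian amplitude

for eps in (0.5, 0.2, 0.05, 0.01):
    p_h, p_v = p(y - eps), p(y + eps)
    a, b = np.trapz(p(y)**2, y), np.trapz(p_h * p_v, y)
    v_minus = (p_h - p_v) / (a - b)                    # contextual values
    # postselected detector density |<z|M(y)|x>|^2 with
    # M(y) = psi(y-eps)|H><H| + psi(y+eps)|V><V|
    amp = z_state[0] * x_state[0] * psi(y - eps) + z_state[1] * x_state[1] * psi(y + eps)
    dens = np.abs(amp)**2
    cond_avg = np.trapz(v_minus * dens, y) / np.trapz(dens, y)
    print(eps, cond_avg)                               # tends to weak_value as eps -> 0

print("weak value:", weak_value)
```

For this nearly orthogonal pre- and postselection the limiting value lies well outside the eigenvalue range [-1, 1], even though every probability entering the average is positive.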
_sufficient conditions_.next we make the following sufficient assumptions regarding the dependence of the relevant quantities on the measurement strength parameter : 1 .the measurement operators are analytic functions of , and thus have well defined taylor expansions around such that they are proportional to the identity in the weak limit , .the unitary parts of the measurement operators are generated by hermitian operators of order , , for some integer .furthermore , each must commute with either the system state or the postselection , = 0 ] .3 . the equality must be satisfied , where the cvs are selected according to the pseudo - inverse prescription .the minimum nonzero order in for all is such that assumption can also be satisfied for some cvs by the truncation to order .that is , for all , , , where is the detector probability in absence of interaction , and some of the may vanish .the probability observables commute with the observable ._ theorem_.given the above sufficient conditions , we have the following theorem : _ in the weak limit the context dependence of the conditioned average vanishes and the weak value is uniquely defined_. _ proof_.to prove the theorem , we expand to the minimum necessary order of and then take the weak limit as .first , we expand to order using assumptions ( 1 ) , and ( 4 ) , generally , the remaining unitary rotation of the postselection will disturb the weak limit . however , if } = 0 ] , then we can apply the state to and find , since , the first term simplifies .the unitary rotation in the second term expands to , and the correction can be absorbed into the overall correction .therefore , after summing over we find up to corrections of order , where the probability observable has the expansion to order , inserting into , we find , {{{\big\langle \widetilde { f_x } \big\rangle } } } { } } & = \frac{{\big\langle { \big\{f_x,\,e'_z\big\}}/2 \big\rangle}_x + \sum_y f_y(\epsilon ; y ) o(\epsilon^{n+1})}{{\big\langle { \big\{1_x,\,e'_z\big\}}/2 \big\rangle}_x + o(\epsilon^{n+1 } ) } , \end{aligned}\ ] ] where we have simplified in the numerator , and in the denominator . hence ,unless the cvs in the numerator have poles larger than the correction terms of order will vanish , producing in the weak limit , as claimed .the last step in obtaining , therefore , is to show that the pseudoinverse solution for that was indicated by assumption ( 3 ) can not have poles larger than .the following lemmas will show this , which will prove the main theorem ._ lemma preliminaries_.first , we note that commutes with by assumption ( 5 ) . as such , we will replace the cv definition with an equivalent matrix equation , the pseudoinverse is constructed from the singular value decomposition as , where and are orthogonal matrices such that , is the singular value matrix composed of the square roots of the eigenvalues of , and is composed of the inverse nonzero elements in .next , we note that the truncation of the matrix to order has the form , where is a matrix whose rows are identical and whose columns contain the interaction - free detector probabilities , and is a matrix whose rows all sum to zero . furthermore , since the solution to the equation is assumed to exist by assumption ( 4 ) , then the dimension of the detector , , must be greater than or equal to the dimension of the system , .we then have the following two lemmas . _lemma 1_._the singular values of the truncated matrix _ _ have maximum leading order _ . 
_proof_.the singular values of are , where are eigenvalues of , with its other eigenvalues being zero .since , this matrix has the simple form , where is times the projection operator onto the probability vector , and .we will use to determine the singular values of . differentiating the eigenvalue relation andthe eigenvector normalization condition with respect to produces the following deformation equation that describes how the eigenvalues of continuously change with increasing , integrating this equation produces the following perturbative expansion of the eigenvalues for small , hence , to prove the lemma it is sufficient to show that and can not both vanish unless for all . since is a projection operator , is its only nonzero eigenvalue with associated eigenvector , to leading order . for , and can be chosen arbitrarily to span the degenerate -dimensional subspace orthogonal to .suppose for some .it follows that since is orthogonal to .therefore , is an eigenvector of with eigenvalue for any .since is symmetric , its eigenvectors form an orthogonal set for any , so we must have the identification . as a result ,the associated eigenvalue vanishes for any , , which proves the lemma ._ lemma 2_._the pseudoinverse solution _ _ to _ _ can not have poles larger than _ . _proof_.in order to satisfy , we have the equivalent condition for each component of , therefore , all singular values corresponding to nonzero components of must also be nonzero ; we shall call these the _ relevant _ singular values . singular values which are not relevant will not contribute to the solution .we can see this since , so any zero element of will eliminate the inverse irrelevant singular value from the solution for .since the orthogonal matrices and do not contain any poles , and since is -independent , then the only poles in the solution must come from the inverses of the relevant singular values in . 
if a singular value has leading order , then its inverse has leading order ; therefore , to have a pole of order higher than then there must be at least one relevant singular value with a leading order greater than .however , if that were the case then the truncation of to order could not satisfy since to that order it would have a relevant singular value of zero according to the previous lemma , contradicting assumption ( 4 ) about needing to satisfy the cv definition with the minimum nonzero order in .therefore , the pseudoinverse solution can have no pole with order higher than and the lemma is proven ._ exceptions_.as the main theorem indicates , the weak value will arise as the weak limit of a conditioned average in many common laboratory situations , which explains its seeming stability in the literature .however , if the sufficiency conditions of the theorem are not met , then a different weak limit may be found .for example , if there is -dependent unitary disturbance in the measurement , then the postselection can be effectively rotated to a different framework for each measurement outcome , which creates additional terms in the weak limit .similarly , if the cvs are -dependent and diverge more rapidly than then additional terms will become relevant in the weak limit .( see , for example , ref .this latter case can happen either from a pathological choice of cvs by the experimenter in the case of redundancy , or from a set of probability observables that can not satisfy the constraint equation with their lowest nonzero order in .such probability observables that do not satisfy the constraint equation to lowest order are poorly correlated with the observable in the weak limit .we refer the reader to for more discussion on the uniqueness issue of weak values .the theorem presented here is a slight generalization of the one presented therein .in this work , we have detailed the contextual - value approach to the generalized measurement of observables that we originally introduced in the letter and further developed in .this approach completes the well - established operational theory of state measurements by directly relating the state - transformations to traditional observables .each such operation typically corresponds to a distinguishable outcome of a correlated detection apparatus .an experimenter can construct an observable from such an apparatus by assigning values to its outcomes .the assigned values can be generally amplified from the eigenvalues of the constructed observable due to ambiguity in the measurement , and thus form a generalized spectrum that depends on the specific measurement context .hence , we call these values _ contextual values _ for the constructed observable that allow its indirect measurement using such a correlated detector . constructing an observable using contextual values requires only classical probability theory , according to . 
hence , the technique may be used wherever bayesian probability theory applies .we have outlined an algebraic approach to operational measurements from within bayesian probability theory to encourage applications along these lines .we have also shown how to construct a quantum probability space as the orbit of a classical probability space under the special unitary group .this point of view illustrates that quantum observables can be constructed from contextual values in precisely the same way as their classical counterparts .the approach also highlights the similarity between lder s rule for updating a quantum state and _ invasive _ classical conditioning , which leads to a similarity between quantum operations and classically invasive measurement operations .numerous physical examples have been given . by putting all observable measurements on the same footing ,the contextual values formalism subsumes not only projective measurements but also weak measurements as special cases . to emphasize this point ,we have analyzed the quantum weak measurement protocol introduced by aharonov et al . in detail as an example using a calcite crystal and a polarized laser beam .we have also derived the quantum weak value as a limit point of a general pre- and postselected conditioned average as the measurement strength goes to zero and have given sufficient conditions for the convergence to hold .like the classically invasive conditioned average , the quantum conditioned average , with the quantum weak value as a special case , can exceed the eigenvalue bounds of the observable .the use of contextual values considerably clarifies and formalizes the process of measuring observables , particularly within a laboratory setting .the elements of the formalism directly describe operationally accessible quantities that can be tomographically calibrated . as such, the technique should be of considerable interest to experimentalists working on measurement and control of both quantum and classical systems .furthermore , the formalism prompts interesting theoretical questions about the foundations of quantum mechanics by highlighting its myriad similarities to classical probability theory .we acknowledge helpful discussions with shantanu agarwal and support from the national science foundation under grant no .dmr-0844899 , and the us army research office under grant grant no .w911nf-09 - 0 - 01417 .
we present a detailed motivation for and definition of the _ contextual values _ of an observable , which were introduced by dressel et al . , phys . rev . lett . * 104 * , 240401 ( 2010 ) . the theory of contextual values is a principled approach to the generalized measurement of observables . it extends the well - established theory of generalized state measurements by bridging the gap between partial state collapse and the observables that represent physically relevant information about the system . to emphasize the general utility of the concept , we first construct the full theory of contextual values within an operational formulation of classical probability theory , paying special attention to observable construction , detector coupling , generalized measurement , and measurement disturbance . we then extend the results to quantum probability theory built as a superstructure on the classical theory , pointing out both the classical correspondences to and the full quantum generalizations of both lder s rule and the aharonov - bergmann - lebowitz rule in the process . as such , our treatment doubles as a self - contained pedagogical introduction to the essential components of the operational formulations for both classical and quantum probability theory . we find in both cases that the contextual values of a system observable form a generalized spectrum that is associated with the independent outcomes of a partially correlated and generally ambiguous detector ; the eigenvalues are a special case when the detector is perfectly correlated and unambiguous . to illustrate the approach , we apply the technique to both a classical example of marble color detection and a quantum example of polarization detection . for the quantum example we detail two devices : fresnel reflection from a glass coverslip , and continuous beam displacement from a calcite crystal . we also analyze the three - box paradox to demonstrate that no negative probabilities are necessary in its analysis . finally , we provide a derivation of the quantum weak value as a limit point of a pre- and postselected conditioned average and provide sufficient conditions for the derivation to hold .
hen eggs provide an excellent possibility for the study of fragmentation of thin brittle shells of disordered materials with the additional advantages of being cheap and easy to handle , making the patience of scientists the only limiting factor for the subsequent improvement of the experimental results .our experiments were performed on ordinary brown and white egg - shells . in the preparations ,first two holes of regular circular shape were drilled on the bottom and top of the egg through which the content of the egg was blown - out . the inside was carefully washed and rinsed out several times and finally the empty shells were dried in a microwave oven to get rid of all moisture of the egg - shell .the drying process proved to be essential to ensure that the cuticula , which can not be blown out , competely looses its toughness . in the impact experimentsintact egg - shells are catapulted onto the ground at a high speed using a simple setup of rubber bands . the experimental setup provided a relatively high energy impact without the possibility of varying the imparted energy .the eggs are shot directly into a plastic bag touching the ground so that no fragments are lost for further evaluation . in the explosion experiment initially the egg - shell is flooded with hydrogen and hung vertically inside a plastic bag .the combustion reaction is initiated by igniting the escaping hydrogen on the top of the egg .the hydrogen immediately reacts with the oxygen which is also drawn up into the egg through the bottom hole , mixing with the remaining hydrogen . when enough air has entered to form a combustible mixture inside the egg , the flame back - fires through the top hole and starts the very quick exothermic reaction .the experiment is carried out inside a soft plastic bag so that secondary fragmentations due to fragment - wall collisions do not occur , furthermore , no pieces were lost after explosion . since the pressure which builds up during combustioncan slightly be changed by the hole size , _i.e _ the smaller the hole , the higher the pressure at the explosion , we performed several series of experiments with hole diameters between 1.2 and 2.5 millimeter .the limit values have practical reasons : a drilling nail of large diameter typically breaks the eggs - shell , on the other hand it is extremely difficult to blow out an egg through a hole of diameter 1 mm or less .it is possible to follow the time evolution of the explosion and impact processes by means of a high speed camera under well controlled conditions .three consecutive snapshots of the explosion process are presented in fig .[ fig : eggsplode ] taken by a camera of 1000 hz frequency .the ignition took place at the top of the egg in fig.[fig : eggsplode] .the instant of back - firing and the initiation of combustion is captured in fig.[fig : eggsplode] , while in fig .[ fig : eggsplode] already the flying pieces can be seen .based on the snapshots the total duration of an explosion is estimated to be of the order of 1 millisecond . in the impact experimentthe egg hits the ground in the direction of its longer axis , as it is illustrated by the picture series of fig .[ fig : impact ] . after hitting the ground ( fig .[ fig : impact] ) , the egg suffers gradual collapse as it moves forward ( fig .[ fig : impact] ) making the impact process relatively longer compared to the explosion .the resulted egg - shell pieces are then carefully collected and placed on the tray of a scanner without overlap . 
in the scanned image fragments are seen as black spots on a white background and were further analyzed by a cluster searching code . in the inset of fig .[ fig : exp_imp ] an example of scanned pieces of an impact experiment is shown where the broad variation of sizes can also be noticed with the naked eye .a dusty phase of shattered pieces was also observed in the experiments with fragment sizes falling in the order of the pixel size of the scanner .the mass of fragments was determined as the number of pixels in the scanned image .since shattered fragments were also comparable to normal dust pieces in the air , they were excluded in the analysis by setting the lower cut - off of fragment masses to a few pixels . as the main quantitative result of the experiments we evaluated the mass distribution of fragments which is defined so that provides the probability of finding a fragment with mass falling between and .[ fig : exp_imp ] presents the fragment mass distributions for impact and explosion experiments averaged over 10 - 20 egg - shells for each curve . for the impact experiment , a power law behavior of the fragment mass distribution can be observed over three orders of magnitude where the value of the exponent can be determined with high precision to .explosion experiments result also in a power law distribution of the same value of for small fragments with a relatively broad cut - off for the large ones .smaller hole diameter in fig .[ fig : exp_imp ] , _ i.e. _ higher pressure , gives rise to a larger number of fragments with a smaller cut - off mass and a faster decay of the distribution at the large fragments . comparing the number of fragments obtained , the ratio of the pressure values in the explosions at hole diameters and 2.0 mm , presented in fig .[ fig : exp_imp ] , was estimated to be about 1.6 .note that the relatively small value of the exponent can indicate a cleavage mechanism of shell fragmentation and is significantly different from the experimental and theoretical results on fragmenting two - dimensional bulk systems where has been found , and from the three - dimensional ones where is obtained .most of the theoretical studies on fragmentation relay on large scale computer simulations since capabilities of analytic approaches are rather limited in this field due to the complexity of the break - up process . over the past years the discrete element method ( dem ) proved to be a very efficient numerical technique for fragmentation phenomena since it has the ability to handle large deformations arising dynamically , and naturally captures the propagation and interaction of a large number of simultaneously growing cracks , which is essential for fragmentation . in order to investigate the fragmentation of spherical shells we constructed a three - dimensional discrete element model such that the surface of the unit sphere is discretized into randomly shaped triangles ( delaunay triangulation ) by throwing points randomly and independently on the surface . 
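Such a random surface mesh can be produced with standard computational-geometry tools. The following sketch is a generic construction, not the authors' code: it throws points uniformly on the unit sphere, obtains the surface Delaunay triangles from the convex hull, and builds the dual spherical Voronoi cells whose areas supply the nodal masses discussed next.

```python
import numpy as np
from scipy.spatial import ConvexHull, SphericalVoronoi

rng = np.random.default_rng(1)

# Points thrown randomly and independently on the unit sphere
# (normalized Gaussian vectors are uniformly distributed on the sphere).
n_nodes = 1000
nodes = rng.normal(size=(n_nodes, 3))
nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)

# For points on a sphere the convex hull is the Delaunay triangulation of the
# surface; its simplices are the triangular surface elements of the model.
# (Vertex ordering may need to be flipped so that all normals point outward.)
triangles = ConvexHull(nodes).simplices

# Dual Voronoi tessellation on the sphere: one polygon, hence one area (mass),
# per node. calculate_areas() requires a reasonably recent SciPy (>= 1.5).
voronoi = SphericalVoronoi(nodes, radius=1.0)
voronoi.sort_vertices_of_regions()
areas = voronoi.calculate_areas()
```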
based on the triangulation ,the dual voronoi tessellation of the surface is also carried out as is illustrated in fig .[ fig : mesh ] .the nodes of the triangulation represent point - like material elements in the model whose mass is defined by the area of the voronoi polygon assigned to it .the bonds between nodes are assumed to be springs having linear elastic behavior up to failure .disorder is introduced in the model solely by the randomness of the tessellation so that the mass assigned to the nodes , the length and cross - section of the springs are determined by the tessellation ( quenched structural disorder ) .after prescribing the initial conditions of a specific fragmentation process , the time evolution of the system is followed by solving the equation of motion of nodes by a predictor - corrector method of fourth order where is the sum of forces exerted by the springs connected to node , and denotes the external driving force , which depends on the loading condition . to facilitate the relaxation of the system at the end of the fragmentation process, a small viscous damping force was also introduced in eq .( [ eq : eom ] ) . in order to account for crack formation in the model springsare assumed to break when their deformation exceeds a certain breaking threshold . a fixed threshold value is set for all the springs resulting in a random sequence of breakings due to the disordered spring properties .the breaking criterion is evaluated at each iteration step and those springs which fulfill the condition are removed from the simulation . as a result of successive spring breakings cracksnucleate , grow and merge on the spherical surface which can give rise to a complete break - up of the shell into smaller pieces .fragments of the shell are defined in the model as sets of nodes ( material elements ) connected by the remaining intact springs .the process is stopped when the system has attained a relaxed state , _i.e. _ when there is no spring breaking over a large number of iteration steps. the main advantage of dem is that it makes it possible to monitor a large number of microscopic physical quantities during the course of the simulation which are hardly accessible experimentally , providing a deep insight into the fragmentation process . with the present computer capacities ,dem models can be designed to be realistic so that the simulation results can even complement the experimental information extending our understanding .the most important parameter values used in our simulations are summarized in table [ tab : table1 ] . in computer simulationstwo different ways of loading have been considered which model the experimental conditions and represent limiting cases of energy input rates : ( i ) _ pressure pulse _ and ( ii ) _ impact _ load . a pressure pulse in a shellis carried out by imposing a fixed internal pressure from which the forces acting on the triangular surface elements are calculated as where denotes the actual area of triangle and the force points in the direction of the local normal , see also fig.[fig : mesh ] .the force is equally shared by the three nodes of the triangle for which the equation of motion eq . ([ eq : eom ] ) is solved . 
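In code, the nodal force assembly for the pressure loading amounts to looping over the surface triangles, computing each current area and outward normal from the node positions, and crediting one third of p·A·n̂ to every corner node. A minimal sketch in generic notation (the authors' implementation may differ in detail):

```python
import numpy as np

def pressure_forces(nodes, triangles, p):
    """Nodal forces produced by a fixed internal pressure p on a triangulated shell.

    nodes:     (N, 3) current node positions
    triangles: (M, 3) node indices, ordered so the right-hand rule gives the
               outward normal
    returns:   (N, 3) forces; each triangle contributes p * A * n_hat / 3 to
               each of its three corner nodes.
    """
    r0, r1, r2 = (nodes[triangles[:, k]] for k in range(3))
    cross = np.cross(r1 - r0, r2 - r0)      # |cross| = 2A, direction = local normal
    tri_force = 0.5 * p * cross             # p * A * n_hat for each triangle
    forces = np.zeros_like(nodes)
    for k in range(3):
        np.add.at(forces, triangles[:, k], tri_force / 3.0)
    return forces
```

These forces enter the equation of motion of the nodes together with the spring forces and the small viscous damping term.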
since the surface area of the shell increases , the expansion under constant pressure implies a continuous increase of the driving force and of the imparted energy .the impact loading realizes the limiting case of instantaneous energy input by giving a fixed initial radially oriented velocity to the material elements and following the resulted time evolution of the system by solving the equation of motion eq .( [ eq : eom ] ) .the control parameter of the system which determines the final outcome of the process are the fixed pressure and the initial kinetic energy for the pressure pulse and impact loading , respectively . .[ tab : table1 ] parameter values used in the simulations . [ cols="^,^,^,^",options="header " , ]in the simulations , in both loading cases the spherical shell is initially completely stress free with no energy stored in deformation . when a constant pressure is imposed the total energy of the shell increases due to the work done by the filling gas where denotesthe actual volume during the expansion and is the volume change with respect to the initial state .the total energy can be written as the sum of the kinetic energy of material elements and of the elastic energy stored in deformation , , where is proportional to the change of surface area of the expanding sphere with respect to the initial surface . introducing the relative volume change as an independent variable, the total energy and the elastic energy can be cast in the form ^ 2 , \label{eq : eel } \end{aligned}\ ] ] where the surface change was expressed in terms of .furthermore , the parameter of the system depends on the properties of the triangulation and the characteristic physical quantities of springs ( young modulus , length , thickness ) .it is interesting to note that there exists a specific pressure value below which the expansion always stops at a maximum volume change depending on , however , for the expansion keeps always accelerating . for a given the value of can be determined from the condition so that ^ 2,\end{aligned}\ ] ] and can be identified as the highest pressure for which eq.([eq : p_star ] ) can be solved for .usually can only be realized at low pressure values , because at higher pressures the system suffers complete break - up much below , due to the finite strength of the springs .[ fig : energy_both ] illustrates the evolution of the total , kinetic , and elastic energies as a function of for both pressure and impact loading . in the case of pressure loading itcan be observed that the total energy extracted from the simulations agrees well with the analytic prediction of eq.([eq : etot ] ) .the numerical value of the multiplication factor of the elastic energy was obtained by fitting the expression eq.([eq : eel ] ) to the curve of in the figure . 
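The volume change that serves as the independent variable above can be evaluated directly on the triangulated shell by summing the signed volumes of the tetrahedra ("pyramids") spanned by each surface triangle and the shell centre, the procedure also noted below. A short sketch:

```python
import numpy as np

def shell_volume(nodes, triangles, center=None):
    """Enclosed volume of a triangulated closed surface: sum of the signed
    volumes of the tetrahedra formed by each triangle and a reference point."""
    if center is None:
        center = nodes.mean(axis=0)
    r0, r1, r2 = (nodes[triangles[:, k]] - center for k in range(3))
    # signed tetrahedron volume = (r0 . (r1 x r2)) / 6; consistently outward-
    # oriented triangles make every contribution positive.
    vols = np.einsum('ij,ij->i', r0, np.cross(r1, r2)) / 6.0
    return vols.sum()

# relative volume change used as control variable:
# dv = shell_volume(nodes_now, triangles) / shell_volume(nodes_0, triangles) - 1
```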
due to the constant pressure ,the total force acting on the shell is proportional to the actual surface area so that the system is driven by an increasing force during the expansion process .since the driving force increases with a diminishing rate when approaching the limit volume change , it follows that the pressure loading case is analogous to the stress controlled quasistatic loading of bulk specimens .according to the simulations , under pressure loading there exists a critical pressure below which the expansion always stops at a finite volume and the shell only suffers partial failure _( damage ) _ in the form of cracks but it keeps its integrity .when the pressure exceeds , however , the system surpasses the critical volume change when abruptly a large amount of springs break resulting in the break - up of the system _( fragmentation)_. note that .the critical volume change where fragmentation sets in during the expansion can be identified by the location of the sudden drop of the elastic energy in fig.[fig : energy_both ] caused by the large amount of spring breaking which occurs in a very narrow interval , resulting in a rapid formation of cracks on the surface .the value of is mainly determined by the fixed breaking threshold and the disordered spring properties . since the shell is under constant pressure the nucleated microcracks can grow and join giving rise to planar pieces surrounded by a free crack surface ( fragment ) , as is illustrated in fig.[fig : cracks] .first large fragments are formed which then break - up into smaller pieces until the surviving springs can sustain the remaining stress , see fig .[ fig : cracks] . for simplicity , in the simulations the pressureis kept constant even if the system has lost its integrity , which has formally the consequence that pieces of the shell formed in the final state of fragmentation keep accelerating under the action of a constant force which explains the increasing kinetic energy in fig .[ fig : energy_both ] following fragmentation .the volume of the system is numerically calculated as the sum of the volume of pyramidal objects defined by the surface elements and the center of the sphere , which provides a meaningful result even after break - up in fig .[ fig : energy_both ] in the vicinity of .the critical pressure , required to exceed the critical volume change to achieve fragmentation , can be estimated as .when loading is imposed by an instantaneous energy input , there is no further energy supply , the total energy of the system is either constant or decreases due to the viscous dissipation and the breaking of springs ( see fig.[fig : energy_both ] ) . since the elastic energy is solely determined by the deformation , the curve of and the critical volume change where break - up arises in fig.[fig : energy_both ] coincide with the corresponding values of the pressure loading .similarly to the pressure loading case , simulations revealed that a critical value of the imparted energy can be identified below which the shell maintains its integrity suffering only damage , while exceeding gives rise to a complete fragmentation of the shell .the resulted fragments on the shell surface obtained in the fragmented regime can be seen in fig.[fig : cracks] .to give a quantitative characterization of the break - up of shells and to reveal the nature of the transition between the damaged and fragmented states large scale simulations have been performed varying the control parameters , _i.e. 
_ the fixed pressure , and the imparted energy over a broad range .the most important characteristics of our fragmenting shell system , that can be compared to the experimental findings is the variation of fragment masses when changing the control parameters . in the simulationstwo cut - offs arise for the fragment masses , where the lower one is defined by the single unbreakable material elements of the model and the upper one is due to the finite size of the system . for both types of loading above the critical point the typical fragment size obtained at the instant of break - up decreases with increasing control parameter , which can be described analytically in terms of an energy balance argument similarly to the one given in ref . . the loading energy of a shell region of linear extension and mass , _ i.e. _ the energy stored in the motion of particles separating the piece from its surrounding , can be written as {kin}(\delta v_c)l^2 = \left[e_{kin}(\delta v_c)/m_{tot}\right]l^4 $ ] , where denotes the total kinetic energy of the shell at the instant of break - up and is the total mass of the shell .the separation of the piece from its surrounding costs energy proportional to the fragment surface .the equilibrium fragment size can be obtained by minimizing the sum of the loading and surface energy densities with respect to , which results in .it has been shown in the previous section that at the critical point , the total kinetic energy of the system when break - up occurs takes zero value .it follows that above the critical point has a linear dependence on the distance from the critical point so that for , and for hold . substituting these results into eq .( [ eq : dens ] ) , the typical fragment mass at the instant of break - up can be cast into the form eqs .( [ eq : typic_pres],[eq : typic_imp ] ) express that the typical fragment mass obtained at the time of break - up decreases according to a power law with increasing distance from the critical point .the exponent of the power law is universal in the sense that it does not depend on specific material properties of the shell .later on during the fragmentation process the elastic energy stored in deformation may result in succesive breakings of the large fragments .hence , it can be expected that eqs .( [ eq : typic_pres],[eq : typic_imp ] ) describe the scaling behaviour of the largest fragments , which did not undergo substantial size reduction until reaching the final relaxed state . _ largest fragments . _ to characterize the degree of fragmentation , _i.e. _ the size reduction achieved in the simulations , we calculated the average mass of the largest and of the second largest fragment normalized by the total mass as a function of the pressure , and input energy in the case of pressure and impact loading , respectively .the results are presented in figs.[fig : maxmass_press ] , and [ fig : maxmass_inst ] .it can be seen that in both cases the maximum fragment mass is a monotonically decreasing function of the control parameters and , however , the functional forms are different in the two cases. 
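The fragments defined earlier — maximal sets of nodes still joined by intact springs — and the largest and second-largest masses used here can be extracted with a single connected-components pass over the surviving spring network. A generic sketch using scipy's graph utilities, not the authors' code:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fragment_mass_list(n_nodes, intact_springs, node_mass):
    """Fragment masses, sorted in decreasing order.

    intact_springs: (S, 2) node-index pairs of the springs that survived
    node_mass:      (n_nodes,) masses of the material elements
    """
    i, j = intact_springs[:, 0], intact_springs[:, 1]
    adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n_nodes, n_nodes))
    _, label = connected_components(adj, directed=False)
    masses = np.bincount(label, weights=node_mass)
    return np.sort(masses)[::-1]

# m = fragment_mass_list(...)
# m[0] / m.sum() and m[1] / m.sum() are the normalized largest and
# second-largest fragment masses plotted in the figures.
```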
low pressure values in fig .[ fig : maxmass_press ] result in a breaking of springs , however , hardly any fragments are formed except for single elements broken out of the shell along cracks .hence , the mass of the largest fragment is practically equal to the total mass of the system , while the second largest fragment is orders of magnitude smaller ( _ damage _ ) .increasing however the pressure above the threshold value the largest fragment mass becomes much smaller than the total mass , furthermore , in this regime there is only a slight difference between the largest and second largest fragments , indicating the complete disintegration of the shell into pieces ( _ fragmentation _ ) .the value of the critical pressure needed to achieve fragmentation and the functional form of the curve of above was determined such that was plotted as a function of the difference varying until a straight line is obtained on a double logarithmic plot .the result is presented in the inset of fig .[ fig : maxmass_press ] where a power law dependence of is evidenced as a function of the distance from the critical point the exponent was obtained in good agreement with the analytic prediction of eq .( [ eq : typic_pres ] ) .detailed studies in the vicinity of revealed a finite jump of both and at which implies that fragmentation occurs as an abrupt transition at the critical point , see fig.[fig : maxmass_press ] . in fig .[ fig : maxmass_inst ] the corresponding results are presented for the case of impact loading as a function of the total energy imparted to the system initially .the mass of the largest fragment is again a monotonically decreasing function of the control parameter , however , it is continuous in the entire energy range considered .careful analyzes revealed the existence of two regimes with a continuous transition at a critical value of the imparted energy . in the inset of fig .[ fig : maxmass_inst ] is shown as a function of the distance from the critical point where was determined using the same technique as for .contrary to the pressure loading , exhibits a power law behavior on both sides of the critical point but with different exponents the numerical values of the exponents were obtained as and , above and below the critical point respectively .note that the value of coincides with the corresponding exponent of the pressure loading and is in a good agreement with the analytic prediction of eq.([eq : typic_imp ] ) .below the critical point the second largest fragment is again orders of magnitude smaller than the largest one , which implies that in this energy range the shell suffers only damage in the form of cracks , while above the critical point the break - up of the entire shell results in comparable values of the largest and second largest fragment masses . at the transition point between the damaged and fragmented statesthe mass of the second largest fragment has a maximum , while the curve of the largest one exhibits a curvature change , see fig . 
[fig : maxmass_inst ] ._ average fragment mass ._ more insight can be obtained into the fragmentation process by studying the so - called single - event moments of fragment masses where denotes the moment of fragment masses in the realization of a fragmentation process , is the number of fragments of mass in event .the ratio of the second and the first moments provides a measure for the average fragment mass in a specific experiment averaging over simulations with different realizations of disorder the average fragment mass was obtained as a function of the control parameter of the system . due to the abrupt nature of the transition from the damaged to the fragmented states at the critical pressure , under pressure loading can not be evaluated below . however , when exceeds the critical pressure the average fragment mass monotonically decreases in fig.[fig : m2m1_pressure ] .the inset of fig .[ fig : m2m1_pressure ] shows as a function of the distance from the critical point where the same value of was used as in fig.[fig : maxmass_press ] .a power law dependence of is evidenced as a function of for and the value of the exponent was obtained to be . for impact loading be evaluated on both sides of the critical point with a sharp peak in the vicinity of which is typical for continuous phase transitions in finite systems , see fig.[fig : m2m1_inst ] .a power law dependence of on the distance from the critical point is again revealed for , which is illustrated in the inset of fig .[ fig : m2m1_inst ] .the value of the exponent was determined by fitting , which practically coincides with the value of pressure loading . _ fragment mass distributions . _the most important characteristic quantity of our system which can also be compared to the experimental results is the mass distribution of fragments . under impact loading for we found that has a pronounced peak at large fragments indicating the presence of large damaged pieces , see fig .[ fig : massdist_inst ] . approaching the critical point peak gradually disappears and the distribution asymptotically becomes a power law at .we can observe in fig .[ fig : massdist_inst ] that above the critical point the power law remains for small fragments followed by a cut - off for the large ones , which decreases with increasing . for pressure loading only be evaluated above .the evolution of with increasing pressure is presented in fig.[fig : massdist_press ] , where the mass distribution always shows a power law behavior for small fragments with a relatively broad cut - off for the large ones . for the purpose of comparison , a mass distribution obtained at an impact energy close to the critical point , and distributions at two different pressure values of the ratio 1.6are plotted in fig .[ fig : exp_imp ] along with the experimental results . for impactan excellent agreement with the experimental and theoretical results is evidenced . 
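The moment analysis itself is a few lines of code. The sketch below follows the definitions as stated — the k-th single-event moment is the sum of the k-th powers of the fragment masses of that event, and the average fragment mass is the event-averaged ratio of the second to the first moment; no fragment is excluded, since the text does not specify any exclusion.

```python
import numpy as np

def single_event_moment(masses, k):
    """k-th moment M_k = sum over fragments of m**k for one break-up event."""
    return float(np.sum(np.asarray(masses, dtype=float) ** k))

def average_fragment_mass(events):
    """Average over events (disorder realizations) of the ratio M_2 / M_1."""
    return float(np.mean([single_event_moment(m, 2) / single_event_moment(m, 1)
                          for m in events]))
```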
for pressure loading ,the functional form of has a nice qualitative agreement with the experimental findings on the explosion of eggs , furthermore , distributions at the same ratio of pressure values obtained by simulations and experiments show the same tendency of evolution , see fig .[ fig : exp_imp ] .[ fig : mass_scal_inst ] and [ fig : mass_scal_press ] demonstrate that by rescaling the mass distributions above the critical point by plotting as a function of an excellent data collapse is obtained with .the data collapse implies the validity of the scaling form typical for critical phenomena .the cut - off function has a simple exponential form for impact loading ( see fig.[fig : mass_scal_inst ] ) , and a more complex one containing also an exponential component for the pressure case ( see fig . [fig : mass_scal_press ] ) .the average fragment mass occurring in the scaling form eq .( [ eq : scaling ] ) diverges according to a power law given by eqs.([eq : avermass_pow],[eq : avermass_inst ] ) when approaching the critical point .the good quality of collapse and the functional form eq .( [ eq : scaling ] ) also imply that the exponent of the mass distribution does not depend on the value of the pressure or the kinetic energy contrary to the bulk fragmentation where an energy dependence of was reported .the rescaled plots make possible an accurate determination of the exponent , where and were obtained for impact and pressure loading , respectively .hence , a good quantitative agreement of the theoretical and experimental values of the exponent is evidenced for the impact loading of shells , however , for the case of pressure loading the numerically obtained exponent turned out to be somewhat higher than in the case of exploded eggs .we presented a detailed experimental and theoretical study of the break - up of closed shells arising due to a shock inside the shell .for the purpose of experiments brown and white hen egg - shells were carefully prepared to ensure a high degree of brittleness of the disordered shell material .the break - up of the shell was studied under two different loading conditions , _i.e. _ explosion caused by a combustible mixture and impact with the hard ground . as the main outcome of the experiments , the mass distribution of fragments proved to be a power law in both loading cases for small fragment sizes , however , qualitative differences were obtained in the limit of large fragments for the shape of the cut - off .we worked out a discrete element model for the break - up of shells which provides an insight into the dynamics of the process by simultaneously monitoring several microscopic quantities in the framework of molecular dynamics simulations . in the simulationstwo ways of loading have been considered , which mimic the experimental conditions and represent limiting cases of energy input rates : during an expansion under constant pressure the shell is driven by an increasing force with a continuous increase of the imparted energy , while the impact loading realizes the instantaneous input of an energy .simulations revealed that depending on the value of and , the final outcome of the break - up process can be classified into two states , _i.e. _ damaged and fragmented with a sharp transition in between at a critical value of the control parameters and . in the fragmented regime power law fragment mass distributions were obtained in satisfactory agreement with the experimental findings . 
analyzing the behavior of the system in the vicinity of the critical point , , we showed that power law distributions arise in the break - up of shells due to an underlying phase transition between the damaged and fragmented states , which proved to be abrupt for explosion , and continuous for impact .due to its unique characteristics , the break - up of shells defines a new universality class of fragmentation phenomena , different from that of the two- and three - dimensional bulk systems .based on universality , our results should be applicable to describe the break - up of other closed shell systems composed of disordered brittle materials .explosion of shell - like fuel containers , tanks , high pressure vessels often occur as accidental events in industry , or in space missions where also the explosion of complete satellites may occur creating a high amount of space debris orbiting about earth . for the safety design of shell constructions , and for the tracking of space debris it is crucial to have a comprehensive understanding of the break - up of shells . due to the universality of fragmentation phenomena ,our results can be exploited for these purposes . in the fragmentation of bulk systems under appropriate conditionsa so - called detachment effect is observed when a surface layer breaks off from the bulk and undergoes a separated fragmentation process .this effect also shows up in the fragment mass distributions in the form of a power law regime of small fragments of an exponent smaller than for the large ones .our results on shell fragmentation can also provide a possible explanation of this kind of composite power laws of bulk fragmentation .this work was supported by the collaborative research center sfb381 and by otka t037212 , m041537 .f. kun was supported by the research contract fkfp 0118/2001 and by the gyrgy bksi foundation of the hungarian academy of sciences .the authors are also thankful to the technical support of h. gerhard from ikp .d. l. turcotte , j. of geophys .res . , 1921 ( 1986 ) . l. zhang , x. jin , and h. he , j. phys .phys . * 32 * 612 ( 1999 ) .j. j. gilvarry , j. appl .* 32 * , 391 ( 1961 ) ; j. appl . phys . * 32 * , 400 ( 1961 ) . t. matsui , t. waza , k. kani and s. suzuki , j. of geophys ., 10968 ( 1982 ) .a. fujiwara and a. tsukamoto , icarus * 44 * , 142 ( 1980 ) .h. inaoka , e. toyosawa , and h. takayasu , phys .lett . * 78 * , 3455 ( 1997 ) .t. kadono , phys . rev. lett . * 78 * , 1444 ( 1997 ) .t. kadono and m. arakawa phys .e * 65 * , 035107 ( 2002 ) .h. inaoka , m. ohno , fractals * 11 * , 369 ( 2003 ) . f. g. bridges , a. hatzes , and d. n. c. lin , nature * 309 * , 333 ( 1984 ) .n. arbiter , c. c. harris and g. a. stamboltzis , soc . of min .eng . * 244 * , 119 ( 1969 ) .l. oddershede , p. dimon , and j. bohr , phys .lett . , 3107 ( 1993 ) .a. meibom and i. balslev , phys .lett . * 76 * , 2492 ( 1996 ) .h. katsuragi , d. sugino , and h. honjo phys .e * 68 * , 046105 ( 2003 ) .h. katsuragi , d. sugino , h. honjo , preprint cond - mat/0310479 .x. campi and h. krivine , z. phys .a * 344 * , 81 ( 1992 ) .s. steacy and c. sammis , nature * 353 * , 250 ( 1991 ) .h. inaoka and h. takayasu , physica a * 229 * , 1 ( 1996 ) . m. marsili and y. c. zhang , phys .lett . * 77 * , 3577 ( 1996 ) .j. strm and j. timonen , phys .lett . * 78 * , 3677 ( 1997 ) .j. strm , m. kellomki and j. timonen , phys . rev .e * 55 * , 4757 ( 1997 ) .r. botet and m. ploszajczak , int .e * 3 * , 1033 ( 1994 ) .r. englman , j. phys . 
: Condens. Matter *3*, 1019 (1991).
G. Hernandez and H. J. Herrmann, Physica A *215*, 420 (1995).
T. Ashurst and B. L. Holian, Phys. Rev. E *59*, 6742 (1999).
J. A. Åström, B. L. Holian, and J. Timonen, Phys. Rev. Lett. *84*, 3061 (2000).
E. S. C. Ching, S. Liu, and K.-Q. Xia, Physica A *287*, 83 (2000).
A. Diehl, H. A. Carmona, L. E. Araripe, J. S. Andrade, Jr., and G. A. Farias, Phys. Rev. E *62*, 4742 (2000).
A. Bershadskii, J. Phys. A *33*, 2179 (2000).
A. Bershadskii and E. S. C. Ching, J. Stat. Phys. *104*, 49 (2001).
A. Bershadskii, Chaos Solitons Fractals *13*, 185 (2002).
P. L. Krapivsky and E. Ben-Naim, Phys. Rev. E *68*, 021102 (2003).
F. Kun and H. J. Herrmann, Phys. Rev. E *59*, 2623 (1999).
B. Behera, F. Kun, S. McNamara, and H. J. Herrmann, preprint cond-mat/0404057.
F. Kun and H. J. Herrmann, Comput. Methods Appl. Mech. Eng. *138*, 3 (1996).
F. Kun and H. J. Herrmann, Int. J. Mod. Phys. C *7*, 837 (1996).
F. Wittel, F. Kun, H. J. Herrmann, and B.-H. Kröplin, Phys. Rev. Lett. (in print), cond-mat/0402461.
C. Thornton, K. K. Yin, and M. J. Adams, J. Phys. D: Appl. Phys. *29*, 424 (1996).
W. Benz and E. Asphaug, Icarus *107*, 98 (1994).
A. V. Potapov, M. A. Hopkins, and C. S. Campbell, Int. J. Mod. Phys. C *6*, 399 (1995).
S. Redner, in _Statistical Models for the Fracture of Disordered Media_, edited by H. J. Herrmann and S. Roux (North-Holland, Amsterdam, 1990).
C. Moukarzel and H. J. Herrmann, J. Stat. Phys. *68*, 911 (1992).
K. B. Lauritsen, H. Puhl, and H. J. Tillemans, Int. J. Mod. Phys. C *5*, 909 (1994).
a theoretical and experimental study of the fragmentation of closed thin shells made of a disordered brittle material is presented . experiments were performed on brown and white hen egg - shells under two different loading conditions : fragmentation due to an impact with a hard wall and explosion by a combustion mixture giving rise to power law fragment size distributions . for the theoretical investigations a three - dimensional discrete element model of shells is constructed . molecular dynamics simulations of the two loading cases resulted in power law fragment mass distributions in satisfactory agreement with experiments . based on large scale simulations we give evidence that power law distributions arise due to an underlying phase transition which proved to be abrupt and continuous for explosion and impact , respectively . our results demonstrate that the fragmentation of closed shells defines a universality class different from that of two- and three - dimensional bulk systems . closed shells made of solid materials are often used in every day life , industrial applications and engineering practice as containers , pressure vessels or combustion chambers . from a structural point of view aircraft vehicles , launch vehicles like rockets and building blocks of a space station are also shell - like systems , and even certain types of modern buildings can be considered as shells . the egg - shell as nature s oldest container proved to be a reliable construction for protecting life . in most of the applications shell - like constructions operate under an internal pressure much higher than the surrounding one . hence , careful design and optimization of structural and material properties is required to ensure the stability and reliability of the system . closed shells usually fail due to an excess internal load which can arise either as a result of slowly driving the system above its stability limit during its usage or service time , or by a pressure pulse caused by an explosive shock inside the shell . due to the widespread applications , the failure of shell systems is a very important scientific and technological problem which has also an enormous social impact due to the human costs arising , for instance , in accidental events . fragmentation , _ i.e. _ the breaking of particulate materials into smaller pieces is abundant in nature and underlies several industrial processes , which attracted a continuous interest in scientific and engineering research over the past decades . fragmentation phenomena can be observed on a broad range of length scales ranging from the collisional evolution of asteroids and meteor impacts on the astrophysical scale , through geological phenomena and industrial applications on the intermediate scale down to the break - up of large molecules and heavy nuclei on the atomic scale . in laboratory experiments on the fragmentation of solids , the energy input is usually achieved by shooting a projectile into a solid block , making an explosion inside the sample or by the collision of macroscopic bodies ( free fall impact ) . due to the violent nature of the process , observations on fragmenting systems are often restricted to the final state , making the fragment size ( volume , mass , charge , ... ) to be the main characteristic quantity . 
the most striking observation on fragmentation is that the distribution of fragment sizes shows a power law behavior , independently on the way of imparting energy , relevant microscopic interactions and length scales involved , with an exponent depending only on the dimensionality of the system . during the past years experimental and theoretical efforts focused on the validity region and the reason of the observed universality in 1 , 2 , and 3 dimensions . detailed studies revealed that universality prevails for large enough input energies when the system falls apart into small enough pieces , however , at lower energies a systematic dependence of the exponent on the input energy was evidenced . recent investigations on the low energy limit of fragmentation suggest that the power law distribution of fragment sizes arises due to an underlying critical point . besides the industrial and social impact of the failure of shell like systems , they are also of high scientific importance for the understanding of fragmentation phenomena . former studies on fragmentation have focused on the behavior of bulk systems in one , two and three dimensions under impact and explosive loading , however , hardly any studies have been devoted to fragmentation of shells . the peculiarity of the break - up of closed shells originates from the fact that the local structure is inherently two - dimensional , however , the dynamics of the systems , the motion of material elements , deformation and stress states are three - dimensional which allows for a rich variety of failure modes . in this paper we present a detailed experimental and theoretical study of the fragmentation of closed solid shells arising due to an excess load inside the shell . experiments were performed on brown and white hen egg - shells under two different loading conditions : fragmentation due to an impact with a hard wall and explosion by a combustion mixture have been considered resulting in power law fragment size distributions . for simplicity , our theoretical study is restricted to spherical shells such that a three dimensional discrete element model of spherical shell systems was worked out . in molecular dynamics simulations of the two loading cases , power law fragment mass distributions were obtained in satisfactory agreement with experiments . based on large scale simulations we give evidence that power law distributions arise due to an underlying phase transition which proved to be abrupt for explosion and continuous for impact . analyzing the energetics of the explosion process in the two loading cases and the evolution of the fragment mass distributions we demonstrate that the fragmentation of closed shells defines a new universality class different from that of two- and three - dimensional bulk systems .
the aim of the present paper is to present a concise summary and some applications of a nomenclature and notation for the general description of space - time experiments introduced and explained in detail in ref .the latter is the third in a series of recently written papers , devoted to space time physics in the absence of gravitation , that correct several misconceptions about the subject originating in einstein s seminal special relativity paper .the most important of these , the spurious nature of the ` relativity of simultaneity ' ( rs ) and ` length contraction ' ( lc ) effects was explained in ref . and further discussed from different points - of - view in refs . . at the time of writing, there is ample and precise experimental confirmation of the time dilatation ( td ) effect , predicted as a consequence of the space - time lorentz transformation ( lt ) in ref . , but none for rs or lc .earth - satellite based experiments to test for the existence of rs have been proposed by the present author .the present paper contains , in the following section , definitions of the concepts of base and travelling frames and a space - time experiment and its reciprocal , introduced in ref .the following sections contain applications of these concepts : time dilatation and the simultaneity of spatially separated events in different inertial frames , the lorentz invariance of spatial intervals , velocity transformation formulae and reciprocity relations , einstein s train - embankment experiment and a thought experiment involving two trains moving on parallel tracks at different speeds due to sartori .as explained below , these two thought experiments were incorrectly analysed in previous papers by the present author .an experiment is considered where a ponderable physical object at some fixed position in an inertial frame , s , is in uniform motion relative to another inertial frame s. the frame s is denoted as the _ base frame _ of the experiment , s as a _travelling frame_. as is conventional the origin of s moves along the positive -axis in s with speed , the - and -axes being parallel , and the object lying on the axis .the above configuration describes a _ primary experiment _ ; the value of is a fixed initial condition specified in the frame s. an experiment with a _ reciprocal configuration _ is one in which the origin of s moves along the negative -axis with speed .s is now the base frame and s the travelling frame .the value of is a fixed initial condition ( in general not equal to ) specified in the frame s. in the special case that , the experiment with the reciprocal configuration is termed _ reciprocal _ to the primary experiment , and _ vice versa_.the nomenclature introduced above is now applied to a primary experiment and its reciprocal , in which similar clocks c , c are situated at the origins of s and s respectively . in the primary experimentc moves with speed along the positive -axis in s , and in the reciprocal experiment c moves with speed along the negative -axis in s. 
the lorentz transformations ( and their inverses ) describing the experiments are as follows : transformation : = 0,~\rightarrow x({\rm c'})_b = vt({\rm c})_b \\ t'({\rm c'})_t & = & \gamma [ t({\rm c})_b- \frac { v x({\rm c'})_b}{c^2}],~\rightarrow t'({\rm c'})_t = \frac{t({\rm c})_b}{\gamma } \end{aligned}\ ] ] inverse transformation : ,~\rightarrow x({\rmc'})_b = \gamma vt'({\rm c'})_t = vt({\rm c})_b \\t({\rm c})_b & = & \gamma [ t'({\rm c'})_t+ \frac { v x'({\rm c'})_t}{c^2}],~\rightarrow t({\rm c})_b = \gamma t'({\rm c'})_t \end{aligned}\ ] ] transformation : = 0,~\rightarrow x'({\rm c})_b = -vt'({\rm c'})_b \\t({\rm c})_t & = & \gamma [ t'({\rm c'})_b+ \frac { v x'({\rm c})_b}{c^2}],~\rightarrow t({\rm c})_t = \frac{t'({\rm c'})_b}{\gamma } \end{aligned}\ ] ] inverse transformation : ,~\rightarrow x'({\rm c})_b = -\gamma vt({\rm c})_t = -vt'({\rm c'})_b \\ t'({\rm c'})_b & = & \gamma [ t({\rm c})_t-\frac { v x({\rm c})_t}{c^2}],~\rightarrow t'({\rm c'})_b = \gamma t({\rm c})_t \end{aligned}\ ] ] where . and are the times recorded by c and c respectively and the subscripts and specify whether the space or time coordinate is defined in a base frame or a travelling frame , respectively . thus and are times recorded by clocks at rest in primary and reciprocal experiments , respectively while and are the respective times recorded by clocks in motion in the two experiments.the time dilatation ( td ) relations given by the second equations in ( 3.2 ) , ( 3.4 ) , ( 3.6 ) and ( 3.8 ) are obtained by using the equations of motion in ( 3.1 ) , ( 3.3 ) , ( 3.5 ) and ( 3.7 ) respectively to eliminate the spatial coordinates on the right sides of the first equations in ( 3.2 ) , ( 3.4 ) , ( 3.6 ) and ( 3.8 ) .the following remarks may be made concerning eqns.(3.1)-(3.8 ) * the primary experiment and its reciprocal are physically independent .the lt equations for the primary experiment contain only the spatial cordinates of the travelling clock c , the position of the stationary base - frame clock c being arbitary .the lt equations for the reciprocal experiment contain only the spatial cordinates of the travelling clock c , the position of the stationary base frame clock c being arbitary . * in both experiments , clocks in the travelling ( base ) frame appear to be running slower ( faster ) to observers in the base ( travelling ) frames . *identical predictions are given , in both the primary and reciprocal experiments , by the transformation and the inverse transformation .* the td relations : are translationally invariant ( do not depend on the spatial positions of the clocks ) .* the equations of motion of the clocks : are the same as in galilean relativity . because of( iv ) the td relations hold for pairs of clocks , at arbitary positions in s and s ; that is : where c and c are at arbitary positions in s and c and c are at arbitary positions in s. if now c and c are synchronised so that , at any instant in the frame s : it follows from ( 3.9 ) and ( 3.10 ) that : there is therefore no ` relativity of simultaneity ' effect for a pair of synchronised clocks at different positions in s they are also observed to be synchronised in the frame s. 
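as an illustration of the time dilatation relations quoted above, the short python sketch below applies the space-time lorentz transformation to an event on the worldline of the travelling clock c' (equation of motion x = vt) and checks that the transformed time is t/gamma, and that the inverse transformation with x' = 0 returns gamma t'. the numerical values of v and t are arbitrary choices for illustration, not quantities taken from the text.

```python
import math

# illustration only: arbitrary values for the speed v and the base-frame time t
c = 1.0          # units with c = 1
v = 0.6 * c      # assumed speed of the travelling frame s'
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t_base = 5.0                 # time t(C)_b recorded by the base-frame clock C
x_clock = v * t_base         # equation of motion of the travelling clock C': x = v t

# space-time lorentz transformation applied to the event (x_clock, t_base)
x_prime = gamma * (x_clock - v * t_base)          # = 0: C' sits at the origin of s'
t_prime = gamma * (t_base - v * x_clock / c**2)   # time t'(C')_t of the travelling clock

print(x_prime)                      # 0.0
print(t_prime, t_base / gamma)      # both 4.0: the td relation t' = t / gamma

# inverse transformation with x' = 0 recovers t = gamma * t'
t_back = gamma * (t_prime + v * x_prime / c**2)
print(t_back, gamma * t_prime)      # both 5.0
```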
how this spurious effect arises from misuse of the space - time lorentz transformation is explained elsewhere .to discuss spatial intervals using the lorentz transformation , an abbreviated notation is used where the clock at the origin of s with is given the label 1 and a second clock , on the axis , with the label 2 .assuming the same initial conditions for the primary experiment as in eqns(3.1 ) and ( 3.2 ) and dropping , for simplicity , the clock , base frame and travelling frame labels , eqns(3.1 ) and ( 3.2 ) are written : the equation of motion in s of the clock at is where is a constant , independent of the value of , depending on the choice of spatial coordinates in s. the space transformation equation for the clock 2 , consistent with ( 4.1 ) in the limit , and therefore using the same spatial coordinate system in s as clock 1 , is : the corresponding time transformation equation , given by the replacement , in ( 4.2 ) is ~~\rightarrow~ t'_2 = \frac{t_2}{\gamma}\ ] ] considering now simultaneous events in the frame s ; , ( 4.1)-(4.5 ) yield : where , and the dependences of and , for a fixed value of , are explicitly indicated . with the aid of the identity: , ( 4.6),(4.7 ) and ( 4.8),(4.9 ) yield identical hyperbolic curves on the versus plot for a given value of : since ( 4.7 ) and ( 4.9 ) give ( 4.10 ) simplifies to the spatial separation of the clocks in s is therefore independent of the value of .since , for , it follows from ( 4.12 ) that : the spatial separation of the clocks in s and s is therefore the same for all values of there is no ` relativistic length contraction ' .how the latter spurious effect correlated with ` relativity of simultaneity ' arises is also discussed in refs .two , physically distinct , kinds of velocity addition formulae are considered in this section .the first , corresponding to the well - known relativistic velocity addition formulae as derived by einstein in ref . , gives relations between the base frame velocities of a single object in different inertial frames .the second gives the transformation of the relative velocity of two objects in a given inertial frame into the similarly defined relative velocity between them in another inertial frame .for the first type of transformation , since only base frame velocities are involved , the ` travelling frame ' concept plays no role , whereas it is essential for the second ( relative velocity ) transformation in order to correctly understand the physical basis of the td effect .suppose that the frame s moves with speed in the positive -direction in s and that an object moves with velocity components and in the directions of the - and -axes in s. the first type of calculation predicts the corresponding base frame velocities and in the frame s. the bar on a symbol denotes that it is a derived quantity rather than an assumed initial value of a parameter of the problem . 
the appropriate differential lt formulae are : \\ dy'_b & = & dy_b \\ dt'_b & = & \gamma[dt_b - \frac{v dx_b}{c^2 } ] \end{aligned}\ ] ] where dividing ( 5.1 ) or ( 5.2 ) by ( 5.3 ) and subsituting , in the equations so obtained , the base frame velocities defined in ( 5.4 ) and ( 5.5 ) gives the longitudinal and transverse base frame velocity addition formulae : eqs(5.1)-(5.3 ) can also be used to derive transformation equations for the 4-vector velocity , , of the object .if denotes an interval of the proper time of the object , the td relations and ( ) give , on dividing ( 5.1)-(5.3 ) throughout by and using ( 5.4 ) and ( 5.5 ) , the relations : \\\gamma_{\bar{w}_b}\bar{w}^{(y')}_b & = & \gamma_{u_b } u^{(y)}_b \\ \gamma_{\bar{w}_b } & = & \gamma[\gamma_{u_b } - \frac { v \gamma_{u_b } u^{(x)}_b}{c^2 } ] \end{aligned}\ ] ] or \\ u'^{(y ' ) } & = & u^{(y ) } \\u'^{(0 ) } & = & \gamma [ u^{(0 ) } -\beta u^{(x ) } ] \end{aligned}\ ] ] where the 4-vector velocities and of the object in s and s are defined as : the velocity addition relations ( 5.6 ) and ( 5.7 ) are recovered by dividing ( 5.8 ) and ( 5.9 ) respectively by ( 5.10 ) or dividing ( 5.11 ) and ( 5.12 ) respectively by ( 5.13 ) and using the definitions of the components of the 4-vector velocities in ( 5.14 ) and ( 5.15 ) .it is interesting to note that , although space - time events in an experiment and its reciprocal are physically independent , the initial kinematical configurations in the two experiments are related by the kinematical lt ( 5.11)-(5.13 ) that yields the parallel velocity addition formula when and : if ( for example an object at rest at the origin of s ) than ( 5.16 ) gives , which describes the kinematical configuration of the reciprocal experiment the object moves with speed along the negative -axis in s. consider now an experiment in which an object moving with the specified speed along the positive -axis in s is observed in the travelling frame s. since the relative velocity of the object and the frame s , in s , is , the speed of the object , as observed in s , is the transformed value of this relative velocity . if the origins of s and s and the moving object all have the same -coordinate at time , and , the separation , , in the frame s , of the object from the origin of s at time is if is the velocity of the object in s , in the positive direction , in the primary experiment , the separation of the object from the origin of s at time is the lorentz invariance , ( 4.10 ) , of the spatial separation of the object from the origin of s , at the corresponding times and , which implies , for , , and the td relation : ( c.f .eqn(3.9 ) ) then gives the the transformation law for the relative velocity of the object and s as : where .as above , the bar in the symbol indicates that it is a calculated quantity as contrasted with the assumed initial values , in this case , of and . in the special case , ( 5.19 ) , so that there is no minus sign sign on the left side of ( 5.19 ) . ]is the transformation law of the relative velocity of s and s between the base frame s and the travelling frame s : where is defined as the velocity of s relative to s , in s , in the direction of the negative -axis in the primary experiment . 
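the following minimal sketch (python, illustrative numbers) evaluates the longitudinal base-frame velocity addition formula of eqn (5.6) in its standard einstein form, checks that it maps light speed to light speed, and also evaluates the relative-velocity transformation described in the text for eqn (5.19), in which the relative velocity observed in the travelling frame is larger by the factor gamma. the displayed formulas themselves were lost in extraction, so the expressions used here are reconstructions based on the surrounding description.

```python
import math

c = 1.0
v = 0.6 * c           # assumed speed of the travelling frame s' along +x in s
u = 0.9 * c           # assumed base-frame velocity of the object along +x in s
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# einstein's longitudinal base-frame velocity addition formula (eqn 5.6)
u_prime = (u - v) / (1.0 - u * v / c**2)
print(u_prime)                        # 0.6521...: base-frame velocity of the object in s'

# the formula maps light speed to light speed
print((c - v) / (1.0 - v / c))        # 1.0

# relative-velocity transformation described in the text (eqn 5.19): the velocity
# of the object relative to the origin of s', observed from s', is scaled by gamma
w_rel_s  = u - v                      # relative velocity in the base frame s
w_rel_sp = gamma * w_rel_s            # relative velocity observed in the travelling frame s'
print(w_rel_s, w_rel_sp)              # 0.3 and 0.375
```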
thus the reciprocity principle ( rp ) , that `` the velocity of an inertial frame of reference s , with respect to another inertial frame of reference s , is equal and opposite to velocity of s relative to s '' , although true in galilean relativity , no longer holds in special relativity , being replaced by the reciprocity relation ( 5.20 ) , when the space - time lt is used to transform events , in a particular space - time experiment , from one frame into another . as derivation of the latter equation shows , the breakdown of the rp is a necessary consequence of the definition of a relative velocity , the invariance of spatial intervals , and td .the reciprocity relation for an experiment with a reciprocal configuration where s is the base frame and s the travelling frame is where .eqn(5.21 ) is obtained from ( 5.20 ) by exchange of primed and unprimed quantities .reciprocal experiments correspond to the special case where .thus , in special relativity , the rp should be replaced a ` kinematical reciprocity principle ' ( krp ) : ` the velocity of an inertial frame of reference s ' relative to another such frame s in a space - time experiment is equal and opposite to the velocity of s relative to s in the reciprocal experiment. this statement , which is actually the definition of a reciprocal experiment , rather than a relation between velocities in different frames in the same space - time experiment , is applicable in both special and galilean relativity .a straightforward application of the relative velocity transformation law ( 5.19 ) is to the analysis of the much - discussed train - embankment thought experiment .this was introduced by einstein in the popular book ` relativity , the special and general theory ' with the intention to illustrate , in a simple way , ` relativity of simultaneity ' .light signals are produced by lightning strikes which simultaneously hit an embankment at positions coincident with the front and back ends of a moving train .the signals are seen by an observer , o , at the middle of the train and an observer , o , on the embankment , aligned with o at the instant of the lightning strikes .the light signals are observed simultaneously by o who concludes that the lightning strikes are simultaneous .because of the relative motion of o and the light signals , the latter are not observed by o at the same time . invoking the constancy of the speed of light in the train frame, einstein concludes that o would not judge the strikes to be simultaneous , giving rise to a ` relativity of simultaneity ' effect between the train and embankment frames .this train - embankment thought experiment ( tete ) is now analysed in terms of the concepts and nomenclature introduced above .the observer o is replaced by a two - sided light detector , d , at the middle of the train .the latter moves to the right with speed .the embankment frame , s , is the base frame of the experiment , the train frame , s , is the travelling frame . at time in s ( fig .1a ) light signals moving at speed in the embankment frame are emitted , and move towards d. the light signals are also ` travelling objects ' in the source frame s. the essential input parameters of the problem , and are therefore fixed in the frame s. in accordance with eqn(4.13 ) the length of the train , , is invariant . at time in s + t0 ].(fig .1c ) the right - moving light signal strikes d. 
the configurations in s corresponding to those in s in figs .1a , b , c are shown in figs .1d , e , f respectively .the velocity transformation formula ( 5.19 ) implies that the speed in s of the right - moving light signal relative to d is while that of the left - moving light signal is .the pattern of detection events in s and s is then the same , the only difference being that that the velocities of the light signals relative to d are greater in s by the factor a necessary consequence of time dilatation and the invariance of length intervals .the left - moving light signal is then observed in s at the time : and the right - moving one at the time : the time dilatation effect for the travelling frame s is manifest in these equations .it is seen to be a consequence of the relative velocity transformation formula ( 5.19 ) , not of lc . on the assumption that an experimenter analysing the signals received by d knows the essential parameters of the problem , , , and , the measured times and in the train frame can be used to decide whether the left and right moving light signals were emitted simultaneously in this frame or not . if the right - moving and left - moving signals are emitted at times and respectively then ( 6.1 ) and ( 6.2 ) are modified to : and subtracting ( 6.3 ) from ( 6.4 ) and rearranging : the observed time difference and knowledge of the value of then enables determination of so that the simultaneity of emission of the light signals can be tested . for the event configurations shown in fig . 1 it would be indeed concluded that , so the emission of the signals is found to be simultaneous in the train frame , contrary to einstein s assertion in ref . . the essential flaw in einstein s argument was the failure to distinguish between the speed of light , relative to some fixed object in an inertial frame , and the speed of light relative to some moving object in the same frame , which is what is relevant for the analysis of the tete .einstein s interpretation corresponds to replacing and by and , so that only events in the embankment frame are considered , and making the replacements , ( confusing the speed of light in an inertial frame , with the relative speed of light and a moving object in the same frame ) : in ( 6.1 ) and ( 6.2 ) giving : this leads to einstein s false conclusion that the light signal emission events would be found to be non - simultaneous in the train frame .an analysis of the tete in a previous paper by the present author also concluded that the train observer would judge the lightning strikes to be simulataneous , but the reasoning leading to this conclusion was fallacious . at the time of writing ref . 
i had not understood correctly the distinction between an experiment and its reciprocal and the difference in physical interpretation of a space - time and a kinematial lt explained in sections 3 and 5 above .i incorrectly assumed that the kinematical lt relating two base frame velocities was valid in a single space - time experiment , that is , for transformation between a base frame and a travelling frame .thus the velocities of both photons in s in fig .1 of the present paper were assumed to be .since the lightning strikes are ( see section 3 above ) simultaneous in both s and s the train observer would then see the light signals they emit at the same time and conclude that the strikes are simultaneous .this is a correct description , in the train frame , of the physically independent experiment that is reciprocal to the one proposed by einstein ( i.e. the one where s is the the base , not the travelling frame ) not the correct description , in the train frame , of einstein s experiment .the mistake in ref . was the hitherto universal , but erroneous , assumption that events defined in the base frames of an experiment and its reciprocal are related by the space - time lt .sartori proposed the thought experiment shown in fig .2 . in the base frames , the rest frame of the platform p , two trains t1 and t2 , with proper frames s and s respectively , move to the right with speeds and respectively , where .these base frame velocities are the fixed input parameters of the problem . as in the previous section , to lighten the notation , the base frame labels on these quantities are omitted .initially ( fig .2a ) when , t1 is aligned with p and distant l1 from t2 . at time , t2 is aligned with p and t1 is distant l2 from p ( fig .1b ) . since , t1 is aligned with t2 at time when t1 and t2 are distant l3 from p ( fig .the corresponding configurations as observed in the travelling frames s and s are shown in figs . 3 and 4 . because of the invariance of spatial separations the corresponding spatial configurations are identical to those of fig .2 , whereas the times of the corresponding events are scaled by the td factors , respectively .the travelling frame velocities as given by the relative velocity transformation formula ( 5.19 ) are shown in figs . 3 and 4 .._base frame ( b ) and travelling frame ( t ) veocities in various frames .the base frame velocities are related by the parallel velocity addition formula ( 5.16 ) while the travelling frame velocities are derived from base frame velocities using the relative velocity transformation formula ( 5.19 ) .each row of velocities specifies a physically - independent space - time experiment . ] and ] and the reciprocal experiment ] and ] the speed of p in the frame s is ( see fig .3a ) not .* comparison of ( 7.6 ) and ( 7.7 ) with ( 7.1 ) and ( 7.2 ) above shows that the former formulae are inconsistent . for event 2 the formula ( 7.6 )corresponds to the td effect for the experiment ] as in ( 7.1 ) .it is clear that , for example , the times and must both be times of clocks at rest in s but observed in motion from s in the primary experiment shown in figs 1 - 3 , ] . the argument given by sartori for ( 7.6 ) and ( 7.7 ) is that , for ( 7.6 ) ` the events 1 and 2 occur at the same position in s ' so that is a proper time interval , and for ( 7.7 ) that ` the events 1 and 3 occur at the same position in s ' so that is a proper time interval. 
actually all the times in the td relations shown in ( 7.1)-(7.3 ) are ` proper time intervals ' recorded by some clock . given the existence of an array of synchonised clocks in each frame , it is of no importance , for the timing of two events whether , or not , their times are both measured locally by the same clock .suppose that in fig .3 there is a clock c at rest in s , synchronised with a clock at t1 , distant l2 from it , such that event 2 in fig .3b is local at c. because c and a synchronised clock at t1 both record at the epoch of event 1 , the time interval in s , between events 1 and 2 in s , is correctly given by the epoch , as measured by c , of the event 2 which is local at c. since is a`proper time interval ' measured by the clock c , then if as in ( 7.6 ) , as measured by a local clock at t1 , then also as measured by the local clock c , in contradiction with sartori s assumption ( 7.6 ) .summarising , sartori s analysis assumes incorrectly that the base frames of an experiment and its reciprocal are related by the space - time lt and uses td relations in an inconsistent manner , ( 7.7 ) being applicable to the primary experiment shown in figs 2 - 4 : ] .that the algebraic manipulation of ( 7.4)-(7.7 ) yields the correct velocity addition formula ( 5.16 ) must then be considered as purely fortuitous . in my previous analysis of sartori s thought experiment , the same mistake of principle was made as in that of the train / embankment experiment in ref . . in common with sartori, it was assumed that the the kinematical configurations of a primary experiment and its reciprocal correspond to the base and travelling frame configurations of the primary experiment . that is ( see figs . 1 and 2 of ref. ) that the velocities in the frame s in the experiment shown in figs . 2 - 4 and the first row of table 1 of the present paper , were given instead by the base frame velocities in s of the reciprocal experiment shown in the second row of table 1 .thus , it is falsely assumed that the base frame configurations of an experiment and its reciprocal are actually the base and travelling frame configurations of the primary experiment , and that events in these two frames are connected by the space - time lt .this leads to false predictions of the breakdown of the lorentz invariance of spatial intervals in different inertial frames and ratios between time intervals observed in different inertial frames differing from the td effect .99 j.h.field , ` the physics of space and time iii : classification of space - time experiments and the twin paradox ' .arxiv pre - print : http://xxx.lanl.gov/abs/0806.3671 .cited 23 jun 2008 .j.h.field , ` the physics of space and time i : the description of rulers and clocks in uniform translational motion by galilean or lorentz transformations ' , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0612039v3 . cited 28 mar 2008 .j.h.field , ` the physics of space and time ii : a reassessment of einstein s 1905 special relativity paper ' , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0612041v2 . 
cited 14 apr 2008 .a.einstein , annalen der physik * 17 * , 891 ( 1905 ) .english translation by w.perrett and g.b.jeffery in ` the principle of relativity ' ( dover , new york , 1952 ) p37 , or in ` einstein s miraculous year ' ( princeton university press , princeton , new jersey , 1998 ) p123 .j.h.field , the local space - time lorentz transformation : a new formulation of special relativity compatible with translational invariance , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0501043v3 . cited 30 nov 2007 .j.h.field , ` clock rates , clock settings and the physics of the space - time lorentz transformation ' , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0606101v4 .cited 4 dec 2007 . .j.h.field , translational invariance and the space - time lorentz transformation with arbitary spatial coordinates , http://xxx.lanl.gov/abs/physics/0703185v2. cited 15 feb 2008 .j.h.field , ` spatially - separated synchronised clocks in the same inertial frame : time dilatation , but no relativity of simultaneity or length contraction ' .arxiv pre - print : http://xxx.lanl.gov/abs/0802.3298v2 . cited 4 mar 2008 .j.h.field , ` proposals for two satellite - borne experiments to test relativity of simultaneity in special relativity ' , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0509213v3 .cited 7 mar 2007 .a.einstein , ` relativity , the special and the general theory ' , english translation by r.w.lawson , methuen , london , 1960 , ch ix , p25 .l.sartori,`elementary derivation of the ralativistic velocity addition law ' , am .* 63 * 81 - 82 ( 1995 ) .v.berzi and v.gorini , journ .* 10 * 1518 ( 1969 ) .j.h.field , the train / embankment thought experiment , einstein s second postulate of special relativity and relativity of simultaneity , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0606135v1 . cited 15 jun 2006 .j.h.field , relativistic velocity addition and the relativity of space and time intervals , arxiv pre - print : http://xxx.lanl.gov/abs/physics/0610065v3 .cited 29 jan 2007 .
the concepts of primary and reciprocal experiments and base and travelling frames in special relativity are concisely described and applied to several different space - time experiments . these include einstein s train / embankment thought experiment and a related thought experiment , due to sartori , involving two trains in parallel motion with different speeds . spatially separated clocks which are synchronised in their common proper frame are shown to be so in all inertial frames , and their spatial separation to be lorentz invariant . the interpretations given by einstein and sartori of their experiments , as well as those given by the present author in previous papers , are shown to be erroneous . * j.h.field * département de physique nucléaire et corpusculaire , université de genève . 24 , quai ernest - ansermet ch-1211 genève 4 . e - mail : john.field.ch
the effective prediction success of volcanic eruptions is rare if one defines `` prediction '' as a precise statement of time , place , and ideally the nature and size of an impending activity [ _ minakami _ , 1960 ; _ swanson et al ._ , 1985 ; _ voight _ , 1988 ; _ tilling and lipman _ , 1993; _ chouet _ , 1996 ; _ mcnutt _ , 1996 ] .a noteworthy obstacle is that most studies do not quantify the effectiveness and reliability of proposed predictions , and often do not surpass the analysis of a unique success on a single case history with the lack of systematic description of forecasting results . in this studywe focus on rigorous quantification of the predictive power of the increase in the daily seismicity rate a well - known and probably the simplest volcano premonitory pattern .following _ minakami _ [ 1960 ] , _ kagan and knopoff _ [ 1987 ] , _ keilis - borok _ [ 2002 ] , we do not consider here deterministic predictions , and define a prediction to be `` a formal rule whereby the available observable manifold of eruption occurrence is significantly contracted and for this contracted manifold a probability of occurrence of an eruption is significantly increased '' [ _ kagan and knopoff _ , 1987 ] . to quantify the effectiveness and reliability of such predictions we use error diagrams [ _ kagan and knopoff _, 1987 ; _ molchan _ , 1997 ] .previous attempts in probalistic forecast of volcanic eruptions used seismicity data in combination with other observations or alone [ _ minakami _ , 1960 ; _ klein _ , 1984 ; _ mulargia et al . _ , 1991 , 1992 ] .these studies did not quantify the prediction schemes in the error diagram framework ._ minakami _ [ 1960 ] was a pioneer in the development of seismic statistics method for volcano monitoring .based on the data from the andesitic asama volcano , honshu , he uses the increase in five - day frequencies of earthquakes to derive an increase in the probability for an eruption in the next 5 days . _ klein _ [ 1984 ] tests the precursory significance of geodetic data , daily seismicity rate , and tides before the 29 eruptions during 1959 - 1979 at the kilauea volcano , hawaii .he derives a probabilistic prediction scheme that applies for eruptions anywhere on the volcano and can give 1- or 30-days forecast .the forecasting ability of daily seismicity rate is shown to be better than random at 90% confidence in forecasts on the time scale of 1 or 30 days using small earthquakes that occur in the caldera .a better performance is achieved with a 99% confidence when using located earthquakes only , in forecasts on the time scale of 1 day ._ mulargia et al . _ [ 1991 , 1992 ] use regional seimicity to define clusters of seismic events within 120 km distance of etna volcano .clusters within this regional seismicity are found within 40 days before 9 out of 11 flank eruptions in the 1974 - 1989 period . on the same periodno statistically significant patterns are identified 40 days before and after the 10 summit eruptions . as a test sitewe choose the pdlf volcano , the most active volcano worldwide for the last decades with 15 eruptions in the 1988 - 2001 period . on this sitethe volcanic risk remains low because most of the eruptions are effusive and occurred in an area that is not inhabited .for the pdlf site , the increase in seismicity rate and an increase in deformation rate have been reported within a few hours prior to an eruption ( e.g. 
[ _ lenat et al ._ , 1989 ; _ grasso and bachelery _ , 1995 ; _ sapin et al ._ , 1996 ; _ aki and ferrazzini _ , 2000 ; _ collombet et al ._ , 2003 ; _ lenat et al ._ , 1989 ; _ cayol and cornet _ , 1998 ] .although the deformation data are very efficient to locate the lava outflow vents from a few hours to minutes before the surface lava flow , there is not yet a long term catalog available to test how they can be used to forecast an eruption days to weeks in advance . in this studywe quantify the predictability of the pdlf eruptions on the longer time scale of a _ few days _ to _ weeks _ prior to an eruption ._ collombet et al . _ [ 2003 ] show that accelerating seismicity rate weeks prior to the pdlf eruptions can be recovered on average using the superposed epoch analysis before numerous eruptions . herewe show that the increase of the daily seismicity rate is useful as well to forecast individual eruptions .this is achieved by rigorous quantification of the prediction performance by introducing error diagrams [ _ kagan and knopoff , _ 1987 ; _ molchan , _ 1997 ] to choose among competitive prediction strategies .the pdlf hot spot volcano is a shield volcano with an effusive erupting style due to low viscosity basaltic magma . during 1988 - 2001period the seismicity at the pdlf site remained low , with , and was localized within a radius of a few km beneath the central caldera .less than 10% of these small events are located , most of them being only recorded by the three summit stations located 3 km apart from each other .contrary to the mauna loa - kilauea volcanic system , there is no seismically active flank sliding or basal faulting on the pdlf .contrary to the etna volcano , there is no tectonic interaction with neighboring active structures .accordingly , the pdlf seismicity is one of the best candidates to be purely driven by volcano dynamics .this seismogenic volume is also thought to be the main path for the magma to flow from a shallow storage system toward the surface [ _ lenat and bachelery _ , 1990 ; _ sapin et al ._ , 1996 ; _ bachelery _ , 1999 ; _ aki and ferrazini _ , 2000 ] .the pdlf seismicity catalog consists of data from the 16 seismic stations [ _ sapin et al ._ , 1996 ; _ aki and ferrazzini _ , 2000 ] . during the may 1988- june 2001 period the geometry and instrumental characteristics of the seismic network remained stable , with a magnitude detection threshold of 0.5 [ _ collombet et al ._ , 2003 ] . 
in this period15 eruptions were seismically monitored .we use here the seismicity rate of the volcano tectonic ( vt ) events , excluding long period ( lp ) events or rockfall signals .the number of lp events at the pdlf site is insignificant compared to the number of vt events .for example , the eruption of 1998 was acconpanied by a single lp event 4 hours before the surface lava flow [ _ aki and ferrazzini _ , 2000 ] , and 2500 vt events had been recorded at that time .although the peaks of seismicity rate clearly correlate with eruption days ( see figure 1 in [ _ collombet et al _ , 2003 ] ) , it is difficult to identify a long - term seismicity pattern before each eruption , except possibly during the last few hours before surface lava flow [ _ lenat et al ._ , 1989 ; _ sapin et al ._ , 1996 ; _ aki and ferrazzini _ , 2000 ; _ collombet et al _ , 2003 ] .for all the 15 pdlf eruptions the hourly seismicity rate during the seismicity crisis that precedes each surface lava flow is roughly constant with values ranging from 60 to 300 events / hr , with an average value of 120 events / hr . the average crisis duration is 4 hrs , the extreme values ranging from 0.5 hours for the may 1988 eruption to 36 hours for the 1998 eruption .no correlation is found between the seismic rates or the durations of the crisis and the erupted volumes . because there is no recurrent migration of seismicity during these crises [ e.g. _ sapin et al ._ , 1996 ] we suggested , as proposed by _ rubin et al ._ [ 1998 ] , that damage is neither directly related to the dyke tip , nor does it always map the dyke propagation .it is the response to dike intrusion of parts of the volcano edifice that are close to failure [ e.g. _ grasso and bachelery _ , 1995 ] .we synthesize the pre - eruption seismicity rate on the pdlf volcano as a 3 step process ( figure [ rfig2 ] ) .first , the seismicity rate increases in average and it follows a power law 10 - 15 days prior the eruption [ _ collombet et al _ , 2003 ] .this is reminiscent of the average foreshock patterns observed for earthquakes [ _ jones and molnar _ , 1979 ; _ helmstetter and sornette _ , 2003 ] .as for earthquakes , we suggest that this pattern illuminates a local damage process rather than a macroscopic failure , the damage being localized within the magma storage system a few km below the volcano [ e.g. _ sapin et al . _ ,this average acceleration is different from the acceleration proposed prior to each single eruption by _ voight _ [ 1988 ] , or individual large earthquakes [ e.g. _ bufe and varnes _ ,1993 ] . the second phase is seismically mapped by a discontinuity in seismicity rate from a peak value events / day to a events / day constant rate ( figure [ rfig2 ] ) .we suggest it corresponds to the onset of the magma flow outward of the storage system .the third phase is characterized by a constant strong seismicity rate during each crisis .we suggest it corresponds to the damage induced by fluid flow , either as a diffuse response to dyke propagation in an heterogeneous rock matrix or as damage in the open reservoir walls during fluid flow .this pre - eruption scheme helps both to clarify the eruption phases on the pdlf and to define our prediction targets .if one uses a conventional definition of the target as the onset time of surface lava flow , then all the eruptions can be predicted a few hours in advance by choosing a daily seismicity rate larger than 60 events / day as an alarm threshold . 
for this threshold valuethe seismic crisis that did not end up in an eruption are false alarms .they are post - labelled at the observatory as intrusion , and are part of the endogeneous growth of any volcano .we aim to find precursory patterns before the outward magma flow from the reservoir system .accordingly , we define our target as the onset of a reservoir leak as mapped by the end of the average acceleration process and before the onset of the eruption crisis ( figure [ rfig2 ] ) .this target possibly maps a local failure in the reservoir walls , contemporary to the onset of outward magma flow from the reservoir , and corresponds to predicting eruptions more than one day in advance .thus , our problem is different from that posed by _ klein _ [ 1984 ] .here we follow a pattern recognition approach [ e.g. _ gelfand et al . _ , 1976 ] to predict rare extreme events in complex systems ; this approach is reviewed by _ keilis - borok _ [ 2002 ] . to use pattern recognition techniques as a forecasting toolwe define 3 steps in the data analysis .first we consider a sequence of vt earthquake occurrence times note that we use neither magnitude nor location of events .second , on the sequence we define a function as the number of earthquakes within the time window $ ] , being a numerical parameter .this functional is calculated for the time interval considered with different values of numerical parameter .third , an _ alarm _ is triggered when the functional exceeds a predefined threshold .the threshold is usually chosen as a certain percentile of the distribution function for the functional .the alarm is declared for a time interval .the alarm is terminated after an eruption occurs or the time expires , whichever comes first .our prediction scheme depends on three parameters : time window , threshold , and duration of alarms .the quality of this kind of prediction is evaluated with help of `` error diagrams '' which are a key element in evaluating a prediction algorithm [ _ kagan and knopoff , _ 1987 ; _ molchan , _ 1997 ] .the definition of an error diagram is the following .consider prediction by the scheme described above .we continously monitor seismicity , declare alarms when the functional exceeds the threshold , and count the prediction outcomes ( figure [ rfig3 ] ) . during a given time interval , targets occurred and of them were not predicted .the number of declared alarms was , with of them being false alarms .the total duration of alarms was .the error diagram shows the trade - off between the relative duration of alarms , the fraction of failures to predict , and the fraction of false alarms . in the -planethe straight line corresponds to a random binomial prediction at each step in time the alarm is declared with some probability and not declared with probability .given a particular prediction that depends on our three parameters , different points in the error diagram correspond to different values of these parameters .error diagrams thus tally the score of a prediction algorithm s successes and errors .this score depends on the algorithm s adjustable parameters .for example , raising the threshold will reduce the number of alarms but may increase the number of failures to predict . raising , on the other hand , will increase the duration alarms but may reduce the number of failures to predict , etc .a prediction algorithm is useful if : ( i ) the prediction quality is better than that of a random one , i.e. 
the points on error diagram are close to the origin and distant from the diagonal ; and ( ii ) this quality is fairly insensitive to changes in the parameters .we estimate the time predictability of volcanic eruptions based on the increase of the daily seismicity rate .the parameters of the algorithm are varied as follows : days , events per _ s _ days , days .the 30 day limit is the minimum time between two eruptions during 1988 - 2001 .the best predictions are obtained when averaging seismicity rate over a 5 day window and declaring an alarm for 5 days .the predictive skills of our prediction scheme are illustrated by the error diagrams of figures ( [ rfig4a ] , [ rfig4b ] ) .each point in the error diagram corresponds to different values of the threshold ranging from 1 to 100 events per 5 days , other parameters are fixed as days , days .error diagrams outline the whole range of possible prediction outcomes ; thus they are more convenient for decision making than performance of `` the best '' single version of prediction .we observe for instance ( point a ) that 65% of the pdlf eruptions can be predicted with 20% of the time covered by alarms .these results are of the same quality as that obtained on the etna or the hawaii volcanoes . for instance , using regional seismicity in a 120 km radius from the etna volcano , 50% of the eruptions could have been predicted within 40 days in the 1974 - 1990 period , which can be sorted as 80 % of the 11 flank eruptions , and no summit eruptions [ _ mulargia et al . _ , 1991 , 1992 ] . decreasing the threshold yields an alternative prediction strategy that favors a lower failure to predict rate and accepts a higher alarm duration rate; it is shown as point b on figure ( [ rfig4a ] ) .the choice of a particular prediction strategy must be always based on the analysis of the entire error diagram ; different prediction strategies may be used in parallel to complement each other ( see more in [ _ molchan , _ 1997 ; _ zaliapin et al . , _ 2003 ] ) .it is worth noticing that the performance of our simple prediction algorithm , which is based on mere averaging of the seismicity rate , is close to the performance of much more sophisticated algorithms that use numerous seismic parameters to predict large observed earthquakes [ e.g. _ kossobokov et al . , _ 1999 ] .the significant predictability we obtain here is still concomitant of a fraction of false alarm larger than 90% ( figure [ rfig4b ] ) .because this predictability emerges from the use of a daily seismicity rate only , we expect that a modification of the above prediction strategy to include earthquake location and magnitudes with deformation and geochemistry data will improve this first quantitative analysis of eruption prediction on pdlf .we gratefully thank ovpf staff in charge of the pdlf seismic network since 1980 .we thank a. helmstetter , w. z. zhou , j. el - khoury , t. gilbert , d. shatto , m. collombet and the ess / ucla seismo group for stimulating discussion .we benefited from professor v. keilis - borok s lectures on time series analysis and pattern recognition during the spring 2003 quarter at ess / ucla .jrg is partially supported by ec e - ruption project and ec evg - ct-2001 - 00040 , volcalert project .iz is partly supported by intas , grant 0748 .kossobokov v et al .testing earthquake prediction algorithms : statistically significant advance prediction of the largest earthquakes in the circum - pacific , 1992 - 1997 .earth planet .int . , 111 _ , 187 - 196 , 1999 .mcnutt , s. 
, seismic monitoring and eruption forecasting of volcanoes : a review of the state of the art and case histories , in _ monitoring and mitigation of volcano hazards _ , scarpa and tilling ( eds ) , springer , berlin , 99 - 146 , 1996 . mulargia f. et al . , pattern recognition applied to volcanic activity : identification of precursory patterns to etna recent flank eruptions and periods of rest , _ j. volcanol . , 45 _ , 187 - 196 , 1991 .
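as a concrete illustration of the alarm-based prediction rule and the error diagram construction described in this study, the python sketch below implements a minimal version of it on synthetic daily event counts (the real pdlf catalogue is not reproduced here, and the function and variable names are ours, not the authors'). for each threshold it returns the fraction of failures to predict, the relative alarm duration and the fraction of false alarms, i.e. one point of the error diagram per threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustration only: synthetic daily vt-event counts over ~4 years, with a
# seismicity build-up in the 10 days preceding each synthetic eruption
n_days = 1500
eruptions = np.array([200, 450, 700, 1000, 1300])
rate = np.full(n_days, 3.0)
for e in eruptions:
    rate[e - 10:e] += np.linspace(5, 60, 10)     # pre-eruption acceleration
daily = rng.poisson(rate)

def error_diagram_point(daily, eruptions, s=5, delta=5, threshold=40):
    """prediction rule of the text: sum the counts over the last s days and
    declare an alarm of length delta days whenever the threshold is exceeded."""
    window = np.convolve(daily, np.ones(s), mode="full")[: len(daily)]
    alarm = np.zeros(len(daily), dtype=bool)
    t, n_alarms, false_alarms = 0, 0, 0
    while t < len(daily):
        if window[t] > threshold:
            n_alarms += 1
            end = min(t + delta, len(daily))
            hits = [e for e in eruptions if t < e <= end]
            if hits:
                end = hits[0]                    # alarm terminated by the eruption
            else:
                false_alarms += 1
            alarm[t:end] = True
            t = end
        else:
            t += 1
    predicted = sum(alarm[e - 1] for e in eruptions)   # eruption during an alarm
    nu = 1.0 - predicted / len(eruptions)              # fraction of failures to predict
    tau = alarm.mean()                                 # relative duration of alarms
    return nu, tau, false_alarms / max(n_alarms, 1)

for thr in (20, 60, 150, 300):
    print(thr, error_diagram_point(daily, eruptions, threshold=thr))
```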
volcano eruption forecasting remains a challenging and controversial problem , despite the fact that data from volcano monitoring have significantly increased in quantity and quality during the last decades . this study uses pattern recognition techniques to quantify the predictability of the 15 piton de la fournaise ( pdlf ) eruptions in the 1988 - 2001 period , using the increase of the daily seismicity rate as a precursor . the lead time of this prediction is a few days to weeks . using the daily seismicity rate , we formulate a simple prediction rule , use it for retrospective prediction of the 15 eruptions , and test the prediction quality with error diagrams . the best prediction performance corresponds to averaging the daily seismicity rate over 5 days and issuing a prediction alarm for 5 days : 65% of the eruptions are predicted with an alarm duration of less than 20% of the time considered . even though this result is accompanied by a large number of false alarms , it is obtained with a crude counting of daily events that are available from most volcano observatories .
almost everyone must be familiar , by report at least , with optical mirages . in hot regions such as the desert , when the speed of light is greater at low altitudes than at higher altitudes , distant palm trees can appear inverted as if reflected off a cool pool of water in a nearby oasis , see figure [ mirage ] . it is less well known that in cold regions such as the arctic , where the opposite conditions prevail , distant ships can appear inverted in the sky , the light from them having been bent over an intervening iceberg , see figure [ fata ] . more mundanely , motorway travellers on hot dry summer days often have the disconcerting impression that there are sheets of water lying on the road some distance ahead . the explanation of phenomena like these is easily understood using the concept of light rays subject to snell s law for a stratified medium . alternatively we may apply huygens s wave theory , according to which , in passing over the iceberg , the higher parts of the wave front move faster than the lower parts , causing it to bend downwards . _ mutatis mutandis _ a similar phenomenon applies to sound waves , whose speed increases as the square root of the absolute temperature . during a warm sunny day therefore , when the temperature is typically hotter near the ground due to the sun s rays , the sound waves should be bent upwards . on a clear night , when the temperature near the ground drops rapidly by radiative cooling , sound waves should be bent downwards , allowing distant sources , such as cars on a motorway , to be heard much more clearly than during the day . one of us , living as he does a kilometre or so away from the a14 , which is , judged by the amount of traffic it carries , effectively a motorway , had until recently always supposed that it was this temperature effect that was responsible for the din experienced at times when contemplating the night sky in his garden . however , the temperature effect should be isotropic : in other words it should affect the noise of cars coming from all directions , including those on less busy but closer city roads . the greater volume and speed of traffic on the a14 alone should not be so overwhelming . the obvious directional influence is that of the wind . however , the speed of sound ( km per hour ) is so much greater than typical wind speeds ( km per hour ) that simple convection of sound waves by the wind can not be responsible for any significant directional effect .
as pointed out to one of us by a colleague , hugh hunt , who made a similar point in the new scientist of 15th april 2009 in response to readers queries ,it is not the wind velocity , but its gradient , that is its _ variation with height _ , called _ wind shear _ or more technically _ vorticity _ which is important .as is clear from observing clouds , wind speeds , while remaining roughly horizontal and much slower than the speed of sound , increase quite sharply with height , while remaining roughly horizontal .thus it is not just the increased velocity that matters , but its gradient or vorticity .of course , examination of the literature shows that this is not a new observation and many papers and textbooks contain a simple qualitive discussion .the earliest we have discovered dates from 1857 and is due george gabriel stokes ( 1819 - 1903 ) who was appointed to the lucasian chair of mathematics in cambridge in 1849 and held it until his death 54 years later .his explanation , elaborated on by osborne reynolds ( 1842 - 1912 ) in 1874 , followed very closely huygens s explanation for refraction by a gradient in the refractive index .if the wind direction is towards us , and the wind speed increases with height , the higher parts of the wave front move faster than the lower parts and the wave front is bent over towards us .if on the other hand the wind direction is in the opposite direction , the wave front will be bent upwards and hence away from us .the net effect is that if the vorticity is non zero , the sound rays are deflected in an analogous way to the deflection of a charged particle of mass and charge by the lorentz force due to magnetic field .in fact this is not just an analogy but , as we shall see shortly , a precise correspondence for low wind speeds : - .[ vort ] note that since it is the velocity gradient that matters , it is not the direction of the wind at the ground or at a height above , but their difference which is important . in practicehowever , the speed of the wind is almost always much slower near the ground than above , and so usually it is the velocity at higher altitudes which matters .thus , in principle there are two effects acting , and which is the more important depends on which induces the greater curvature to the sound rays . as stokes and reynolds realised for the thermal effect and for the wind shear , where is the index of refraction .it is probably no coincidence that stokes s work followed shortly after the first demonstration , by the german physicist karl friedrich julius sondhauss ( 1815 - 1886 ) in 1853 , that a balloon filled with will act as a sound lens , focussing the sound of a ticking watch so as to render it audible some distance away .examples of the interplay of these two effects , which can cause sound to travel over large distances , abound . 
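a rough comparison of the two bending mechanisms can be made with the effective sound speed c_eff(z) = c(z) + u(z) for propagation along the wind, whose vertical gradient fixes the ray curvature for nearly horizontal rays; the formulas displayed in the original text were lost in extraction, so the short sketch below uses this standard estimate with assumed, order-of-magnitude night-time gradients rather than any values quoted in the text.

```python
# rough comparison of the two ray-bending mechanisms discussed above, using the
# effective sound speed c_eff(z) = c(z) + u(z) for propagation along the wind.
# the gradients below are assumed order-of-magnitude night-time values, not
# measurements quoted in the text.
c0 = 340.0          # m/s, speed of sound near the ground
t0 = 288.0          # k, absolute temperature near the ground

dt_dz = 0.05        # k/m : a modest nocturnal temperature inversion
du_dz = 0.3         # 1/s : wind shear (wind speed gained per metre of height)

dc_dz_thermal = c0 / (2.0 * t0) * dt_dz     # from c proportional to sqrt(t)
curv_thermal = dc_dz_thermal / c0           # ray curvature, 1/m
curv_shear = du_dz / c0

print("thermal bending radius   ~ %.0f km" % (1e-3 / curv_thermal))
print("wind-shear bending radius ~ %.0f km" % (1e-3 / curv_shear))
# with these numbers the wind-shear term wins by roughly an order of magnitude
```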
in early june , 1666 , during the war between the dutch and the english , both samuel pepys and john evelyn reported in their diaries that while the sound of gun fire from ships off the coast of kent could be heard clearly in london , it was not audible at all at the ports of deal or dover .as pepys at the time observed : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` this makes room for a great dispute in philosophy : how we should hear it and not they , the same wind that brought it to us being the same that should bring it to them . ''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ infrasound ( sound of very low frequency ) from the volcanic explosion of krakatoa on august 27 1883 was heard to travel several times around the earth . during the first world war ,the noise of the very large guns on the western front was often audible within a range of a 100 km or so , and often beyond 200 km , but not within a `` zone of silence '' between 100 and 200 km . on september 21 , 1921 there occured an enormous explosion at oppau on the rhine and the same phenomenon was observed , see figure [ oppaufig ] .( 0,0 ) ( -90,30 ) one explanation for some of these phenomena is reflection of sound from a layer of air in the upper atmosphere with a higher sound velocity .the dependence of the velocity of sound in the atmosphere follows its temperature profile . in the absence of wind, the temperature ( and hence velocity of sound ) decreases under normal circumstances up to a height of about 10 km , remains constant up to roughly 25 km , then incresases to around 50 km , where it has a local maximum , and then has a local minimum at about 80 km , see figure [ profilefig ] . a simple application of the law of refraction n(z ) ( z ) = constant , [ refraction ] where the local speed of sound , reveals that sound rays can bounce off the maxima of butcan also be trapped at the minima .presumably the latter effect was responsible for the long distance propagation of infrasound from krakatoa .a similar phenomena occurs for sound waves in the ocean .the speed of sound initially decreases with depth and then increases exhibiting a minimum value at a depth of around 1 km at the so - called sofar channel .this allows whales to communicate over very large distances and more sinisterly , submarines to snoop on other submarines . for simple profiles , the law of refraction ( [ refraction ] ) allows simple solutions for the rays . 
thus if , one has catenaries with the horizontal axis as the directrix .this gives a rough description of mirages in the desert .if , the rays are cycloids , the curve traced out by the point on the circumference of a circle rolling without slipping along the horizontal axis ( see figure [ cyclfig ] ) .one can imagine this as the path of a glow - worm sitting on the rim of a bicycle wheel as it rolls along in the dark .this could decsribe the rays passing over icebergs in the arctic .more importantly for what follows later , this can also be achieved by assuming that , in which case the rays are semi - circles centred on the horizontal axis .the addition of the effects of wind complicates considerably this simple picture . considering for a moment fundamental physics , one of the clearest trends in research over the last 100 years orso has been what one might call the _ geometrisation _ of physics .this is by no means a new phenomenon .plato s association of the regular , or platonic , solids with the classical elements is perhaps the earliest attempt to explain the world through geometrical intuition .geometry , in the modern sense of differentiable manifolds , really caught hold in physics in 1915 with einstein s general theory of relativity . since then , the use of geometry has accelerated .modern string theory , with its space - time dimensions and complicated internal -dimensional calabi - yau manifolds is perhaps the clearest example of this geometrisation process .a geometrical approach has much to recommend itself , even to describe the more concrete physics we have thus far been considering . in this article, we will discuss how the properties of sound and light rays can be considered in a geometrical light .we will be interested in the behaviour of solutions of the wave equation which are repeated should be summed over ] in a moving medium : u(x , t ) = 0 .here is related to the speed of sound in a direction parallel to the unit vector by and is a vector giving the velocity of the medium at each point .we will be interested in particular in disturbances with short wavelength , which move along _ rays_. 
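as a concrete check of the simple ray solutions quoted above, the following python sketch integrates the standard ray equation d/ds ( n dr/ds ) = grad n for a stratified medium and verifies that the profile n proportional to 1/z produces rays lying on circles centred on the horizontal axis. the launch height, angle and step size are arbitrary illustrative choices.

```python
import numpy as np

def trace_ray(n, dndz, x0, z0, theta0, ds=1e-3, s_max=4.0):
    """integrate the ray equation d/ds (n dr/ds) = grad n for a stratified
    medium n = n(z); theta0 is the launch angle above the horizontal."""
    x, z = x0, z0
    tx = n(z0) * np.cos(theta0)        # conserved: snell's law n(z) cos(theta) = const
    tz = n(z0) * np.sin(theta0)
    xs, zs = [x], [z]
    for _ in range(int(s_max / ds)):
        tz += dndz(z) * ds             # only the vertical gradient acts on the ray
        x += tx / n(z) * ds
        z += tz / n(z) * ds
        xs.append(x); zs.append(z)
    return np.array(xs), np.array(zs)

# profile n(z) = 1/z quoted in the text: the rays should be semicircles centred
# on the horizontal axis (launch height, angle and step are illustrative values)
theta0 = np.radians(60.0)
xs, zs = trace_ray(lambda z: 1.0 / z, lambda z: -1.0 / z**2, 0.0, 1.0, theta0)

xc = xs[0] + zs[0] * np.tan(theta0)    # centre of the expected semicircle
radii = np.hypot(xs - xc, zs)
print(radii.min(), radii.max())        # both close to 2: the traced ray is circular
```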
we will start by considering the case of a static medium , .the geometry of the rays in this case is well known and we shall describe some of its properties and give some examples .we will then move on to the case of a moving medium and demonstrate that this can also be described geometrically , as we showed in .we will have to loosen slightly our notion of a geometry in order to do so but the type of geometry which arises , a finsler geometry , is very natural .in fact , this geometry first arose in our studies of light rays near rotating black holes .the link between the effects of gravity on light rays and refraction of light and sound waves by media can be made explicit and is the basis for much current work on optical and acoustic black holes .these analogue models for black holes allow experimentation in a laboratory , which would of course not be possible with real black holes .for this reason , a thorough understanding not just of the mechanisms of refraction , but the _ geometry _ of refraction is of great relevance both for terrestrial and celestial physics .the elementary theory of mirrors and lenses is largely concerned with tracing the paths of _ light rays _ on reflection ( _ catoptrics _ ) or refraction ( _ dioptrics _ ) at a surface .although the greeks were uncertain whether light proceeds from the eye to the object ( _ emission theory _ ) or from the object to the eye _ intromission theory _ , and whether its passage is instantaneous or at a finite speed , nevertheless heron of alexandria ( c.10 - 70 ad ) was able to formulate the laws of reflection in terms of a _ principle of shortest length _ from object to eye via the reflecting surface .the occurrence of _ caustics _ shows that there can be more than one path , not all of which are necessarily the shortest and more accurately we refer to the principle _ principle of stationary length _ ,that is the length of each ray is merely stationary among all neighbouring paths . despite important pioneering work by ibn al - haytham or alhazen ( 965-c.1040 ) demolishing the emission theory and investigating refraction , it took longer to unravel the fundamental law of dioptrics .it it was not until independent work by abu sad al-`ala ' ibn sahl ( c.940-c.1000 ) , thomas harriot ( c.1560 - 1621 ) and willebrord snel van royen ( 1580 - 1626 ) that the familiar _ law of sines _ was established and pierre fermat ( c 1601- 1665 ) formulated his unified _fermat s principle of stationary time_. the idea is that the slowness of the ray inside a medium is proportional to its refractive index .note that the finite speed of light was only demonstrated by by ole roemer ( 1644 - 1710 ) using the eclipse of jupiter s moon io in 1676 .the relation of the refractive index to the speed of light remained controversial until experiments by armand hippolyte louis fizeau ( 1819 - 1896 ) , fresnel augustin - jean fresnel ( 1788 - 1827 ) and george biddell airy ( 1801 - 1892 ) in the nineteenth century finally established its velocity as , where is its speed in vacuo . 
according to the discredited _ corpuscular theory _ the opposite relation holds .both theories give the same rays but the speed with which light follows the rays differs .it would be more accurate therefore to speak of _ fermat s principle of stationary optical path length _, where the optical length of a path is given by l= _n ( ) |d | , where we have allowed for the possibility that the refractive index may depend upon position , as for example it does in a vertical stratifed medium such as we encounter discussing mirages .principle becomes the _variational principle _n ( ) |d | = 0 . by the time fermat introduced this principle , christiaan huygens ( 1629 - 1695 ) had initiated his wave theory of light and derived fermat s principle from it . his derivation makes it clear that it is the optical length or optical distance which enters into all interference effects , and its therefore appropriate to say that all optical measurements measure _ optical geometry_. by the same token , measurements using sound waves may be said to measure _ acoustic geometry _ and measurements using seismic waves , as on the earth or more recently the moon to measure _seismic geometry_. it is clear that these geometries will , in an inhomogeneous medium for which the dependence of on is non - trivial , differ considerably from _euclidean geometry_. the existence of such _ non - euclidean geometries _ was first realised by pure mathematicians in the early part of the nineteenth century working on the foundations of geometry . for centuries , people had been attempting to derive , starting from the other aximons , euclid s fifth axiom : that through any point not on a given line there is exactly one line parallel to the first .this seemed to them so obvious that it was `` neccessarily '' true .eventually they gave up , johann carl friedrich gauss ( 1777 - 1855 ) privately and jnos bolyai ( 1802 -1860 ) and nikolai ivanovich lobachevsky ( 1792 - 1856 ) publicly , showed the existence of two other types of homogeneous and isotropic _ congruence geometries _ , spherical and hyperbolic .the first is easy to grasp in two dimensions since it is just the geometry of the standard sphere , with the stationary paths ( or _ geodesics _ ) being the great circles .navigators , either by sea or air , have been using spherical geometry since at least the time of columbus .it is not too difficult to imagine a sphere in one dimension higher and indeed if the refractive index were to vary as n= [ fish ] the optical metric would be precisely that of three dimensional spherical space of radius .the resulting optical device is known as maxwell s fish eye lens since all rays emanating from any point are circles which reconverge onto the antipodal point .we can verify that the optical distance along a radial geodesic is given by : which is finite . because , for large enough , the refractive index drops below unity, the construction of such a device would require the manufacture of a suitable `` meta - material '' .a more practical device was invented by rudolph karl luneburg ( 1903 - 1949 ) and has this will focus all rays incident on it a fixed direction to the point on the circumference in the opposite direction .the radius of curvature of the spherical geometry given by ( [ fish ] ) is . 
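the finiteness of the radial optical distance in maxwell s fish eye can be checked directly. the displayed form of the refractive index ( [ fish ] ) was lost in extraction, so the sketch below assumes the usual textbook profile n(r) = n0 / ( 1 + r^2 / a^2 ) and integrates it numerically.

```python
import numpy as np
from scipy.integrate import quad

# maxwell fish-eye profile; the displayed formula was lost in extraction, so the
# usual textbook form n(r) = n0 / (1 + (r/a)^2) is assumed here for illustration
n0, a = 2.0, 1.0
n = lambda r: n0 / (1.0 + (r / a) ** 2)

# optical distance along a radial path from the centre out to r -> infinity:
# the integral converges even though the geometric distance does not
length, err = quad(n, 0.0, np.inf)
print(length, np.pi * n0 * a / 2.0)    # both ~3.14159: finite, as stated above
```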
to obtain hyperbolic space , often called _ bolyai - lobachevsky space _ or just _ lobachevsky space _ , one needs only to let the radius of curvature become pure imaginary which leads to n= [ lob ] .the refractive index becomes infinite when which should be thought of as the boundary at ` infinity ' of hyperbolic space .in fact the reader may easily verify that the optical distance along a radial path from the origin to the boundary at infinity is , by contrast with the case of spherical space , infinite .jules henri poincar ( 1854 - 1912 ) gave a simple analogy which illuminates the roles played by the flat euclidean metric geometry for which and the curved non - euclidean geometry for which is given by ( [ lob ] ) as follows .imagine a medium whose temperature varies with radius as occupying a ball of radius , as measured by a measuring rod made from a substance such as invar whose length is independent of temperature .if measured by a different ruler made of a material which expands in proportion to the temperature , then the rod will shrink to zero length as it approaches the the boundary which is at zero temperature .the boundary will thus seem to be infinitely far away as measured by the second measuring rod . at the time that poincar wrote, people were still reeling under the discovery that what seemed to have been well established since antiquity : that euclid s geometrical axioms were not logical necessities .there was thus great interest among geometers and philosophers as what was the `` correct '' or `` real '' geometry of space .for poincar it all depended upon how you measure it .he did however believe that one should always be able to find a system of measurements in which euclid s geometry holds . despite appearances both spherical geometry and hyperbolic geometry are , like their simpler euclidean counterparts , both isotropic and homogeneous .for this reason they are candidates for the physical geometry of space , as measured for example by light rays in vacuo . indeed according to einstein s theory of general relativityjust these three possibilities can arise in the theory of the _ expanding universe _ proposed by alexander alexandrovich friedman ( 1888 -1925 ) and monsignor georges henri joseph douard lematre ( 1894 - 1966 ) . for many yearscosmologists have been attempting to decide which best fits the observed universe . based on observations of the cosmic microwave background ( cmb ) , and other data ,the consensus now is that it is flat euclidean geometry .of course no realistic medium is exactly homogeneous or isotropic , and in particular the speed of light , or sound , may depend upon direction as well as position .a familiar example is provided by bi - refringence in crystals such as calcite for which the ordinary and extra ordinary ray have different refractive indices , and .this more general situation may be taken into account using a more general geometry invented by georg friedrich bernhard riemann ( 1826 - 1866 ) called _ riemannian geometry _ in which the optical or acoustic path is given by l= _ dt = 0 , where is , in three spatial dimensions , a symmetric array called the _metric tensor_. 
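as a quick orientation before the general line element is written down , in the isotropic case the array reduces to h_ij = n^2 δ_ij , and the two radial - distance statements made earlier for the fish - eye and lobachevsky profiles follow by elementary integration . the short worked check below is included only as an illustration , with the same n_0 and a as in ( [ fish ] ) and ( [ lob ] ) .

```latex
% isotropic special case of the optical metric, and two radial-distance checks
\[
  ds^{2}_{\mathrm{opt}} = n^{2}(\mathbf{x})\,\bigl(dx^{2}+dy^{2}+dz^{2}\bigr),
  \qquad h_{ij} = n^{2}(\mathbf{x})\,\delta_{ij},
\]
\[
  \text{fish-eye:}\quad
  \int_{0}^{\infty}\frac{n_{0}\,dr}{1+r^{2}/a^{2}} = \frac{\pi n_{0} a}{2} < \infty,
  \qquad
  \text{Lobachevsky:}\quad
  \int_{0}^{a}\frac{n_{0}\,dr}{1-r^{2}/a^{2}}
  = \lim_{r\to a^{-}}\frac{n_{0}a}{2}\,\ln\frac{a+r}{a-r} = \infty .
\]
```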
indeed one frequently introduces what is called a _ line element _ which is an expression for the infinitesimal form of a generalised pythagoras s theorem : the infinitesimal distance between points and is given by ds^2 = h_ij(x_k ) dx^i dx ^j .[ metric ] thus , for example , in the case of a uniaxial bi - refringent medium with unit field in the ordinary direction the _ joets - ribotta _ optical metric is ds^2 = n_e^2 d ^2 + ( n_0 ^ 2 -n_e^2 ) ( .d ) ^2 .riemann s ideas are fully incorporated into einstein s theory of general relativity . indeed , for a static spacetime a simple form of fermat s principle holds which , for example , allows one to discuss the optics of black holes in terms of an effective refractive index ( in so - called isotropic coordinates ) n()= ( 1 + ) ^6 ( 1- ) ^-2 , where is the mass and is newton s constant .in these coordinates , the black hole event horizon is located at .note that , because the refractive index becomes infinite there , the event horizon is at infinite optical distance .a closer examination reveals that as one approaches the event horizon , the optical geometry approximates more and more closely that of hyperbolic space near its boundary at infinity as described above .we have recently pointed out that this is a universal feature of all static ( `` non - extreme '' ) event horizons and used it as a quantitative tool for discussing some of their puzzling physics .( 0,0 ) ( -90,30 ) riemann also invented an even more general form of geometry , taken up by paul finsler ( 1894 - 1970 ) called _ finsler geometry _ in which a more general expression replaces ( [ metric ] ) and this also arises in the optics and acoustics of moving media as we shall discuss presently .in the meantime we wish to say more about hyperbolic geometry , and in particular the hyperbolic plane .the geometry of the hyperbolic plane is intimately connected with that of the complex numbers .their introduction dramatically simplifies many formulae . to see this we adopt units in which the radius of curvature is set to unity .we further set , in ( [ lob ] ) .the _ hyperbolic plane _, , is obtained by setting . in order to exploit the complex numbers we define to get ds ^2 = 4 = 4 .[ disc ] in this representation of the hyperbolic plane , the _ poincar disc _, occupies the interior of the unit disc and one may verify that its geodesics are circular arcs which cut the unit circle at right angles .a tiling of by triangles whose edges are geodesics is shown as figure [ h2tesball ] .the triangles are all similar , so one can see how the apparent length of a measuring rod shrinks as we approach the boundary .( 0,0 ) # 1#2#3#4#5 ( 11199,11670)(739,-12298 ) ( 9376,-3061)(0,0)[lb ] ( 8851,-4036)(0,0)[lb ] ( 8626,-5836)(0,0)[lb ] ( 6151,-736)(0,0)[lb ] we are free , in addition , to perform a coordinate transformation to an equivalent form .we choose a complex coordinate related to by a _fractional linear transformation _ w= i .this maps the unit disc into the _poincar upper half plane _ , .the centre of the unit disc maps to the point and the boundary of the unit disc to the real axis . in these coordinates the line element becomes ds ^2 = = .[ upper ] because , as is easily verified , fractional linear transformations take circular arcs to circular arcs , the geodesics are again circular arcs . 
because holomorphic maps are angle preserving , the geodeiscs are in fact semi - circles orthogonal to the real axis .figure [ h2tesuhp ] shows the upper half plane tiled with the same geodesic triangles as in figure [ h2tesball ] .let s now think of as horizontal distance and as vertical height in an horizontally stratified medium in which the refractive index decreases with height .if we assume that over a certain range of heights , , to a good approximation n [ law ] the rays will be semi circles . in this way we can easily explain the mirage mentioned in the introduction , that in arctic or antarctic regions light is bent over icebergs and ships behind icebergs are observed to float upside in the air above . to account for the mirages seen in the desert or on hot days driving on motorwaysin which trees or cars seem to be reflected in pools of still water , it suffices to take the complex conjugate and work in the lower half plane. of course many laws of horizontal stratification will give qualitatively similar results , but there is a considerable economy to be made by adopting the law ( [ law ] ) .if we do then the line element ( [ upper ] ) is invariant under all fractional linear transformations taking the upper half plane into itself .these are of the form w , ad - bc=1 , , where are real .this defines the three dimensional group isomorphic with the lorentz group of three dimensional minkowski spacetime .this is no accident .much as it is convenient to think of the usual -sphere as the set of points at a fixed distance from the origin in euclidean -space , there is a similar interpretation of hyperbolic space as a _ pseudo - sphere _ in minkowski space .we consider the space , but instead of the usual dot product , we endow it with the minkowski product : where and similarly for . for a vector with , which we call _ spacelike_ , we can define the length to be .the hyperbolic plane of radius is the set of points defined by = -r^2 , x^0>0 .[ pseudosphere ] the pseudo - sphere is sometimes referred to as the _ mass - shell_. this is because when we interpret the minkowski spacetime as the geometry of special relativity in two spatial dimensions and a particle of energy and momentum in the two spatial dimensions would have momentum vector . ] , the vectors with represent momentum vectors of particles whose rest mass , , is given by .the mass - shell thus represents all the possible velocities for a particle of a given rest mass .it is possible to check that any vector tangent to this surface is spacelike , so that the minkowski inner product allows us to define the length of such a vector .the geometry of this surface with this definition of length is that of the hyperbolic plane .a lorentz transformation leaves both the minkowski metric and the condition ( [ pseudosphere ] ) unchanged and so represents an isometry of the hyperbolic plane . 
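to make the last statement concrete , here is a short calculation , with the radius set to one purely for brevity , showing that the induced geometry on the pseudo - sphere really is the hyperbolic plane and that lorentz transformations act on it as isometries .

```latex
% parametrise the upper sheet of the pseudo-sphere <X,X> = -1, X^0 > 0 by
\[
  X = \bigl(\cosh\chi,\ \sinh\chi\cos\varphi,\ \sinh\chi\sin\varphi\bigr),
  \qquad \chi\ge 0,\ 0\le\varphi<2\pi,
\]
% so that <X,X> = -cosh^2(chi) + sinh^2(chi) = -1 holds automatically; the
% Minkowski product of a tangent displacement dX with itself is then
\[
  \langle dX,dX\rangle
  = -\sinh^{2}\!\chi\,d\chi^{2}
    + \bigl(\cosh\chi\cos\varphi\,d\chi-\sinh\chi\sin\varphi\,d\varphi\bigr)^{2}
    + \bigl(\cosh\chi\sin\varphi\,d\chi+\sinh\chi\cos\varphi\,d\varphi\bigr)^{2}
  = d\chi^{2} + \sinh^{2}\!\chi\,d\varphi^{2},
\]
% which is positive (tangent vectors are spacelike) and is the familiar line
% element of the hyperbolic plane; a Lorentz transformation preserves both the
% constraint <X,X> = -1 and this induced length, hence acts as an isometry.
```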
to recover the poincar disk model , we stereographically project the pseudosphere from the point onto the plane as shown in figure [ hypfig ] .in the previous section , we considered the problem of finding the ray paths of a sound wave propagating in a static medium .we showed how fermat s principle of stationary time leads us to consider the geodesics of a riemannian metric .we would now like to consider how sound waves propagate through a moving medium .fermat s principle continues to apply , however we must work a little harder in order to translate the principle into a mathematical statement .we will start off by considering a problem proposed by ernst zermelo in 1931 .suppose a boat , which can sail at a constant rate relative to the water on which it sits , wishes to navigate from point to point as quickly as possible .if the water is at rest , then the captain should steer along a straight line joining to .more generally , the captain should steer along a geodesic if the surface of the water may not be taken to be flat , for example if the points a and b are far enough apart that the earth s curvature should be taken into account .the navigation problem for the captain in this case corresponds to finding the geodesics of some riemannian metric .( 0,0 ) # 1#2#3#4#5 ( 3986,2902)(3447,-4451 ) ( 5701,-3136)(0,0)[lb ] ( 7100,-1850)(0,0)[lb ] ( 4101,-4411)(0,0)[lb ] ( 4276,-1970)(0,0)[lb ] ( 4368,-3137)(0,0)[lb ] now let us suppose that the body of water on which the boat sits is not at rest , but instead moves with some velocity , which we call the drift .the absolute velocity of the boat is now the sum of two components : the motion of the boat relative to the water , , and . in order to find the fastest route between and the captain must clearly take the drift into account . in order to do this ,let us first consider the simplest situation , where the surface of the water is a plane and the drift is a constant vector , not changing from point to point .this situation is shown in figure [ zermdiag ] .we will assume for convenience that the speed of the boat relative to the water is , a constant .we will also assume that the speed of the drift is less than the speed of the boat relative to the water , i.e. .let s define and work out how long it will take the boat to get from to assuming that is constant . in this case , the position of the boat relative to at time will be = t ( + ) .supposing that the boat arrives at at time , we have = t ( + ) .[ dteqn ] we wish to solve for t as a function of and .we can do this easily by looking at the equations we get from dotting both sides of ( [ dteqn ] ) with and : & = & t ( + ^2 ) , + ^2 & = & t^2(^2 + 2 + ^2 ) . noting that and eliminating between the equations gives t [ ] = - .the condition that the speed of the drift is less than guarantees that the denominators are positive and that \geq 0 ] .we can also check that obeys a _ triangle inequality _ : t[_1+_2 ] t[_1]+t[_2 ] .this means that the time which we have found is in fact the _ least _ time to travel between and , because we can not reduce the time by travelling along the sides of a triangle with base .a simple limiting argument shows that any curve from to will take longer to traverse than .now we can consider a more general problem , where the boat is navigating in a riemannian manifold , with metric . the drift is a vector field on , which may vary from point to point and whose length is always less than .suppose the captain steers the boat so as to travel along a curve , with , . 
in order to find the time taken to traverse this curve , we can approximate it with lots of straight sections on which the metric and the drift are roughly constant . we have worked out how long it takes to travel along such a line segment and we can simply add these times up . passing to a limit , we find that the time taken to travel from to along this curve is t [ ] = _ a^b f[^i(s ) , ^i(s ) ] ds [ time ] where f[x^i , y^i]/c & = & + ( x^i , y^i ) + & = & + + & = & - , [ deff ] with this problem seems somewhat artificial as imagining a boat navigating a general curved space is rather strange .this set up is very natural , however , in the context of sound rays .if is the acoustical metric of a material ( so that a ray moving in the direction of the unit vector moves with velocity ) and is a bulk motion of the material , then tells us the time the sound would take to travel along .we are now in a position to make a mathematical statement of fermat s principle for a sound wave propagating through a moving medium .the sound rays are the paths which extremise the time along the path , or equivalently the optical length , : l = c t = 0 .before we discuss what this means for the study of sound waves in a moving atmosphere , we will first discuss briefly a larger class of problems which have a similar form to ours .we have thus far been speaking somewhat loosely about geometries , without describing exactly what we mean . for our purposes ,the particular type of geometry which is of interest is a _although named after paul finsler , the concept of a finsler geometry was introduced by riemann in the same lecture that he proposed what is now known as riemannian geometry .the defining feature of a finsler geometry is that for suitably well behaved curves , one can define a curve length . in order to do so, one first has to define a _finsler function_. the function of ( [ deff ] ) is a special case .a finsler function , in addition to an assumption on its smoothness , is required to have three properties * _ positivity : _ \geq 0 ] , for * _ subadditivity : _ \leq f[x , y_1]+f[x , y_2]$ ] given a finsler function , we can then define the length of a curve from to by : l [ ] = _a^b f[^i(s ) , ^i(s ) ] ds .[ finslen ] condition 1 ensures that the length of any non - trivial curve is positive .condition 2 ensures that the length of a curve does not depend on its parameterisation .condition 3 is necessary so that the problem of finding curves of minimal length is well posed .this means that we can talk about the _ geodesics _ of as being curves of minimal length between two points .note that the ` length ' defined in this way by the of the previous section is _ not _ the usual length of the curve , so the geometry defined by this differs from the euclidean geometry of the plane . whilst riemannian geometry is fairly well understood ,finsler geometry in general is much less well studied .this is mainly because of the sheer variety of possible finsler functions , which can be very exotic .one of the simplest classes of finsler function , into which our function of the previous section falls , are the _ randers metrics_. 
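before specialising to the randers form , a small numerical sanity check of these three conditions for the zermelo travel - time function derived above may be helpful . the flat metric , the choice c = 1 and the particular drift used below are assumptions made only for this illustration .

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([0.4, 0.0])        # assumed constant drift, |W| < c = 1
lam = 1.0 - W @ W               # lambda = c^2 - |W|^2

def F(y):
    # travel time for a straight displacement y at unit speed relative to the water
    wy = W @ y
    return (np.sqrt(lam * (y @ y) + wy ** 2) - wy) / lam

# condition 2, positive homogeneity: F(k y) = k F(y) for k > 0
y, k = rng.normal(size=2), 3.7
print("homogeneity defect:", abs(F(k * y) - k * F(y)))      # ~ machine precision

# condition 3, subadditivity: F(y1 + y2) <= F(y1) + F(y2)
violations = sum(
    F(y1 + y2) > F(y1) + F(y2) + 1e-12
    for y1, y2 in (rng.normal(size=(2, 2)) for _ in range(10_000))
)
print("triangle-inequality violations:", violations)        # expected 0

# condition 1 and a physical sanity check: downstream vs. upstream unit steps
print(F(np.array([1.0, 0.0])), F(np.array([-1.0, 0.0])))    # 1/(1+0.4) and 1/(1-0.4)
```

the last line reproduces the elementary result that a unit step taken with the drift costs a time 1/(c+|w|) and a unit step against it costs 1/(c-|w|) .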
the finsler function of a randers metric is given in terms of a riemannian metric and a one - form as = \sqrt{a_{ij}(x ) y^i y^j}+b_i(x ) y^i .\label{randers}\ ] ] this is a good finsler function provided , where is the matrix inverse of .one reason to study randers metrics becomes apparent when we consider the equations satisfied by a curve which is a critical point of ( [ finslen ] ) when is the flat metric in .since we are free to choose the parameterisation , we can assume that for the curve we have and we then find that is a critical point when = ( ) .[ curleq ] we see that follows the path of a particle of mass and charge moving in a magnetic field with unit speed .for this reason , the extreme curves of are sometimes referred to as _ magnetic geodesics_. for a general , we find the generalisation of the lorentz force law in a curved space , with acting as the vector potential for the magnetic field . by extending the notion of a metric to allow finsler geometries , we have brought the problem of charged particles in a magnetic field into the realm of pure geometry . as an interesting example , we take , , with , and we will consider the randers metric given by ( [ randers ] ) with : a_ij= _ ij , b_1 = , b_2 = 0 .[ randmet1 ] where and are fixed real numbers .this is a good finsler metric , provided that .if , we see that this randers metric in fact corresponds to the riemannian metric considered in [ hpsec ] , that of the hyperbolic plane of radius in its ` upper half - plane ' form .if is non - zero , corresponds to a vector potential for the magnetic field which is everywhere directed straight out of the plane and which has strength , independent of position .thus the geodesics of this randers metric will correspond to a charged particle moving in a uniform magnetic field in the hyperbolic plane . in order to find the geodesics, we have to find curves which extremise the length l = ds ( + ) . such a curve must satisfy the _ euler - lagrange _ equations : in the case where , we know from above that the geodesics are circles which meet the line at right angles .we also know that when we have a constant magnetic field in the usual euclidean plane that the particle trajectories are circles .it seems reasonable then to guess that with non - zero , the geodesics remain circular .we can consider a possible solution of the form notice that the circle is traversed in a _ clockwise _ or _ anti - clockwise direction _ respectively for the two choices of sign .for a general finsler metric , unlike for a riemannian metric , the direction of travel is important and a curve will only be a geodesic when traversed in a particular direction . substituting into ( [ eleqns ] ) we find that we can satisfy the equations provided = = b. thus for we can either have clockwise circles with centres above or anti - clockwise circles with centres below .since , we can interpret both cases geometrically as meaning that circles which meet the line at an angle are geodesics , provided they are traversed in the appropriate sense . taking a limit where get larger and larger with their ratio fixed , we find that straight lines making an angle with the axis are also geodesics , provided again that they are traversed in the correct direction . a little more work shows that in fact any geodesic of this randers metric is one of these curves .figure [ magfig ] shows examples of the various cases for . 
for , we can simply reverse the sense of the curves .we noticed in [ zermelosec ] that fermat s principle tells us that a sound ray in a medium with acoustic metric with a wind will move along a geodesic of a related randers metric defined by & & a_ij = + , + & & b_i = - , where and .let s firstly consider what this means in the case where the speed of sound is constant and equal everywhere to and the wind speed is small in magnitude compared to .this is a reasonable approximation for sound waves in a realistic atmosphere .typically and even in the strongest hurricanes , does not exceed . for a constant , isotropic , speed of sound ,the acoustic metric is simply if we work to first order in , then we find that a_ij = _ ij , = - . making use of ( [ curleq ] ) and keeping track of the factors of , we find that the path followed by a sound ray is that of a particle of mass and charge moving with speed in a magnetic field given by where is the vorticity of the wind .this justifies our assertion in equation ( [ vort ] ) that the vorticity of the wind acts like a magnetic field on the sound rays .a simple consequence of this correspondence is observed by seismologists measuring the oscillations of the earth after a large earthquake .the spectral peaks are split by the earth s rotation . from our magnetic point of view , the effect of the rotation is to give rise to a constant magnetic field inside the earth .the splitting of the spectrum is in precise analogy with the zeeman effect which gives rise to a splitting of spectral lines for atoms in a magnetic field .an interesting problem to study involves combining a varying speed of sound with a wind in a stratified atmosphere .we can gain some insight by investigating a particular choice of acoustic metric and wind .let s suppose that the acoustic metric for the medium at rest is the hyperbolic metric discussed previously ds^2 = h_ijdx^i dx^j = l^2 we ll take this to model sound rays in an atmosphere varying with temperature near the ground , which we assume to be at .the speed of sound at ground level is given by .we ll suppose there s a horizontal wind with strength proportional to height , so that = ( -w , 0 ) .where is some fixed parameter with units of velocity .we can easily calculate the associated randers metric and find the time to go along a curve .it is best expressed in terms of new variables : u= , v = .this just represents a rescaling of the axes both parallel and perpendicular to the ground . in terms of , the time is given by t = ds ( + ) .thus , the sound rays follow the geodesics of a randers metric of precisely the form we considered in [ unhyp ] , with given by ! as a simple application , we can consider the problem of tracing the sound emanating from a point source , at ground level , which is at . we know that the sound rays in the coordinates are circles ( and lines )which meet the line at an angle .we also know that these circles have a clockwise sense if their centre is above and an anti - clockwise sense if it s below .this is already enough information to draw the rays shown in figure [ soundfig ] for the particular value .we sketch outward directed geodesics which have a common starting point , .the straight line geodesic through plays an important role as a _separatrix _ , which separates two different behaviours for the trajectories . 
in this case , any ray emitted to the left of the separatrix will return to the ground to the left of , while a ray emitted to the right returns to ground to the right .if we suppose that is emitting sound uniformly in all directions , we see that more of the energy emitted is absorbed by the ground on the left of than on the right . in this case three times as much , which we can see by considering the angle the straight geodesic makes with the ground .this example has three free parameters : , and . by choosing theseappropriately , we can match ( for example ) the speed of sound , the speed of the wind and the the wind shear at ground level to a real , more complicated profile .in fact , in we showed that it is possible to construct a model based on the hyperbolic plane with four free parameters , so that one can additionally set the rate of change of speed of sound with height at ground level as well .in this article we have sought to show that the motion of a charged particle moving in two - dimensional lobachevsky space , or the hyperbolic plane , equipped with a non - uniform magnetic field can provide a useful model for sound rays in a moving medium with a gradient in the refractive index .we have mentioned that in its three - dimensional version the geodesics of lobachevsky space provide a useful model for the motion of light rays near the event horizon of a non - rotating black hole .if the black hole is rotating then coriolis type effects , referred to in general relativity as the rotation of inertial frames , provide an effective magnetic field .these two examples by no means exhaust the possible applications of hyperbolic geometry to physics .two - dimensional surfaces abound in nature and if they have negative gauss curvature , sometimes called _ anti - clastic _ at a point , the surface can not lie on one side of its tangent plane at that point .thus a finite smoothly embedded surface without edges in euclidean space can not have everywhere negative gauss curvature , but a finite portion of a surface with edges may .a simple example is provided by a holly leaf .an example of great current physical interest , following the 2010 nobel prize to andre geim and konstantin novoselov is a graphene surface containing topological defects called _ disclinations _ in which some of the hexagonal lattice cells have been replaced by heptagons . the electrical and other properties of such surfaces are of great interest , and their study entails solving the dirac equation in a portion of two - dimensional lobachevsky space .the motion of charged particles on abstract finite riemann surfaces with no boundary or edges which have constant negative curvature and uniform magnetic field are of interest in statistical mechanics since for week magnetic field the motion is chaotic or _ ergodic _ as it is known technically .however as the magnetic field strength is increased there is a sudden phase transition and this ceases to be the case .three dimensional lobachevsky space has been invoked to model some aspects of _ quantum dots _ and the physics of four and five dimensional lobachevsky space and their conformal boundaries are currently of intense interest by string theorists since juan maldacena suggested the famous _ ads / cft correspondence _ which has led to a number of break throughs in quantum gravity and the quantum theory of black holes . without going into technical details, it may be of interest to outline some features of this fascinating idea . 
both in quantum field theory and in string theory it is customary to work in _ imaginary time _ .thus if we start in minkowski spacetime with spacetime metric ds ^2 = -c^2 dt ^2 + d*x * ^2 , we can pass to euclidean space with positive definite metric ds ^2 = c^2 d ^2 + d * x * ^2 . by setting t= i , _ real _ .often calculations may be performed more easily in euclidean space .we then pass back to minkowski spacetime by setting = - i t , t _ real _ .this process is called a _ wick rotation _ and it also works for some curved spacetimes .a case in point is anti - de - sitter spacetime .this is a solution of einstein s equations with a negative cosmological constant .it may be obtained from lobachevsky space by a simple wick rotation .maldacena s brilliant conjecture is that there is a precise correspondence between string theory in anti - de - sitter spacetime on the one hand , and a special type of quantum field theory , called a conformal quantum field theory , on the other hand , the latter being defined on the conformal boundary of anti - de - sitter spacetime .the conformal boundary of anti - de - sitter spactime is conformally related to minkowski spacetime if we `` wick rotate '' this conjecture we are led to conjecture a correspondence between string theory in lobachevsky space and quantum field theory on its conformal boundary , the latter being conformally related to euclidean space .we hope that in this article we have made it clear that not only is a knowledge of hyperbolic geometry and lobachevsky space useful for understanding trafffic noise , but it has a much much wider range of applications in theoretical physics ; from cosmology to condensed matter physics to string theory and planck scale physics .we commend to the interested reader its study and further exploitation . 99 g. c. stokes , on the effect of wind on the intensity of sound _ report of the brtish association , dublin _ ( 1857 ) 22 o. reynolds , on the refraction of sound by the atmosphere _ proc roy soc _ * a,22 * ( 1874 ) 531 - 548 c. sondhauss , on the refraction of sound , _ phil mag _ * 30 * 73 - 77 a. joets and r. ribotta , a geometrical model for the propagation of light rays in an anisotropic inhomogeneous medium _ optics communications _ * 107 * ( 1994 ) 200 - 204 g. w. gibbons and c. m. warnick , traffic noise and the hyperbolic plane, annals phys .* 325 * ( 2010 ) 909 [ arxiv:0911.1926 [ gr - qc ] ] .g. w. gibbons , c. a. r. herdeiro , c. m. warnick and m. c. werner , stationary metrics and optical zermelo - randers - finsler geometry , phys .d * 79 * ( 2009 ) 044022 [ arxiv:0811.2877 [ gr - qc ] ] .g. w. gibbons and c. m. warnick , universal properties of the near - horizon optical geometry , phys .d * 79 * ( 2009 ) 064031 [ arxiv:0809.1571 [ gr - qc ] ] . c. s. morawetz , geometrical optics and the singing of whales _amer math monthly _ * 85 * ( 1978 ) 548 - 554 d. e. weston and p.b .rowlands , guided acoustic waves in the ocean _ rep ._ * 42 * ( 1979 ) 347 - 387 r. c. weber et al .seismic detection of the lunar core _ science _ ( 2011 )gary gibbons was born exactly 300 years after gottfried wilhelm leibniz . he came up to st .catharine s college , cambridge , to read natural sciences in 1965 , specializing in theoretical physics . after taking part iii of the mathematical tripos he commenced research in damtp ,first under d. w. sciama and then s. w. 
hawking .after various post - doctoral appointments he was appointed to a lectureship in damtp in 1980 .he is now professor of theoretical physics in damtp .he was elected to the royal society in 1999 .he has been a professorial fellow of trinity college since 2002 .claude warnick came up to queens college , cambridge , in 2001 to read mathematics . after completing part iii of the mathematical tripos in 2005, he studied for a phd in the general relativity group of damtp , under gary gibbons .he has been a research fellow at queens since 2008 .
we survey the close relationship between sound and light rays and geometry . in the case where the medium is at rest , the geometry is the classical geometry of riemann . in the case where the medium is moving , the more general geometry known as finsler geometry is needed . we develop these geometries ab initio , with examples , and in particular show how sound rays in a stratified atmosphere with a wind can be mapped to a problem of circles and straight lines .
online reviews of products and services are an important source of knowledge for people to make their purchasing decisions .they contain a wealth of information on various product / service aspects from diverse perspectives of consumers .however , it is a challenge for stakeholders to retrieve useful information from the enormous pool of reviews .many automatic systems were built to address this challenge including generating aspect - based sentiment summarization of reviews and comparing and ranking products with regard to their aspects . in this studywe focus on the problem of review summarization , which takes as input a set of user reviews for a specific product or service entity and produces a set of representative text excerpts from the reviews .most work on summarization so far used sentence as the unit of summary .however , we do not need a complete sentence to understand its main communicative point .consider the following sentence from review of a coffee maker : ` my mum bought me this one , and i have to say it makes really awful tasing coffee ' . to a buyer looking for an opinion about the coffee maker ,only the part ` makes really awfultasing coffee ' is helpful .being able to extract such short and meaningful segments from lengthy sentences can bring significant utilities to users .it reduces their reading load as well as presents more readable summaries on devices with limited screen size such as smart phones .this motivates our main research question of how to extract concise and informative text from reviews of products and services that can be used for summarization .previous work has ignored the differences in product and service reviews , which is questionable . to the best of our knowledge ,this is the first work that studies and compares summarization for the two domains in details .we propose to to extract text segments that match against pre - defined syntactic patterns that occur frequently in reviews of both products and services .however , the extracted segments should be subjected to some selection or filtering procedure as not all matching candidates are likely to contain rich information .our proposed selection mechanism is based on the observation that segments containing users opinions and evaluations about product and service aspects carry valuable information .this motivates the use of output of joint sentiment topic models to discriminate between desirable and non - desirable text segments . since joint sentiment topic models capture sentiments that are highly associative with aspects , they are well suited for selecting informative segments from the pool of extracted candidates .the major contributions of our work are as follows . 1 . a new joint sentiment - topic model that automatically learns polarities of sentiment lexicons from reviews .identification of five frequently occuring syntactic patterns for extracting concise segments from reviews of both products and services .demonstration of the effective application of topic models to select informative variable - length segments for review summarization .4 . 
production of summaries that recall important information from review entities characteristics .the rest of the paper is structured as follows .we begin with the related literature in review summarization and joint sentiment topic models in sect .next we describe our extension to a topic model and its improvements over previous models in sect .we then introduce our proposed extraction patterns and procedures for segment selection in sect .we present our experiments and evaluation in sect . 5 and 6 andconclude in sect .we first look at how text excerpts are extracted from reviews in the existing literature .previous studies mainly generated aspect - based summary for products and services by aggregating subjective text excerpts related to each aspect .different forms of the excerpts include sentence , concise phrase composing of a modifier and a header term , adjective - noun pair extracted based on pos tagging and the term - frequency of the pair , and phrase generated by rules . some limitations of these previous work are i ) they only worked with the simplistic adjective - noun pairs or specific form of reviews such as short comments , and ii ) experiments were carried out with reviews of services only .our approach to extract text segments by matching variable - length linguistic patterns overcome these shortcomings and can generalize well for free - text reviews of both products and services .various methods for selecting informative text fragments were applied in previous research , such as matching against pre - defined or frequently occurring aspects , ranking frequency , and topic models .we are interested in the application of joint sentiment topic models as they can infer sentiment words that are closely associative with an aspect .this is an important property of polarity of sentiment words as pointed out in , and recently several joint topic models have been proposed to unify the treatment of sentiment and topic ( aspect ) .applications of these models have been limited to sentiment classification for reviews , but we hypothesize that they can also be helpful in summarization .we focus our next discussion on previous joint models in comparison to our proposed model .one of the earliest work is the topic - sentiment model ( tsm ) , which generates a word either from a topic or one of the two additional subtopics sentiments , but it fails to account for the intimate interplay between a topic / aspect and a sentiment .tsm is based on plsi whereas more recent work ( ) uses or extends latent dirichlet allocation ( lda ) . in the multi - aspect sentiment ( mas )model , customer ratings are incorporated as signals to guide the formation of pre - defined aspects , which can then be used to extract sentences from reviews that are related to each aspect . in the joint sentiment / topic ( jst )model , and the aspect and sentiment unification model ( asum ) , each word is assumed to be generated from a distribution jointly defined by a topic and a sentiment ( either positive or negative ) . 
as a result , jst and asumlearn words that are commonly associated with an aspect although the models are incapable of distinguishing between sentiment and non - sentiment lexicons .we propose a new model that leverages syntactic information to identify sentiment lexicons and automatically learn their polarities from the co - occurrences of words in a sentence .this allows the model to bootstrap using a minimum set of sentiment seed words , thereby alleviating the need for information that is expensive to obtain such as ratings of users for reviews or large lists of sentiment lexicons .our key modelling assumption for reviews is that a sentence expresses an opinion toward an aspect via its sentiment component .for example , in the sentence ` the service was excellent ' , only the word ` excellent ' carries the positive sentiment .this is not a new assumption as adjectives and adverbs are commonly considered the main source of sentiment in a sentence in existing literature .our model leverages on this type of knowledge to locate sentiment words in a sentence with relatively high confidence . the formal generative process of our model for the graphical representation in fig .[ fig : model ] is as follows ( see table [ tab : notations ] for the list of notations ) .* for every aspect , draw a distribution of non - sentiment words , and two distributions of sentiment words , , where denotes positive polarity and denotes negative polarity .* for each review , * * draw a sentiment distribution * * draw a topic distribution * * for each sentence in document , * * * choose a topic and a sentiment * * * choose words to discuss aspect and sentiment words to convey the sentiment toward .notice in the graphical model that the part of a sentence which emanates the sentiment is observed . in our implementation, we treat all adjectives and adverbs as and remaining words as in the generative procedure , but this is not a restriction imposed on the model .it is easy to incorporate prior knowledge about words that convey sentiment into the model .for example , we can instruct the model that words such as _ love , hate , enjoy , worth , disappoint _ are sentiment words , even though they are not adjective nor adverb . 
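a compact way to see what this generative story amounts to is to sample from it directly . the sketch below is only an illustration of the process described above : the vocabulary sizes , dirichlet hyperparameters and sentence lengths are placeholders , and the variable names are ours rather than the model 's notation .

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative sizes and symmetric Dirichlet hyperparameters (placeholders)
K, S = 5, 2             # number of aspects, sentiments (0 = positive, 1 = negative)
V_asp, V_sen = 200, 50  # aspect (non-sentiment) and sentiment vocabulary sizes
beta, gamma = 0.01, 0.01
alpha_theta, alpha_pi = 0.1, 0.5

# per-aspect word distributions: one over aspect words, one per sentiment polarity
phi = rng.dirichlet(np.full(V_asp, beta), size=K)            # phi[k]
psi = rng.dirichlet(np.full(V_sen, gamma), size=(K, S))      # psi[k, s]

def sample_review(n_sentences=4, n_asp_words=6, n_sen_words=2):
    theta = rng.dirichlet(np.full(K, alpha_theta))   # topic distribution of the review
    pi = rng.dirichlet(np.full(S, alpha_pi))         # sentiment distribution of the review
    review = []
    for _ in range(n_sentences):
        k = rng.choice(K, p=theta)                   # one aspect per sentence
        s = rng.choice(S, p=pi)                      # one sentiment per sentence
        asp_words = rng.choice(V_asp, size=n_asp_words, p=phi[k])
        sen_words = rng.choice(V_sen, size=n_sen_words, p=psi[k, s])
        review.append((k, s, asp_words.tolist(), sen_words.tolist()))
    return review

for k, s, aw, sw in sample_review():
    print(f"aspect {k}, sentiment {s}: aspect-word ids {aw}, sentiment-word ids {sw}")
```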
our main extension deals with the word smoother for sentiment words .each sentiment word is associated with a topic dependent smoothing coefficient for topic and a sentiment dependent smoothing coefficient for sentiment .we then impose that this modeling allows us to incorporate polarity of sentiment words as side information .the polarity of sentiment lexicon in a corpus is represented by the values of ; this is to assume that the polarity of is its intrinsic property as the corpus is about a specific domain .the topic dependent smoother is introduced to accommodate the different frequency of association between the sentiment word and different aspects ..list of notations used in the paper ( senti = sentiment , dist .= distribution ) [ cols= " < , < , < " , ] [ tab : highlyrated ] below we show examples of a restaurant review and a coffee maker review together with the segments extracted as their summaries .* review of restaurant * : _ the space is small but cozy , and the staff is friendly and knowledgeable .there was some great music playing , which kind of made me feel like i was on vacation some place far away from astoria .there are a lot of really great vegetarian options , as well as several authentic turkish dishes .if you re still wasting time reading this review , stop now and head straight for mundo .your stomach could already be filled with tons of deliciousness ._ * summary * : staff is friendly , space is small , some great music playing , several authentic turkish dishes , really great vegetarian options .* review of coffee maker * : _ i bought this machine about a week ago .i did not know which machine in the store to get , but the sales clerk helped me make the decision to buy this one .it is incredibly simple to use and the espresso is great . the crema is perfect too .my latte s rival those in coffee houses and i am saving a ton of money .the `` capsules '' must be ordered from the nespresso website , but they are usually at your door in 48 hours via ups ... _ * summary * : incredibly simple to use , espresso is great , crema is perfect . in both casesthe summaries express the gist of each review relatively well .looking at the sentence where a segment is extracted from , it can be seen that the segment conveys the main talking point of the sentence .additionally , each segment does express an opinion about some aspect of the coffee maker or the restaurant . recall that our key assumption in modeling reviews is that each sentence has a sentiment and an aspect . therefore extracting segmentsthe way we propose is likely to capture the main content of a sentence .in this paper we have describe a framework for extracting and selecting informative segments for review summarization of products and services .we extract candidate segments by matching against variable - length syntactic patterns and select the segments that contain top sentiment and aspect words learned by topic models .we proposed a new joint sentiment topic model that learns the polarity of aspect dependent sentiment lexicons .qualitative and quantitative experiments verify that our model outperforms previous approaches in improving the quality of the extracted segments as well as the generated summaries .
we present a novel summarization framework for reviews of products and services by selecting informative and concise text segments from the reviews . our method consists of two major steps . first , we identify five frequently occurring variable - length syntactic patterns and use them to extract candidate segments . then we use the output of a joint generative sentiment topic model to filter out the non - informative segments . we verify the proposed method with quantitative and qualitative experiments . in a quantitative study , our approach outperforms previous methods in producing informative segments and summaries that capture aspects of products and services as expressed in the user - generated pros and cons lists . our user study with ninety users is consistent with this result : individual segments extracted and filtered by our method are rated as more useful than those produced by previous approaches .
we are in the midst of a new wireless revolution , brought on by the adoption of wireless networks for consumer , military , scientific , and wireless applications .for example , the consumer potential is clearly evident in the exploding popularity of wireless lans and bluetooth - protocol devices .the military potential is also clear : wireless networks can be rapidly deployed , and the failure of individual nodes does not imply the failure of the network .scientific data - collection applications using wireless sensor networks are also gaining in numbers .these applications have sparked a renewed interest in network information theory . despite the recent progress ( see and references wherein ) , developing a unified theory for network information flow remains an elusive task . in our work , we consider , perhaps , the most simplified scenario of wireless networks . our network is composed of only three nodes and limited by the half - duplex and total power constrains . despite this simplicity, this model encompasses many of the special cases that have been extensively studied in the literature .these special _ channels _ are induced by the traffic generated at the nodes and the requirements imposed on the network .more importantly , this model exposes the common features shared by these special cases and allows for constructing universal cooperation strategies that yield significant performance gains .in particular , we focus here on three special cases , namely 1 ) relay channel , 2 ) multicast channel , and 3 ) conference channel .these channels are defined rigorously in section [ sec : general ] .we adopt a greedy framework for designing cooperation strategies and characterize the achievable rates of the proposed schemes .our analysis reveals the structural similarities of the proposed strategies , in the three special cases , and establishes the asymptotic optimality of such strategies in several cases .more specifically , our contributions can be summarized as follows . 1 .we propose a novel cooperation strategy for the relay channel with feedback .our scheme combines the benefits of both the decode - and - forward ( df ) and compress - and - forward ( cf ) strategies and avoids the idealistic assumptions adopted in earlier works .our analysis of the achievable rate of the proposed strategy reveals the diminishing gain of feedback in the asymptotic scenarios of low and large signal - to - noise ratio .we further establish the sub - optimality of orthogonal cooperation strategies ( ) in this * half duplex * setting .2 . inspired by the feedback strategy for the relay channel, we construct a greedy cooperation strategy for the multicast scenario .motivated by a greedy approach , we show that the _ weak _ receiver is led to help the _ strong _ receiver first . based on the same greedy motivation ,the strong user starts to assist the weak receiver after successfully decoding the transmitted codeword .we compute the corresponding achievable rate achieved by and use it to establish the significant gains offered by this strategy , as compared with the non - cooperative scenario .3 . motivated by the sensor networks application, we identify the conference channel model as a special case of our general formulation . 
in this model ,the three nodes observe correlated date streams and every node wishes to communicate its observations to the other two nodes .our proposed cooperation strategy in this scenario consists of three stages of _ multicast with side information _, where the multicasting order is determined by a low complexity greedy scheduler . in every stage , we use a cooperation strategy obtained as a generalization of the greedy multicast approach .this strategy highlights the central role of cooperative source - channel coding in exploiting the side information available at the receivers . by contrasting the minimum energy required by the proposed strategy with the genie - aided and non - cooperative schemes ,we establish its superior performance .we identify the greedy principle as the basis for constructing efficient cooperation strategies in the three considered scenarios .careful consideration of other variants of the three node network reveals the fact that such principle carries over with slight modifications .the rest of the paper is organized as follows .section [ sec : general ] introduces our modelling assumptions and notation . in section [ sec : relay ] , we present the new cooperation strategy for the wireless relay channel with _ realistic _ feedback and analyze its performance .building on the relay channel strategy , section [ sec : multicast ] develops the greedy cooperation framework for the multi - cast channel .we devote section [ sec : conference ] to the conference channel .finally , we offer some concluding remarks on section [ sec : conclusion ] . to enhance the flow of the paper , all the proofsare collected in the appendices .figure [ fig : conferencechannel ] illustrates a network consisting of three nodes each observing a different source . in the general case , the three sources can be correlated .nodes are interested in obtaining a subset or all the source variables at the other nodes . to achieve this goal , nodes are allowed to coordinate and exchange information over the wireless channel .mathematically , the three node wireless network studied in this paper consists of following elements : 1 . the three sources , drawn i.i.d . from certain known joint distribution over a * finite * set .we denote by the length- discrete source sequence at the -th node . throughout the sequel , we use capital letters to refer to random variables and small letters for realizations .we consider the discrete - time _ additive white gaussian noise _ ( awgn ) channel . at timeinstant , node receives where is the transmitted signal by node- and is the channel coefficient from node to . to simplify the discussion ,we assume the channel coefficients are symmetric , i.e. , .these channel gains are assumed to be known _ a - priori _ at the three nodes .we also assume that the additive zero - mean gaussian noise is spatially and temporally white and has the same unit variance ( ) .we consider half - duplex nodes that can not transmit and receive _ simultaneously _ using the same degree of freedom . without loss of generality , we split the degrees of freedom available to each node in the temporal domain , so that , at each time instant , a node- can either transmit ( _ t - mode _ , ) or receive ( _ r - mode _ , ) , but never both . due to the half - duplex constraint , at any time instant ,the network nodes are divided into two groups : the t - mode nodes ( denoted by ) and the r - mode nodes ( ) .a partition is called a network state .4 . 
let denote the average transmit power at the -th node during the network state .we adopt a short - term power constraint such that the total power of all the t - mode nodes at any network state is limited to , that is , 5 .we associate with node- an index set , such that indicates that node- is interested in obtaining from node- ( ) . 6 . at node- , a causal joint source - channel encoder converts a length- block of source sequence into a length- codeword .the encoder output at time is allowed to depend on the received signal in the previous instants , i.e. , in the special case of a separate source - channel coding approach , the encoder decomposes into : * a source encoder maps into a node message , i.e. , , ] ( see ) .hence , the optimal time , assuming the other variables remain fixed , can be simply determined by the intersection point of the two associated line segments , as illustrated in fig .[ fig : linebd ] . on the other hand , and characterized by more complicated expressions due to the dependency of and upon the time - division parameters .our next result finds upper bounds on and which allow for the same simple line - crossing interpretation as .[ lem : up_bds ] the achievable rate of the feedback scheme is upper bounded by the achievable rate of compress - and - forward is bounded by please refer to appendix [ appex : up_bds ] .interestingly , fig .[ fig : linebd ] encodes a great deal of information regarding the performance of the three schemes . for example , when , the intersection point corresponding to decode - forward would fall below the flat line associated with the relay - off rate .more rigorously , we have the following statement .[ thm : relaycomph ] 1 .if then .2 . if then .3 . if then . please refer to appendix [ appex : relaycomph ] .theorem [ thm : relaycomph ] reveals the fundamental impact of channel coefficients on the performance of the different cooperation strategies .in particular , the df strategy is seen to work well with a `` strong '' source - relay link .if , at the same time , the relay - destination link is stronger , then one may exploit feedback , i.e. , , to improve performance .the next result demonstrates the asymptotic optimality of the feedback scheme in the limit of large or .[ thm : relayasymh ] 1 . as increases , both df- and fb - scheme achieve the optimal beam - forming benchmark , while cf - scheme is limited by a sub - optimal rate .2 . as increases , both cf- and fb - scheme achieve the optimal multi - receiver benchmark , while df - scheme only approaches to a sub - optimal rate .the proof of theorem [ thm : relayasymh ] is a straightforward limit computation , and hence , is omitted for brevity .so far we have kept the total power constant .but in fact , the achievable rate as a function of offers another important dimension to the problem .first , we investigate the low power regime , which is greatly relevant to the wide - band scenario . in this casewe study the slope of the achievable rate with respect to ( i.e. , ) .note that the relay - off benchmark has a slope .[ thm : relaylowp ] let and be a shorthand notation , then 1 .when and 2 . with .3 . with .please refer to appendix [ appex : relaylowp ] .it follows from theorem [ thm : relaylowp ] that given , df cooperation delivers a larger slope than the relay - off , i.e. 
, however , cf cooperation does not yield any gain in the low power regime .similarly , we see that the cf stage of the proposed fb becomes useless , and hence , the scheme reduces to the df approach in the low power regime .the reason lies in the fact that for small , the channel output is dominated by the noise , and hence , the compression algorithm inevitably operates on the noise , resulting in diminishing gains .we next quantify the snr gain of the three schemes in the high power regime , that is , to characterize as .[ thm : relayhighp ] following the same shorthand notations as in theorem [ thm : relaylowp ] , we obtain 1 . given , - \log\bigl[f_2(\theta , r_{12},h_{13})h_{13}^2\bigr]}.\ ] ] 2 . + where 3 . with . please refer to appendix [ appex : relayhighp ] .theorem [ thm : relayhighp ] reveals the fact that strict feedback ( ) does not yield a gain in high power regime .the reason for this behavior can be traced back to the half - duplex constraint .when , the destination spends a fraction of time transmitting to the relay , which cuts off the time in which it would have been listening to the source in non - feedback schemes .such a time loss reduces the pre - log constant , which can not be compensated by the cooperative gain when p becomes large . at this point, we wish to make a side comment contrasting the half - duplex constraint with orthogonal relay channels .the orthogonal cooperation framework was recently proposed as a practical way to address the half - duplex requirement . for simplicity , let s consider the non - feedback scenario and assume that the available bandwidth is hz , and hence , the total resources available to every node in network is real dimensions per second .the half - duplex constraint dictates * only * orthogonality at each node , where the available degrees of freedom , in the time - frequency plane , is splitted into two parts .the node uses the first part to receive and the second to transmit . in the orthogonal cooperation approach, however , one imposes orthogonality at the network level ( i.e. , no two nodes can now transmit in the same degree of freedom ) .in particular , the channel is split into two sub channels ( either in time domain or frequency domain ) , where the source uses one of the sub - channels to transmit information to the relay and destination , and the relay uses the other sub - channel to transmit to the destination .one can now see that this orthogonalization is sufficient but * not * necessary to satisfy the half - duplex constraint .figure [ fig : orthogonal ] shows the orthogonal cooperation scheme which splits the channel in the time domain .when relay sends , it can use either the df or cf strategies . using the same argument as in the previous part ,one obtains the following achievable rate for the orthogonal df strategy scheme , it is now clear that is just a special case of where , for any , one can obtain the corresponding by setting in .one can use the fact that is not necessarily the optimal power assignment that maximizes to argue for the sub - optimality of the orthogonal df strategy ( i.e. 
, ) .more generally , the same argument can be used to establish the sub - optimality of any orthogonal cooperation strategy .we conclude this section with simulation results that validate our theoretical analysis .figure [ fig : ratevsp ] reports the achievable rate of various schemes , when , , and .this corresponds to the case when the source - relay channel is a little better than the source - destination channel , and the relay - destination channel is quite good .this is the typical scenario when feedback results in a significant gain , as demonstrated in the figure . in the figure, we also see the sub - optimality of orthogonal cooperation strategies .figure [ fig : ratevsh23 ] reports the achievable rates of various schemes , when , , and , as we vary the relay - destination channel gain .we can see that as the relay - destination channel becomes better , the advantage of feedback increases .figures [ fig : relay_p2 ] , [ fig : relay_p001 ] , [ fig : relay_p100 ] illustrate regions in the plane ( ) corresponding to the best of the three strategies .it is seen that feedback can improve upon both df and cf strategies in certain operating regions . however , as predicted by our analysis , such gain diminishes when either or .overall , we can see that the proposed fb cooperation scheme combines the benefits of both the df and cf cooperation strategies , and hence , attains the union of the `` nice '' properties of the two strategies . on the other hand ,the gain offered by feedback seems to be limited to certain operating regions , as defined by the channel gains , and diminishes in either the low or high power regime .the relay channel , considered in the previous section , represents the simplest example of a three - node wireless network .a more sophisticated example can be obtained by requiring node- to decode the message generated at node- .this corresponds to the multicast scenario .similar to the relay scenario , we focus on maximizing the achievable rate from node- to both node- and , without any loss of generality .the half - duplex and total power constraints , adopted here , introduce an interesting design challenge .to illustrate the idea , suppose node- decides to help node- in decoding . in this case , not only does node- compete with the source node for transmit power , but it also sacrifices its listening time for the sake of helping node- .it is , therefore , not clear _ a - priori _ if the network would benefit from this cooperation . in the following , we answer this question in the affirmative andfurther propose a greedy cooperation strategy that enjoys several _ nice _ properties .in a recent work , the authors considered another variant of the multicast channel and established the benefits of receiver cooperation in this setup .the fundamental difference between the two scenarios is that , in , the authors assumed the existence of a dedicated link between the two receivers .this dedicated link was used by the _ strong _ receiver to help the _ weak _ receiver in decoding through a df strategy . as expected ,such a cooperation strategy was shown to strictly enlarge the achievable rate region . in our work, we consider a more representative model of the wireless network in which all communications take place over the same channel , subject to the half - duplex and total power constraints . 
despite these constraining assumptions ,we still demonstrate the significant gains offered by receiver cooperation .inspired by the feedback - relay channel , we further construct a greedy cooperation strategy that significantly outperforms the df scheme in many relevant scenarios . in the non - cooperative scenario , both node- and node- will listen all the time , and hence , the achievable rate is given by due to the half - duplex constraint , time is valuable to both nodes , which makes them selfish and unwilling to help each other .careful consideration , however , reveals that such a _ greedy _ approach will lead the nodes to cooperate .the enabling observation stems from the feedback strategy proposed for the relay channel in which the destination was found to get a higher achievable rate if it sacrifices some of its receiving time to help the relay .motivated by this observation , our strategy decomposes into three stages , without loss of generality we assume , 1 ) lasting for a fraction of the frame during which both receivers listen to node- ; 2 ) occupying fraction of the frame during which node- sends its compressed signal to node- ; and 3 ) ( the rest fraction ) during which node- and help node- finish decoding .one major difference between the multicast and relay scenarios is that in the second stage the source can not send additional ( new ) information to node- , for it would not be decoded by node- , thus violating the multicast requirement that both receivers obtain the same source information . here , we observe that the last stage of cooperation , in which node- is helping node- , is still motivated by the greedy approach .the idea is that node- will continue transmitting the same codeword until both receivers can successfully decode .it is , therefore , beneficial for node- to help node- in decoding faster to allow the source to move on to the next packet in the queue . a slight modification of the proof of lemma [ lem : fb ] results in the following .[ lem : multicast ] the achievable rate of the greedy strategy based multicast scheme is given by where and denotes the correlation between during state .the `` sup '' operator is taken over the total power constraint .we observe that the df multicast scheme corresponds to the special case of , which has a rate the cut - set upperbounds give rise to the two following benchmarks : beam - forming and multi - receiver .similar to the relay channel scenario , we examine in the following the asymptotic behavior of the greedy strategy as a function of the channel coefficients and available power .[ them : multicastlim ] 1 .the greedy cooperative multicast scheme strictly increases the multicast achievable rate ( as compared to the non - cooperative scenario ) .the greedy strategy approaches the beam - forming benchmark as increases , i.e. , 3 .the greedy strategy approaches the multi - receiver benchmark as increases , i.e. , 4 .as , the slope of the greedy strategy achievable rate is given by 5 .as , the snr gain with . please refer to appendix [ appex : multicast ] .parts 2 ) , 3 ) demonstrate the asymptotic optimality of the greedy multicast as the channel gains increase ( the proof follows the same line as that of theorem [ thm : relayasymh ] ) . on the other hand ,we see that the large - power asymptotic of the multicast channel differs significantly from that of the relay channel . 
in the relay case ( theorem [ thm : relayhighp ] ) , the contribution of feedback diminishes ( ) in this asymptotic scenario , but cooperation was found to be still beneficial , that is . to the contrast , the gain of receiver cooperation in the multicast channel disappears as increases .this is because , unlike the relay scenario , at least one receiver must cut its listening time in any cooperative multicast scheme due to the half - duplex constraint .such a reduction induces a pre - log penalty in the rate , which results in substantial loss that can not be compensated by cooperation as , and hence , the greedy strategy reduces to the non - cooperative mode automatically .figure [ fig : multicastpchange ] compares the achievable rate of the various multicast schemes where the df cooperation strategy is shown to outperform the non - cooperation scheme .it is also shown that optimizing the parameter provides an additional gain ( note in the figure corresponds to ) .figure [ fig : multicastpchangeh12small ] reports the achievable rate of the three schemes when . in this case , it is easy to see that df strategy yields * exactly * the same performance as the non - cooperative strategy .on the other hand , as illustrated in the figure , the proposed greedy strategy is still able to offer a sizable gain .figure [ fig : multicasth23change ] illustrates the fact that the gain of greedy strategy increases as increases .the non - cooperation scheme is not able to exploit the inter - receiver channel , and hence , its achievable rate corresponds to a flat line .the df scheme can benefit from the inter - receiver channel , but its maximum rate is limited by , whereas the greedy strategy achieves a rate as .arguably the most demanding instantiation of the three - node network is the conference channel . here, the three nodes are assumed to observe correlated data streams and every node is interested in communicating its observations to the other two nodes . in a first step to understand this channel , oneis naturally led to applying cut - set arguments to obtain a lower bound on the necessary bandwidth expansion factor . to satisfy the conference channel requirements , every node needs to transmit its message to the other two nodes and receive their messages from them . due to the half duplex constraint, these two tasks can not be completed simultaneously .take node- as an example and consider the transmission of a block of observations to the other two nodes using channel uses . to obtain a lower bound on the bandwidth expansion factor, we assume that node- and node- can fully cooperate , from a joint source - channel coding perspective , which converts the problem into a point - to - point situation .then node- only needs to randomly divide its source sequences into bins and transmit the corresponding bin index . with channel uses ,the information rate is .the channel capacity between node- and the multi - antenna node- is . in order to decode at node- with a vanishingly small error probability, the following condition must be satisfied , similarly , with full cooperation between node- and node- , the following condition is needed to ensure the decoding of the sequence at node- with a vanishingly small error probability , these two genie - aided bounds at node- imply that the minimum bandwidth expansion factor required for node- is .similarly , we can obtain the corresponding genie - aided bounds for node- and node- , , . 
to satisfy the requirement for all these three nodes ,the minimum bandwidth expansion factor for this half - duplex conference channel is therefore at this point , we remark that it is not clear whether the genie - aided bound in ( [ genie ] ) is achievable .moreover , finding the optimal cooperation strategy for the conference channel remains an elusive task .however , inspired by our greedy multicast strategy , we propose in the following a modular cooperation approach composed of three _ cooperative multicast with side information _ stages . in this scheme ,each node takes a turn to multicast its information to the other two nodes .the multicast problem here is more challenging than the scenario considered in section [ sec : multicast ] due to the presence of correlated , and different , side information at the two receive nodes . as argued in the following section , in order to fully exploit this side information , one must adopt a cooperative source - channel coding approach in every multicast stage .furthermore , from one stage to the next , the side - information available at the different nodes changes .for instance , assuming the first stage is assigned to node- , then the side - information available at node- and will enlarge after the first stage to and , respectively . now , suppose node- is scheduled to multicast next , then the rate required by node- is now reduced to , thanks to the additional side - information .thus , one can see that the overall performance depends on the efficiency of the scheduling algorithm . in section [ scheduler ] , we present a greedy scheduling algorithm that enjoys a low computational complexity and still achieves a near - optimal performance .the relay and multicast scenarios considered previously share the common feature of the presence of only one source . here , we expand our investigation of the multicast scenario by allowing the receive nodes to observe correlated , and possibly different , side information . to simplify the presentation , without sacrificing any generality , we assume that node- is the source and node- and are provided with the side information and , respectively . before presenting our greedy cooperation strategy , we study the non - cooperative scenario where the two receive nodes are not allowed to communicate .this investigation yields an upper bound on the bandwidth expansion factor achievable through cooperation .one can obtain a simple - minded transmission scheme by separating the source and channel coding components . in this approach , by appealing to the standard random binning argument , node- encodes the source sequence at the rate which allows both receivers to decode with the aid of their side - information .such a source code is then sent via the multicast channel assuming no receiver cooperation , which corresponds to a transmission rate .therefore , the above scheme would require channel uses per source symbol . in , the source code based on random binningreflects the worst - case scenario corresponding to the least correlated receiver node , i.e. , .a more efficient solution utilizes a _ nested binning _approach that combines the information required by the two receive nodes into a single hierarchical binning scheme . 
for the convenience of exposition , we assume that .a source sequence is randomly assigned to one of bins .this is the low - level indexing sufficient for node- to decode with side - information .these indices are then ( randomly ) divided into equal - sized groups , which corresponds to the random binning approach for node- .therefore , a source sequence is associated with an index - pair , where ] identifies the bin index within a group .given side - information ( more correlated with the source ) , node- needs only the group index to recover the source sequence .but the low - level bin index is necessary for node- to decode . in summary ,the above nested binning scheme permits the source node to send to node- while only to node- .such a structured message is called the _ degraded information set _ in where is the `` common '' information for both receivers and the `` private '' information required by only one of the two receivers .the corresponding rate set can be written as , where is the rate associated with the private message for node- , is the rate associated with the common message , and node- receives no private information .this broadcast channel with a degraded message set has been studied in ( and references wherein ) . [ them : broadwithdegra ] the capacity region of broadcast with degraded information set is the convex hull of the closure of all satisfying for some joint distribution .if the broadcast channel itself is degraded then the above three constraints can be simplified .consider the case where forms a markov chain .if the last constraint is satisfied ( in which case node- can decode both the private and common message ) , node- can also decode both parts of the source message although it does not need the private part . on the other hand ,if , the problem can be reduced to the conventional ( degraded ) broadcast setting where the rate / is for node-/ and node- would automatically decode the message for the degraded node- .so , in this case , only the first two constraints are sufficient .the final step , in this approach , is to combine theorem [ them : broadwithdegra ] with our nested binning approach . in this case , the rate set is given by and and we obtain the following result . [lem : multicastsidedegraded ] for multicast with side - information , the achievable bandwidth expansion factor based on nested binning source coding and degraded information set broadcasting is given by 1 . if , 2 .if , for some .now , we are ready to describe our greedy cooperative source - channel coding approach . similar to the multicast scenario ,the receive nodes follow a greedy strategy to determine the order of decoding . due to the presence of side information , however , a more careful approach must be employed in choosing the _ strong _ receiver .to illustrate the idea , consider the following degenerate case , where , , and .although the channel between node- and node- is worse in this case , node- knows the information from the beginning because . so it can start to cooperate with node- from the very beginning .this toy example suggests that one should take the amount of side information available at each node into consideration . 
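as an aside , the two - level ( nested ) binning indices described above can be made concrete with a few lines of code , and binning of this kind reappears in the cooperative scheme below . this is a toy sketch : the numbers of sequences , bins and groups are arbitrary placeholders , whereas in the lemma the bin and group rates are tied to the conditional entropies of the source given each receiver s side information .

```python
import numpy as np

# Minimal sketch of the nested (two-level) binning indices described above.
# Sizes are illustrative; in the paper the low-level bins are sized for the
# receiver with less side information and the groups for the stronger one.

rng = np.random.default_rng(0)

n_sequences = 1024   # number of (typical) source sequences, illustrative
n_bins      = 64     # low-level bins: enough for the receiver with less side information
n_groups    = 8      # groups of bins: enough for the receiver with more side information

# Randomly assign every source sequence a low-level bin index, then randomly
# partition the bins into equal-sized groups.
bin_of_seq   = rng.integers(0, n_bins, size=n_sequences)
group_of_bin = rng.permutation(np.repeat(np.arange(n_groups), n_bins // n_groups))

def indices(seq_id):
    """Return (group index, bin index within the group) for a source sequence."""
    b = bin_of_seq[seq_id]
    g = group_of_bin[b]
    within = np.flatnonzero(group_of_bin == g).tolist().index(b)
    return g, within

g, w = indices(seq_id=42)
print(f"sequence 42 -> group {g}, bin-within-group {w}, full bin {bin_of_seq[42]}")
```

the point of the nesting is visible in the index pair : the receiver with stronger side information recovers the source from the group index alone , while the other receiver additionally needs the within - group index .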
in our scheme , each node calculates the expected bandwidth expansion factor assuming no receiver cooperation , , where denotes the link capacity .the receive node with the smaller is deemed as the _ strong _ node , and hence , will decode first .our definition of strong and weak highlights the cooperative source - channel coding approach proposed in this paper . without loss of generality , we assume . however , the `` weak '' node- may still decide to assist node- in decoding through a cf approach , in a way similar to section [ sec : multicast ] , hoping to benefit from node- s help after it decodes .after node- successfully decoding , with / without the additional help from node- , it coordinates with the source node to facilitate decoding at node- , in order to start the next round of multicast . to better describe the cooperative source - channel coding , we consider first the simple case where node- does not help node- .we randomly bin the sequences into bins and denote the bin index by ] , in which to destination .equally divide these messages into cells , index the cell number as .index the element in every cell as , m_{2}=2^ { n(1-t)\cdot r_{2}} ] is called the cell index .* state : * * source node : generate i.i.d .length- codewords with .label these sequences as ] . ** destination node : generate i.i.d .length- codewords with .index them as .generate i.i.d.length- sequences with .randomly partition the set ] .* state : * * relay node : randomly generate i.i.d .length- sequences with .index them as ] .partition the source message set into equal - sized cells .let be the message to be sent in block .suppose is the -th message in cell- and the cell index is in bin- and bin- respectively .for brevity we drop the block index in the following .* state : the source sends .* state : * * the source node knows that the cell index is in bin- , so it sends . * * the destination first selects that is jointly typical with .it then sends where is in the bin .* state : * * knowing the cell index is in bin- , the source node sends the corresponding . * * using the information received in state and , the relay gets an estimation of the cell index .suppose is in bin- .then it sends . in the following, code length is chosen sufficiently large . * at the end of :the destination has received and it decides a sequence if are jointly typical .there exists such a with high probability if * at the end of : at this stage , only the relay decodes the message . ** the relay estimates by looking for the unique such that are jointly typical . with high probability if * * knowing , the relay tries to decode by selecting the unique such that are jointly typical . with high probability if * * the relay calculates a list such that if are jointly typical . assuming decoded successfully at the relay , is selected if it is the unique . 
using the same argument as in , it can be shown that occurs with high probability if * * the relay computes another list such that if are jointly typical .* * finally , the relay declares is received if it is the unique .using the same arguement as in , one can show with high probability if * at the end of : * * the destination declares that was sent from the relay if there exists one and only one such that are jointly typical .then with high probability if * * after decoding , the destination further declares that was sent from the source if it is the unique such that are joint typical .assuming decoded correctly , the probability of error of is small if * * at first , the destination calculates a list , such that if are jointly typical . assuming decoded successfully at the destination , is declared to be the cell index if there is a unique . as in ,the decoding error is small if from the cell index and the message index within the cell , the destination can recover the source message . combining and , we have it follows from and that from and , we have the constraint thus if , , and are satisfied , there exist a channel code that makes the decoding error at destination less than . as mentioned in , strong typicality does not apply to continuous random variables in general , but it does apply to the gaussian variables .so the dmc result derived above applies to the gaussian . since is a degraded version of , we write where is gaussian noise with variance ( see for a similar analysis ) .first , we examine the constraint under the gaussian inputs . and so we observe that the correlation coefficient because neither the source nor the destination knows the codeword sent by the other duing the feedback state .thus , one has similarly , one has where so setting to solve for next , we examine the achievable rate expression . combining them together , we get similarly for , one has which gives rise to setting the noise variance , the proof is complete .we only show the upperbound of .the proof for is similar and thus omitted .setting shorthand notation , one has from that hence , which proves . as appendix [ appex : fb ], we first prove the result for dmc case , then apply the result to the gaussian channel .randomly bin all the sequence into bins by independently generating an index uniformly distributed on .let be the mapping function , such that .independently generate another bin index for every sequence by picking uniformly from .let be the set of all sequences allocated to bin .thus , every source sequence is associated with two bin indexes . * at state , generate i.i.d .length- sequence , each with probability , in which is the input distribution that maximizes .assign every bin index to one sequence $ ] .* at state , randomly generate i.i.d.length- at node- , each with probability .generate i.i.d .length- at node- , each with probability , in which . and the input distribution that maximizes .associate every bin index to one sequence pair . suppose we want to send source sequence at block , and , . for brevity of notation, we drop block index in the following . * state :node- sends .* state : * * node- knows is in , so it sends .* * at the end of state , node- gets an estimation of ( details will be given in the following ) , and suppose is in bin .then in state nodes- sends the corresponding . 
at the end of state : * at node-:at first , node- looks for the one and only one such that are jointly typical .then node- searches in the bin indexed by for source sequence such that are jointly typical .if it finds only one such sequence , it declares it has received , otherwise declares an error .* at node-:node- calculates a list , such that if are jointly typical . at the end of state , only node- needs to decode : * step 1 : node- declares it receives , if is the one and only one index such that are jointly typical . *step 2 : node- searches in the bin for the one and only one source sequence , such that are jointly typical and .if it finds such a unique one , it declares that is the source sequence .otherwise it declares an error . for node- are following error events : and when is sufficiently large , using the aep , .now consider , if channel code rate is less than the capacity , receiver will decode channel code with error probability less than . here, there are code words , and channel code length is , then the rate of channel code is .thus for sufficently large and , if which is the same as : because source code rate is , using the same argument as , one can get , if is sufficiently large .so if ( [ equ : n1condi ] ) is satisfied , and are sufficiently large , there exists a source - channel code that make the error probability at node- for node- , there are following error events : when is sufficiently large , .and if is satisfied , .now consider , the channel code rate is .so , for sufficiently large , if that is , now consider : follow the same steps in the , one has .so so if and and is sufficiently large , .together with and , one can get thus , if both and are satisfied , there exists a source - channel code that makes the error probability at node- .next step is to apply the result to the gaussian channel . in this case, we have inserting to and completes the proof .in view of in , implies that where we have used the total power constraint . to prove 2 ) , consider the upperbound for in .given the total power constraint , it is easy to verify that .therefore , the condition implies that the last statement of the theorem can be shown in a similar fashion using the upperbound in . since by , the two line segments in the expression intersect at some optimal ( see fig . [ fig : linebd ] ) .the corresponding rate is given by where we have set and according to the total power constraint .taking , the taylor expansion is sufficient to establish . to prove the lowerbound in ,note that with equality when and , which , together with , also proves the upperbound of in . on the other hand , as , it is seen from that , thus showing .the similar behavior holds for the feedback scheme , that is , as , in which case with the optimal approaches . the results for and follows from direct computation of large limit .we only show the last statement concerning the feedback scheme .as in the case of decode - forward , the line - crossing point gives the optimal and the associated rate is given by in which where we set and .taking , } \quad ( = \sigma_3 ^ 2(\infty)).\ ] ] denote , one has }{\frac{1-\alpha}{2}\logp + \frac{1}{2}\bigl [ \log f_1 + \log f_3 - \log f_2 -\alpha\log h_{13}^2\bigr]}.\ ] ] it follows that if which forces , that is , .here we only prove the part 1 ) of this theorem .parts 2 ) - 5 ) follow the same lines as the corresponding results in the relay case . 
to prove part 1 ), it suffices to show the statement for .the capacity of the multicast channel without cooperation is given by .with the assumption that , we have .note that the rate expression of admits the same line - crossing interpretation as in the relay case .thus , the intersection determines the optimal rate point .equate the two terms to solve which gives the corresponding rate therefore , using , one has which proves the theorem .parts 1 ) and 2 ) of this theorem follow from a straightforward limit calculation , so we only prove part 3 ) .the assumption becomes when . under this assumption , there are two different cases corresponding to different cost functions for the benchmark scheme : and . when , in which case and when , so part 1 ) of this theorem , without loss of generality , we only prove the case when . in this case , , . in the following , we will show that the genie - aided bound could be achieved using the following multicast order .when node- multicasts to both node- and node- using the proposed cooperative multicast with side - information scheme , from lemma [ lem : multicastsidefb ] we know it requires , where means the achievable rate of the following relay channel using the compress - forward scheme : node- is the source , node- acts as a relay that spends part of the time helping the destination using the cf scheme , and node- acts as the destination .to prove the second part of this theorem , without loss of generality , suppose is the optimal multicast order for the scheme that uses broadcast with degraded information set .then , just use the same order for the cooperative source - channel coding scheme based multicast with side - information .theorem [ them : multisideener ] shows that at every multicast step , the cooperative source - channel coding scheme outperforms the broadcast with degraded information set .thus even with this not necessarily optimal order , the cooperative source - channel coding scheme outperforms the scheme that uses broadcast with degraded information set with optimal order . [ figure caption : a geometric representation of fb- , df- and cf - relay schemes .the solid lines are for in , the dash - dotted for in , and the dashed for the upperbound of in .the various endpoints in the figure are ( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) , ( h ) , and ( i ) . ]
we consider a wireless network composed of three nodes and limited by the half - duplex and total power constraints . this formulation encompasses many of the special cases studied in the literature and allows for capturing the common features shared by them . here , we focus on three special cases , namely 1 ) relay channel , 2 ) multicast channel , and 3 ) conference channel . these special cases are judiciously chosen to reflect varying degrees of complexity while highlighting the common ground shared by the different variants of the three - node wireless network . for the relay channel , we propose a new cooperation scheme that exploits the wireless feedback gain . this scheme combines the benefits of decode - and - forward and compress - and - forward strategies and avoids the idealistic feedback assumption adopted in earlier works . our analysis of the achievable rate of this scheme reveals the diminishing feedback gain at both the low and high signal - to - noise ratio regimes . inspired by the proposed feedback strategy , we identify a greedy cooperation framework applicable to both the multicast and conference channels . our performance analysis reveals several _ nice _ properties of the proposed greedy approach and the central role of cooperative source - channel coding in exploiting the receiver side information in the wireless network setting . our proofs for the cooperative multicast with side - information rely on novel nested and independent binning encoders along with a list decoder .
the theory of relativistic hypercomputation ( i.e. , the investigation of relativity theory based physical computational scenarios which are able to solve non - turing - computable problems ) has an extensive literature and it is investigated by several researchers in the past decades , see , e.g. , , , , , , . for an overview of different approaches to hypercomputation ,see , e.g. , .it is well - known that hypercomputation is not possible in special relativity in the usual sense ( i.e. , the sense of malament hogarth spacetimes ) , see , e.g. , . in this paper , we show that it is possible to perform relativistic hypercomputation via ordinary computers ( turing machines ) in special relativity if there are faster than light ( ftl ) signals , e.g. , particles .we will also show that there have to be ftl signals if relativistic hypercomputation is possible in special relativity ( via turing machines ) , see thm.[thm - hc ] .it is interesting in and of itself to investigate the ( logical ) consequences of the assumption that ftl objects exist , independently of the question whether they really exist or not in our actual physical universe .logic based axiomatic investigations typically aim for describing all the theoretically possible universes and not just our actual one .moreover , so far we have not excluded the possibility of the existence of ftl entities in our actual universe ; and from time to time there appear theories and experimental results suggesting the existence of ftl objects .recently , the opera experiment , see , raised the interest in the possibility of ftl particles .contrary to the common belief , the existence of ftl particles does not lead to a logical contradiction within special relativity . for a formal axiomatic proof of this fact , see .however , it is interesting to note that , in contrast with this result , the impossibility of the existence of ftl inertial _ observers _ follows from special relativity , see , e.g. , .the investigation of ftl motion in relativity theory goes back ( at least ) to tolman , see , e.g. , .since then a great many works dealing with ftl motion have appeared in the literature , see , e.g. , , , , , , , to mention only a few .it is well - known that we can send information back to the past if there are ftl particles , see , e.g. , , .it is natural to try using this possibility to design computers with greater computational power .we will show that uniformly accelerated relativistic computers can compute beyond the church turing barrier via using ftl signals . in this section ,we show this fact informally . in sect.[sec - hc ] , we reconstruct our informal ideas of this section within an axiomatic theory of special relativity extended with accelerated observers .our first observation is that if we can send out an ftl signal with a certain speed , we also have to be able to send out arbitrarily fast signals , by the principle of relativity .prop.[prop - nolim ] is a formal statement of this observation . to informally justify this statement ,let us assume that we can send out an ftl signal by a certain experiment , say with speed .according to special relativity , for any ftl speed , say , there is a inertial reference frame ( moving relative to our frame ) according to which our signal moves with this speed . by the principle of relativity , inertial frames are experimentally indistinguishable ,see ( * ? ? 
?* , pp.149 - 159 ) , , .so the experiment which is configured in our reference frame as our original experiment is seen by this moving inertial frame as yielding an ftl signal moving with speed in our frame .therefore , in our ( or any other inertial ) reference frame , it is possible to send out an ftl signal with any speed .let us see the construction of our special relativistic hypercomputer .let the computer be accelerated uniformly with respect to an inertial observer , see fig.[fig - hc ] .there is an event with the following property : any event on the worldline of our uniformly accelerated computer is simultaneous with , according to the inertial observer comoving with the computer at , see , e.g. , ( * ? ? ?* fig.6.4 , p.173 ) , ( * ? ? ?* fig.5.13 , p.152 ) . [ figure fig - hc : worldline of the uniformly accelerated computer , with the label `` comoving observer at '' and coordinate marks 0 , 1 , 2 , 3 ] now let us show that this configuration can be used to decide non - turing - computable questions if there are ftl signals .let us set the computer to work on some recursively enumerable but non turing - computable problem , say the decision problem for the consistency of zf set theory ; the computer enumerates one by one all the consequences of zf .let us fix an event on the worldline of the programmer which is later than according to him .now , if the computer finds a contradiction , let it send out a fast enough signal which reaches the programmer before event .such a signal exists since , by our first observation , the computer can send out a signal which is arbitrarily fast with respect to his coordinate system ( i.e. , any half line in the `` upper '' half space determined by the comoving observer s simultaneity can be the worldline of the signal ) .therefore , if the programmer receives a signal between events and , he knows that zf is inconsistent ; and if there is no signal between and , he knows that the computer has not found any contradiction , so after event the programmer can conclude that there is no contradiction in zf set theory . in the same way , by this thought experiment using ftl signals , we can decide ( experimentally ) any recursively enumerable set of numbers . if there are no ftl signals , then the whole computation has to happen in the causal past of the event when the programmer learns the result of the computation .however , in special relativity , the computer remaining within the causal past of any event has only a finite time to compute , by the twin paradox theorem . that is why hypercomputation is not possible in special relativity without ftl signals .this argument is also the basis of proving that minkowski spacetime is not a malament hogarth spacetime .to formalize the result of sect.[sec - hc ] , we need an axiomatic theory of special relativity extended with accelerated observers . to introduce any axiomatic theory , first we have to fix the set of basic symbols of the theory , i.e. , what objects and relations between them we will use as basic concepts .
herewe will use the following two - sorted language of first - order logic parametrized by a natural number representing the dimension of spacetime : where ( bodies ) and ( quantities ) are the two sorts , ( observers ) , ( inertial observers ) and ( light signals ) are one - place relation symbols of sort , and are two - place function symbols of sort , is a two - place relation symbol of sort , and ( the worldview relation ) is a -place relation symbol the first two arguments of which are of sort and the rest are of sort .relations , and are translated as `` _ _ is an observer _ _ , '' `` _ _ is an inertial observer _ _ , '' and `` _ _ is a light signal _ _ , '' respectively . to speak about coordinatization, we translate as `` _ _ body coordinatizes body at space - time location _ _ , '' ( i.e. , at space location and instant ) . * quantity terms * are the variables of sort and what can be built from them by using operations and , * body terms * are only the variables of sort .relations , , , , , and where , , , , , , , , are arbitrary terms of the respective sorts are so - called * atomic formulas * of our first - order logic language .* formulas * are built up from these atomic formulas by using the logical connectives _ not _( ) , _ and _ ( ) , _ or _ ( ) , _ implies _ ( ) , _ if - and - only - if _ ( ) and the quantifiers _ exists _ ( ) and _ for all _ ( ) . to make them easier to read , we omit the outermost universal quantifiers from the formalizations of our axioms , i.e., all the free variables are universally quantified .we use the notation for the set of all -tuples of elements of .if , we assume that , i.e. , denotes the -th component of the -tuple . specially , we write in place of , and we write in place of , etc .we use first - order set theory as a meta theory to speak about model theoretical terms , such as models .the * models * of this language are of the form where and are nonempty sets , , and are unary relations on , and are binary functions and is a binary relation on , and is a relation on .formulas are interpreted in in the usual way . for the precise definition of the syntax and semantics of first - order logic ,see , e.g. , , ( * ? ? ?* , 2.2 ) .let us recall some of our axioms for special relativity .our first axiom states some basic properties of addition , multiplication and ordering true for real numbers .: : the quantity part is an ordered field , i.e. , + * is a field in the sense of abstract algebra ; and * the relation is a linear ordering on such that * * and * * holds . in the next axiom, we will use the concepts of time difference and spatial distance .the * time difference * of coordinate points is defined as : to speak about the spatial distance of any two coordinate points , we have to use squared distance since it is possible that the distance of two points is not amongst the quantities , e.g. 
, the distance of points and is .so in the field of rational numbers , and do not have distance just squared distance .therefore , we define the * squared spatial distance * of as : our next axiom is the key axiom of our axiom system of special relativity .this axiom is the outcome of the michelson - morley experiment , and it has been continuously tested ever since then .nowadays it is tested by gps technology .: : for any inertial observer , the speed of light is the same everywhere and in every direction ( and it is finite ) .furthermore , it is possible to send out a light signal in any direction everywhere : \big]\big ) .\end{gathered}\ ] ] let us note here that does not require ( by itself ) that the speed of light is the same for every inertial observer .it requires only that the speed of light according to a fixed inertial observer is a positive quantity which does not depend on the direction or the location .however , by , we can define the * speed of light * according to inertial observer as the following binary relation : \\ \rightarrow { \mathsf{space}^2}({\bar x},{\bar y})= v^2\cdot{\mathsf{time}}({\bar x},{\bar y})^2\big ] .\end{gathered}\ ] ] by , there is one and only one speed for every inertial observer such that holds . from now on, we will denote this unique speed by .our next axiom connects the worldviews of different inertial observers by saying that they coordinatize the same external " reality ( the same set of events ) . by the * event * occurring for observer at coordinate point , we mean the set of bodies coordinatizes at : : : all inertial observers coordinatize the same set of events : .\ ] ] from now on , we will abbreviate the subformula $ ] of to .the next two axioms are only simplifying ones .: : any inertial observer is stationary relative to himself : .\ ] ] our last axiom on inertial observers is a symmetry axiom saying that they use the same units of measurement . : : any two inertial observers agree as to the spatial distance between two events if these two events are simultaneous for both of them .furthermore , the speed of light is 1 for all observers : .\end{gathered}\ ] ] our axiom system is the collection of the five simple axioms above : to show that captures the kinematics of special relativity , let us introduce the * worldview transformation * between observers and ( in symbols , ) as the binary relation on connecting the coordinate points where and coordinatize the same ( nonempty ) events : map is called a poincar transformation iff it is an affine bijection such that , for all for which and , thm.[thm - poi ] shows that our streamlined axiom system perfectly captures the kinematics of special relativity since it implies that the worldview transformations between inertial observers are the same as in the standard non - axiomatic approaches . for the proof of thm.[thm - poi ] , see .[ thm - poi ] let .then is a poincar transformation if and are inertial observers .the so - called * worldline * of body according to observer is defined as : [ cor - line ] let .the is a straight line if and are inertial observers .to extend to accelerated observers , we need further axioms . 
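before moving to accelerated observers , the following numeric sketch illustrates the definitions above for d = 4 and units in which the speed of light is 1 ( as the symmetry axiom above stipulates ) . the defining property of a poincar transformation is taken here to be preservation of the quantity time ( x , y )^2 - space^2 ( x , y ) ; since the displayed formula is not reproduced above , this reading is an assumption of the sketch .

```python
import numpy as np

# Small numeric illustration of the definitions above, assuming d = 4 and c = 1.
# A Poincare transformation is taken (as an assumed reading) to be an affine
# bijection preserving time(x,y)^2 - space2(x,y); a Lorentz boost is one example.

def time_diff(x, y):
    return x[0] - y[0]

def space2(x, y):
    return float(np.sum((np.asarray(x[1:]) - np.asarray(y[1:])) ** 2))

def interval(x, y):
    return time_diff(x, y) ** 2 - space2(x, y)

def boost(v):
    """Lorentz boost along the first spatial axis, |v| < 1, speed of light 1."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

rng = np.random.default_rng(1)
L = boost(0.6)
for _ in range(3):
    x, y = rng.normal(size=4), rng.normal(size=4)
    fx, fy = L @ x, L @ y
    # the two printed numbers of each pair agree, i.e. the interval is preserved
    print(round(interval(x, y), 6), round(interval(fx, fy), 6))
```

a poincar transformation in general also allows a constant translation ; composing the boost above with a shift leaves the printed intervals unchanged as well , since they depend only on coordinate differences .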
we connect the worldviews of accelerated and inertial observers by the next axiom .: : at each moment of its world - line , each observer coordinatizes the nearby world for a short while as an inertial observer does .axiom is captured by formalizing the following statement : at each point of the worldline of an observer there is an inertial comoving observer such that the derivative of the worldview transformation between them is the identity map , see , e.g. , for details .we will also use the generalized ( localized ) versions of axioms and of assumed for every observer .: : observers coordinatize all the events in which they participate : : : in his own worldview , the worldline of any observer is an interval of the time axis containing all the coordinate points of the time axis where the observer coordinatizes something : \land\\ \big[{\ensuremath{\mathsf{w}}}(m , m,{\bar y})\land{\ensuremath{\mathsf{w}}}(m , m,{\bar z})\land y_1<t <z_1\rightarrow { \ensuremath{\mathsf{w}}}(m , m , t,0,\ldots,0)\big ] \land\\ \exists b \,\big[{\ensuremath{\mathsf{w}}}(m , b , t,0,\ldots,0 ) \rightarrow { \ensuremath{\mathsf{w}}}(m , m , t,0,\ldots,0)\big].\end{gathered}\ ] ] let us add these three axioms to to get a theory of accelerated observers : since ties the behavior of accelerated observers to the inertial ones and captures the kinematics of special relativity perfectly by thm.[thm - poi ] , it is quite natural to think that is a theory strong enough to prove the most fundamental theorems about accelerated observers .however , does not even imply the most basic predictions of relativity theory about accelerated observers , such as the twin paradox .moreover , it can be proved that even if we add the whole first - order logic theory of real numbers to is not enough to get a theory that implies ( predicts ) the twin paradox , see , e.g. , , . in the models ofin which the twin paradox is not true , there are some definable gaps in .our next assumption excludes these gaps .: : every parametrically definable , bounded and nonempty subset of has a supremum ( i.e. , least upper bound ) with respect to . in , `` definable '' means `` definable in the language of , parametrically . ''is tarski s first - order logic version of hilbert s continuity axiom in his axiomatization of geometry fitted to the language of . for a precise formulation of , see or .when is the ordered field of real numbers , is automatically true .let us extend with axiom schema : it can be proved that implies the twin paradox , see , . that requires the existence of supremum only for sets definable in instead of every set is important because it makes our postulate closer to the physical / empirical level .this is true because does not speak about `` any fancy subset '' of the quantities , but just about those `` physically meaningful '' sets which can be defined in the language of our ( physical ) theory .let us now introduce some auxiliary axioms we will use here but not listed so far .to do so , let us call a linear bijection of * trivial transformation * if leaves the time components ( i.e. , first coordinates ) of coordinate points unchanged and it fixes the points of the time axis , i.e. , the set of trivial transformation is : where denotes the * origin * , i.e. 
, coordinate point .: : inertial observers can move with any speed less than the speed of light and new inertial reference frames can be constructed from other inertial reference frames by transforming them by trivial transformations and translations along the time axis : can be represented by a matrix of quantities , the quantification in can easily turned into a quantification over quantities . ]+ \big].\end{gathered}\ ] ] the following axiom is a consequence of the principle of relativity .see , for a formalization of the principle of relativity in our first - order logic language .: : if one observer can send out a body with a certain speed in a certain direction , then any other inertial observer can send out a body with this speed in this direction .\leftrightarrow \exists b\big [ { \ensuremath{\mathsf{w}}}(k , b,{\bar x})\land{\ensuremath{\mathsf{w}}}(k , b,{\bar y})\big ] \big].\end{gathered}\ ] ] we call body * inertial body * iff there is an inertial observer according to who moves with uniform rectilinear motion : \big)\big].\end{gathered}\ ] ] let us now formulate the possibility of the existence of ftl inertial bodies . : : there is an inertial observer who can send out an ftl inertial body : .\end{gathered}\ ] ] implies that inertial observers can send out a body with arbitrary large speed in any direction if , , and are assumed : [ prop - nolim ] let . assume , , , and .then any inertial observer can send out a body with any speed in any direction : .\ ] ] the proof of prop.[prop - nolim ] is in sect.[sec - proof ] .in this section , we formulate our statement on the logical equivalence between the existence of ftl signals and the possibility of hypercomputation in special relativity as a theorem in our first - order logic language . to formulate the possibility of hypercomputation as a formula of our first - order logic language ,let us define the * life - curve * of observer according to observer as the world - line of according to _parametrized by the time measured by ,[life - curve ] formally : the * range * and * domain * of a binary relation , is defined as : the following formula of our language captures the possibility of relativistic hypercomputation in the sense used in the theory of relativistic computation .: : there are two observers a programmer and a computer and an instant in the programmer s worldline such that the computer has infinite time to compute , and during its computation the computer can send a signal to the programmer which reaches the programmer before the fixed instant : \big]\big)\big].\end{gathered}\ ] ] the following axiom ensures the existence of uniformly accelerated observers . : : it is possible to accelerate an observer uniformly : \big].\end{gathered}\ ] ] now we can state our theorem on the logical equivalence between the existence of ftl signals and the possibility of hypercomputation in special relativity : [ thm - hc ] let .then the proof of thm.[thm - hc ] is in sect.[sec - proof ] .in this section , we prove prop.[prop - nolim ] and thm.[thm - hc ] .let be an inertial observer and let . by , .if , then there is a body ( moreover , an inertial observer ) such that and by .if , then there is a body ( moreover , a light signal ) such that and by .so we only have to show that there is a body such that and if . by , there is an inertial observer who can send out an inertial body with a certain speed which is faster than the speed of light . by, the quantity structure is a real closed field , see ( * ? ? 
?* prop.10.1.2 ) . in particular , every positive number has a square root .therefore , by , we can rotate the worldview of any observer around the time axis by an arbitrary angle ; and by thm.[thm - poi ] and axiom , there is an inertial observer whose simultaneity is so slanted that he sees moving with speed .consequently , there is an inertial observer who coordinatizes inertial body moving through and .then , by , every inertial observer can send out a body moving through and .assume , , , , and .we have to prove .let be an arbitrary inertial observer .let .let be a uniformly accelerated observer such that iff and , see fig.[fig - hc ] .this observer exists and by and prop.[prop - wl ] below . by prop.[prop - sim ] below , the simultaneity of any comoving observer of at the event of their meeting goes through the origin .so by prop.[prop - nolim ] , any comoving observer of ( and thus ) can send out a body reaching before and after , i.e. , \big].\ ] ] let now be an arbitrary inertial observer and . by prop.[prop - sim ] below , .therefore , . also by prop.[prop - wl] , for all .therefore , consequently , \big].\ ] ] this completes the proof of . to prove the converse direction , assume that both and hold .let and be arbitrary observers , and be an arbitrary time instant such that holds for , and , see fig.[fig - nohc ] . since the computer can send a signal to the programmer during its life from any instant and and there are no ftl particles , denotes the * causal past * of coordinate point , i.e. , . ] according to any inertial observer . by the twin paradox theorem , see , e.g. , , ( * ? ? ?* thm.7.2.2 ) , maximizes its time if it moves along a straight line . from this fact , it is easy to see that the longest path in starting at is the line segment connecting and .since even this path is finite , has only a finite time to compute .therefore , subformula of can not be true .this contradiction proves our statement . [ figure fig - nohc : worldlines of the programmer , the computer and the ftl signal ] [ prop - sim ] let . assume .let be an inertial observer and be a uniformly accelerated observer such that iff and for some .then the simultaneity of any comoving inertial observer of at through , i.e. , , contains the origin .let be an inertial observer , be a uniformly accelerated observer , be a point in the world - line of , and be an inertial comoving observer of at . by thm.[thm - poi ] , is a poincar transformation .therefore , the simultaneity of is minkowski - orthogonal to his worldline , i.e. , \big].\end{gathered}\ ] ] therefore , we have to show that line is minkowski - orthogonal to . by , the worldline of is the tangent line of the worldline of at .therefore , by lem.[lem - tan ] below , . let be a point of different from .we have to show that .this equation is the same as , which follows straightforwardly from , i.e. , , and , i.e. , .thus line is minkowski - orthogonal to ; and this is what we wanted to prove .[ lem - tan ] assume and .the tangent line of hyperbola at its point is . axioms and imply that is a real closed field , see ( * ? ?
by tarski s theorem ,real closed fields are elementarily equivalent , see .thus something which is expressible in the language of ordered fields is true in a real closed field iff it is true in the field of real numbers .the statement of this lemma can be formalized in the language of ordered fields and it is straightforward to show it in the ordered field of real numbers .therefore , by tarski s theorem , the statement is true in every model of and ; and this is what we waned to prove .the following can be proved about life - curves , see ( * ? ? ?6.1.6 ) .[ prop - wl ] let , and be observers. then 1 . if is assumed . is a function if and are assumed and is an inertial observer .3 . and holds for all if and are inertial observers and is assumed .we have shown that , in special relativity , the possibility of hypercomputation is equivalent to the existence of ftl signals . a natural continuation is to investigate the question concerning the limits of the possibility of using ftl particles in hypercomputation in special and general relativity theories . for example , is there a natural assumption on spacetime which does not forbid the existence of ftl particles , but makes it impossible to use them for hypercomputation ? of course our construction contains several engineering difficulties .for example , the larger the distance the more difficult to aim with a signal .therefore , the computer has to calculate the speed of the ftl signal more and more accurately to ensure that the signal arrives to the programmer between events and , see fig.[fig - hc ] .thus the computer has to be able to aim with the ftl signal with arbitrary precision .galilei , g. : dialogues concerning two new sciences .macmillan , new york ( 1914 , first published in 1638 ) , translated from the italian and latin into english by henry crew and alfonso de salvio .
within an axiomatic framework , we investigate the possibility of hypercomputation in special relativity via faster than light signals . we formally show that hypercomputation is theoretically possible in special relativity if and only if there are faster than light signals . * keywords : * relativistic computation , special relativity , faster than light signals
alice , a physicist , decides to purchase a subscription to a well known online physics journal that is run by an entity we shall call bob . once the transaction is complete ,she is given a password that allows her access to the journal , a password that can be distributed at will .in particular , alice gives the password to her friend eve , who then resells it to the horde of independent physicists interested only in obtaining the journal at a cheaper price .even if alice were only to distribute the password to three of her closest friends , bob s income would be a quarter of what it should be .yet , there is no guaranteed method in which bob can prevent and detect such behavior . in this new age of information, passwords have become an essential part of everyday life , used for e - mails , bank accounts , cd - keys and a multitude of online products .yet such passwords are often saved on a customer s computer , easily vulnerable to viruses and trojan attacks .once such information is extracted by eve , she is free to mass distribute it , exacerbating everything from piracy , unlawful reselling to the more serious cases of identity theft .for example , cd - keys are often used as a method to prevent piracy , especially for programs that involve an online service . yet, nothing prevents a person from purchasing the software and copying it , writing down the key and returning it under a standard money back guarantee .classically , the only proven method to prevent password sharing is the use of one - time passwords , where a different password is required each time alice accesses the system .such a protocol creates a number of inconveniences , such as the unsuitability for group licenses , where a service is sold to a specified number of people .also , eve can steal one - time passwords at no risk , as she does not disturb alice or bob s system until she uses the password ._ can the laws of quantum mechanics guarantee the security of alice s information , even from herself ? _ so that there always exists a test that alice can make to determine if eve has taken her password before eve makes use of it . in this waybob can be fully confident that the service he sells can not be distributed or resold to multiple people . in this paper, we propose a quantum analogue of the classical password , whose security is guaranteed by the no - cloning theorem .in contrast to the classical one - time password scheme , our quantum password is not one time .also the fact that eve can be detected as soon as she takes the password , discourages any attempt to do so . while our proposal shares some similarities in aim with quantum identification schemes and quantum cryptography protocols , it has one critical distinction .we do not assume alice s station is secure , or even that she is on bob s side . in particular , all of the aforementioned protocols involve a password stored as classical information which is encoded into a quantum state for transmission .while they guarantee security during transmission , alice s password , stored at her station , can always be cloned without detection .the only previous proposal in this area was one time , and demonstrated to be equivalent to proposals based on classical correlations . 
in this paper, we argue that a password can be verified using an entirely physical process , and unlike classical documents , need not be read by a human being .therefore it is not necessary for a password to be an encoding of classical information ._ we propose a quantum password represented entirely by a quantum state that has no set basis of measurement and encodes no classical information_. since our protocol does not require alice to have any knowledge of this quantum state , the no - cloning theorem can be easily employed .thus alice can not distribute her password to anyone , short of giving it away and consequently losing her only copy .therefore there is only one quantum password existing , per person , at any one time .in addition , the quantum nature of the password means that it can not be measured exactly , and allows our protocol to retain its security even when bob s login server in vulnerable .that is , even if eve has access to bob s login server , she still can not access the restricted content without detection .this is not possible classically and is a further distinction from the one - time password protocol .the standard protocols of classical passwords can be divided into various stages : the creation of an account when alice signs up for a service provided by bob , the distribution of the password from alice to bob , and the process of password verification by bob .more explicitly , alice would purchase the password from bob , which is used for verification each time she wishes to access bob s product . in the quantum version of this protocol, all three steps are kept intact ( see fig . [ figure_quantum_passwords ] ) .the major difference is that the password itself will be a quantum object , and the process of transmission and verification are both done quantum mechanically .since the passwords featured within this protocol are not encodings of classical information , we are able to make several significant relaxations to the assumptions made in quantum cryptography . as aforementioned , we assume that alice s server is insecure , and in addition , eve also has the ability to extract information from bob s log - in server .these conditions are realistic , given that it is no harder , and often easier to install viruses into another s computer than to extract information during transmission .however we do assume that there exists an unjammable classical public channel between alice and bob , such that alice is able to contact bob should her password be compromised .when these conditions are met , the protocol is completely secure , where security is defined by the fact that eve can not take the password without disturbing the system , and hence can not escape detection .explicitly , if eve gains any information about the quantum password , there exists a finite probability that she will be detected .alice contacts bob who organizes to give alice her own quantum password ( along with a classical username which will be omitted in further discussion ) . for simplicity ,we consider that each password consists of only one qubit . in practice, the quantum password would consist of many qubits , so that the probability of a successful random guess is negligible .bob generates a random password given by - this does not violate the no - cloning theorem as the state is known to bob .bob stores his cloned qubit in quantum memory and sends the other qubit to alice through an insecure quantum channel . 
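as a small illustration of the distribution step , the sketch below draws a random single - qubit password state . the text above does not fix the distribution bob samples from , so the haar - uniform choice ( and the use of a product of independent qubits for a longer password ) is an assumption of the sketch .

```python
import numpy as np

# Sketch of the password-generation step: Bob draws a random single-qubit state,
# keeps a classical record of it, and would send the physical qubit to Alice.
# The Haar-uniform distribution used here is an assumption of this sketch; the
# text above only says the password state is randomly generated and known to Bob.

rng = np.random.default_rng()

def random_qubit():
    """Haar-random pure state a|0> + b|1>, returned as a length-2 complex vector."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def random_password(n_qubits):
    """An n-qubit password, modelled here as a list of independent random qubits."""
    return [random_qubit() for _ in range(n_qubits)]

password = random_password(8)
print(np.round(password[0], 3))                 # Bob's record of the first qubit
print(abs(np.vdot(password[0], password[0])))   # normalization check, ~ 1.0
```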
in order for alice to use her quantum password, she sends it back along the quantum channel to bob .bob then needs to compare his stored copy of alice s quantum password with the quantum password alice has sent to him .if both passwords are identical , then bob will allow alice access to his computer . to compare the two quantum passwords and , bob performs a controlled - swap operation using a fredkin gate to determine if they are identical ( see fig . [ cswap ] ) .the advantage is that this operation can be performed without explicit knowledge of either of the quantum passwords .explicitly , bob introduces an ancilla qubit and performs the operation with and acts only on the hilbert space of the ancilla qubit . is the identity operator acting only on .the controlled - swap is the operation which swaps the states and depending on the parity of the ancilla qubit .that is , it performs the swap operation and acts as the identity if the qubit is .the resulting state of the system can be written as bob will now measure the ancilla qubit in the computational basis and with outcome probabilities given by if the two quantum passwords are identical , i.e. and .thus if bob were to measure 1 , he knows with certainty that the supplied password is incorrect .however if he were to measure , he would assume the password to be correct and allow access .note that since when the states are not identical , extending the length of the quantum password will ensure that eve can not achieve success through a random guess .additionally , should the passwords be identical , this measurement process will leave the quantum passwords unchanged .therefore the quantum password is reusable and not a one - time password equivalent .the security of the quantum password scheme is due to the no - cloning theorem which states that an unknown quantum state can not be perfectly cloned . in addition, any measurement made on such a state will in general disturb it . in order for eve to steal alice s password without any chance of detection, she would be required to take it from alice s station or intercept it during transmission , clone it perfectly and then return the original password without detection .since we assume alice s station is not secure , eve is free to perform the first and final steps , but can only perform an approximation of the second .suppose eve were to make an attempt in breaching the security of the quantum password protocol .that is , she desires to create a quantum state that is a good approximation to the true password without detection from alice or bob .eve is forbidden to modify alice s password significantly , since doing so would cause alice s password to fail during the swap verification and hence reveal her actions .let us first assume eve uses the symmetric universal quantum cloning machine .given an input state , the cloning machine would output two identical cloned quantum passwords with a fidelity of with respect to the original input . for qubit states ,the two clones can be described by the same density operator where is a quantum state containing the errors of the cloning machine and is orthogonal to , i.e. . 
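to make the verification step concrete , the controlled - swap test described above can be simulated directly . the sketch below is our own illustration ( python / numpy , single - qubit passwords ) and is not part of the original protocol text ; it builds the hadamard , controlled - swap , hadamard circuit on one ancilla and two password qubits and returns the probability of measuring the ancilla in 0 , which equals 1 for identical passwords and ( 1 + |overlap|^2 ) / 2 otherwise .

```python
import numpy as np

def swap_test_p0(psi, phi):
    """toy simulation of the fredkin-gate verification for single-qubit passwords."""
    # register ordering: ancilla (most significant qubit) x psi x phi
    state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi)).astype(complex)

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    h_on_ancilla = np.kron(H, np.eye(4))

    # controlled-swap: exchange the two password qubits only when the ancilla is |1>
    swap = np.eye(4)[[0, 2, 1, 3]]
    cswap = np.zeros((8, 8))
    cswap[:4, :4] = np.eye(4)
    cswap[4:, 4:] = swap

    # hadamard, controlled-swap, hadamard, then read out the ancilla
    state = h_on_ancilla @ (cswap @ (h_on_ancilla @ state))
    return float(np.sum(np.abs(state[:4]) ** 2))   # probability of outcome 0

psi = np.array([np.cos(0.3), np.sin(0.3)])
phi = np.array([np.cos(1.0), np.sin(1.0)])
print(swap_test_p0(psi, psi))   # identical passwords: 1.0, access granted
print(swap_test_p0(psi, phi))   # different passwords: (1 + cos(0.7)**2) / 2 < 1
```

for an n - qubit password the same test can be applied qubit by qubit ( or with a single larger swap ) , and the acceptance probability of an imperfect copy then falls off exponentially with n , which is the behaviour used in the security analysis that follows .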
in order for eve to steal the password successfully , two events must occur .firstly , alice s quantum password must not be disturbed to the point where she can no longer access bob s network , and secondly , the clone she has created must be accepted by bob s server .clearly , for a quantum password consisting of a single qubit , eve chance of successfully using a clone is given by ( see fig . [ plot_quantum_passwords ] ) .thus , for a quantum password of qubits in length , eve s success rate is reduced exponentially to . in the more general case where eve utilizes two nonidentical clones from an asymmetrical quantum cloning machine , it can be demonstrated that the fidelity of the clone , and the password after measurement with respect to the original input , must satisfy the relation .thus eve s success rate is always bounded by .therefore , provided we use a quantum password of sufficient length , the security of the protocol is guaranteed against such an attack . additionally , by assuming that bob s login server is also insecure , eve has the option of cloning bob s quantum password instead .however , since bob has knowledge of what the password is , he can always check whether it has been modified by comparing it to an offline ( and thus secure ) copy of the password .should such a comparison fail at any time , he would generate a new quantum password and eliminate any information eve may have gained from the state .thus , the analysis of security for such an attack is reduced to the previous case .passwords were designed with the intention of providing secure services , in the sense that such services can only be used by authorized parties .the ability to clone and distribute a password clearly violates this intention , and , is greatly detrimental to any service which involves the use of such devices . as quantum passwordscan not be cloned , it is potentially applicable to any protocol that involves a variation of the classical password .for example : 1 .cd - keys are currently a widely used method to prevent piracy , used to validate the authenticity of a program whenever a user attempts to access some related online service .suppose alice runs a cybercafe with such programs installed .cd - key grabbers are commonly available for a visitor , eve , to steal the key . if the passwords were made quantum , any such attempts can by easily detected by quantum password verification prior to eve s departure from the cybercafe. therefore , the chance of being caught on the spot will severely discourage such attempts .credit card numbers are a variation of classical passwords .if alice were to purchase concert tickets over the phone , she would need to give the information required to access her account to a third party operator , effectively cloning her password .the operator , can now retain and distribute this knowledge at her leisure .this problem can not be circumvented using either classical or quantum cryptography since the operator , by definition , must have all the necessary knowledge to access alice s account .a quantum password prevents this problem , as the operator must return the password back to alice , and any attempt to extract information during the process could be detected .variations of the quantum password protocol can be implemented for atm transactions where we need not assume security of either the client alice , or even the login server , i.e. 
the atm itself .alice s quantum bank card stores a quantum state , which is matched by a clone on the bank s central server . upon a transaction request, bob sends his quantum password to the appropriate atm and performs the verification protocol .even if eve had access to the atm , she would still be unable to use it without detection .classical solutions , such as the s / key one - time password scheme , based on the computational difficulty of inverting the cryptographic hash function , are not formally secure .quantum passwords adopt a purely quantum approach to password identification , and in doing so , provides security that no classical protocol , or quantum protocol that involves the encoding of classical information , can offer. the implementation of quantum passwords will require reliable quantum memory and quantum gates . however , there is no explicit need for universal quantum processors and is a protocol that can be achieved in the medium term . in this paper, we have outlined the idea of quantum passwords in the simplest possible implementation using a string of qubits , though in reality , this need not be so .one can note that the verification process does not assume that the quantum password lives in any specific hilbert space , and thus , one can envision encoding such passwords in higher dimensions . for example , a continuous variable version would be able to take advantage of the high detection efficiencies and higher information bandwidths .future work could also be done in the analysis of how information loss due to noise or decoherence would affect the security of the protocol . in this case, bob would need to take into account the natural losses during storage and transmission , and accept alice s password provided a certain proportion of the qubits matched according to the swap protocol .of course , this leeway would give a greater chance of using a cloned password , and one would be interested in the region where the protocol remains secure against such losses .in conclusion , we have introduced a quantum password scheme whose security is guaranteed by the no - cloning theorem . when implemented, such a protocol would allow consumers to purchase items over the phone without worry , knowing that they could check if the operators on the other end have copied down their credit card details .it would give subscription services the peace of mind that the passwords they sell can not be distributed over the internet .it also bestows the added security when the login server itself becomes vulnerable , such as an atm .such security is made possible since our password is not a quantum encryption of classical information , but simply a quantum state . in short ,we take advantage of the fact that passwords are meant to be used , not read .
a quantum password is a quantum mechanical analogue of the classical password . our proposal is completely quantum mechanical in nature , i.e. at no point is the password stored or manipulated as classical information . we show that , in contrast to quantum protocols that encode classical information , we are able to prevent the distribution of reusable passwords even when alice actively cooperates with eve . this allows us to confront and address security issues that are unavoidable in classical protocols .
being one of the simplest polyatomic molecules and present in many environments , including the interstellar medium , brown dwarfs and solar system planets , nh is a very important molecule for astronomers .it also has several applications in industry , such as the reduction of nox emissions in smoke stacks and the manufacture of hydrogen cyanide by the andrussow process .this has motivated over 140 experimental studies on its spectrum , recent work includes high and low temperature studies in various spectral regions .a comprehensive compilation of measured nh rotational and ro - vibrational spectra can be found in a recent marvel study .the marvel ( measured active rotation - vibration energy levels ) algorithm simultaneously analyses all available assigned and labelled experimental lines , thus yielding the associated energy levels .the recent study for nh analysed 29,450 measured transitions and yielded 4961 accurately - determined energy levels .the critically reviewed and validated high resolution experiments employed by this study , cover the region 0.7 - 17,000 with a large gap between 7000 15,000 .in fact there is an overall lack of detailed and accurate information for nh transitions in this region .the band model parameters of irwin _ et al _ cover the region 400 to 11000 , but was intended for analysis at low spectral resolution so the measurements were obtained at a spectral resolution of only 0.25 and not assigned .the hitran database , a major source of experimental data , contains no information for nh above 7000 .a number of variational line lists are available for nh . in this work we usebyte which is a variationally computed line list for hot nh that covers the range 0 - 12,000 .byte is expected to be fairly accurate for all temperatures up to 1500 k ( 1226 ) .it comprises of 1 138 323 251 transitions constructed from 1 373 897 energy levels lying below 18 000 .it was computed using the nh3 - 2010 potential energy surface , the trove ro - vibrational computer program and an _ ab initio _dipole moment surface .however this line list is known to be less accurate for higher wavenumber transitions , and assigned high resolution laboratory spectra in poorly characterised regions is needed .the reason for the void between 7000 and 15,000 is the complexity of the nh spectrum making analysis of experimental spectra using the established method of fitting hamiltonians tricky . in the present work we take on this challenge by employing the same technique used previously to study high temperature spectra , to study the room temperature , near infrared spectrum of nh in the 7400 8600 region .this region is of present interest .for example there are peaks in nh opacity between 1.210 m and 1.276 m which are important features in late type t dwarfs . 
in wavenumbersthis region is 7836 - 8265 which is covered by an unanalysed 1980 room temperature spectrum in the kitt peak archive .this spectrum was recently used by campargue _et al _ to identify residual nh lines in their ultra - long pathlength water spectra .this article has the following structure .section 2 describes the kitt peak spectrum and the construction of the experimental line list .section 3 gives an overview of the assignment procedure .section 4 comes in two parts .the accuracy of byte is assessed in section 4.1 by a direct comparison with the experimental line list .a summary of all assignments and new experimental energies is presented in section 4.2 .finally section 5 gives our conclusions and discusses avenues for further work .the kitt peak data center provides open access to laboratory fourier transform ( ft ) spectra recorded at kitt peak .the room temperature laboratory absorption spectrum of nh analysed by the present work ( 800407r0.004 ) was recorded by dr .catherine de bergh using a one metre ft spectrometer .the spectrometer in question was a permanent instrument on the mcmath solar telescope , the largest solar telescope in the world , and was used for both solar and laboratory analysis . in 2012the instrument was transferred to old dominion university .the spectrum was recorded at a resolution of 0.01 and generated from an average of 12 scans . some key information provided in the fits headeris presented in table [ tab : fits ] .the first and last wavenumber are listed as 5797 and 9682 respectively but our study focusses on the region 7400 - 8600 .figure [ fig : overview ] gives an overview of the spectrum ..key experimental information provided in the fits header downloaded from the kitt peak archive .[ cols= " < , < " , ] [ tab : expe2 ]in this paper we present an experimental line list for , and an analysis of , a 35 year old room temperature spectrum of nh in the region 7400 8600 .the centers and intensities of 8468 ammonia lines were retrieved using a multiline fitting of the spectrum . for isolated lines of intermediate strength the accuracy of retrieved line position and intensitiesis estimated to be of the order 3 and 15 % respectively . although it should be noted that the uncertainty in the retrieved intensity may significantly exceed 15% for lines stronger that cm / molecule .a comparison between the measurements and byte shows in general good agreement but there are shifts in line position of up to 3 throughout the region and experimental line intensities are only reproduced with 20 - 55 % at best .work towards a new , more accurate , hot nh line list is currently being carried out as part of the exomol project .the use of byte and marvel has allowed the assignment of 2474 lines , 1343 by combination differences and a further 1131 by the method of branches . 
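the combination - difference step described above lends itself to a simple automated search : two observed lines reaching the same upper level from different marvel lower levels must give the same upper - state energy e_lower + nu . the python sketch below is only our schematic illustration of that idea ; the line positions , level labels and the 0.005 tolerance are invented placeholders rather than values from this work .

```python
import numpy as np
from collections import defaultdict

def combination_difference_assign(nu_obs, marvel_lower, tol=0.005):
    """nu_obs: observed line centres (cm-1); marvel_lower: {label: lower-state energy (cm-1)}.
    returns candidate upper-state energies confirmed from at least two lower levels."""
    buckets = defaultdict(list)                       # crude binning of e_lower + nu
    for label, e_low in marvel_lower.items():
        for nu in np.asarray(nu_obs, dtype=float):
            e_up = e_low + nu
            buckets[int(round(e_up / tol))].append((label, e_up))
    confirmed = []
    for group in buckets.values():
        if len({label for label, _ in group}) >= 2:   # reached from two distinct lower levels
            confirmed.append(float(np.mean([e for _, e in group])))
    return sorted(confirmed)

# toy usage with placeholder data: the first two lines reach the same upper level
lines = [8005.003, 7985.003, 8123.777]
lower = {"level_a": 0.000, "level_b": 20.000}
print(combination_difference_assign(lines, lower))   # confirms an upper level near 8005.003
```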
in total 1692 new experimental energies between 7000 - 9000 have been derived . assignments associated with strong bands , with tens of combination - difference and branch assignments , should all be reliable , as these bands are well characterised and have stable observed minus calculated differences throughout the band . the remaining assignments should also be safe , as all results from the two assignment procedures have proven very consistent , though these are more tentative . we note that a room temperature spectrum of nh in the 9000 - 10,000 region , also measured by dr catherine de bergh in 1980 , is available from the kitt peak archive . we plan to make this a focus of future work , although it is to be anticipated that current line lists will be less reliable at these higher wavenumbers . this work was supported by a grant from energinet.dk project n. 2013 - 1 - 1027 , by ucl through the impact studentship program and the european research council under advanced investigator project 267219 .
a fourier transform ( ft ) absorption spectrum of room temperature nh in the region 7400 - 8600 is analysed using a variational line list and ground state energies determined using the marvel procedure . the spectrum was measured by dr catherine de bergh in 1980 and is available from the kitt peak data center . the centers and intensities of 8468 ammonia lines were retrieved using a multiline fitting procedure . 2474 lines are assigned to 21 bands , providing 1692 experimental energies in the range 7000 - 9000 . the spectrum was assigned by the joint use of the byte variational line list and combination differences . the assignments and experimental energies presented in this work are the first for ammonia in the region 7400 - 8600 , considerably extending the range of known vibrationally excited states . room temperature , ammonia , absorption intensities , ftir spectroscopy , experimental energies , byte , line assignments
the problem of establishing a connection between the kolmogorov - sinai ( ks ) entropy and the conventional entropy expressed in terms of probability density is an interesting problem that is attracting some attention in literature. early work on this subject goes back to the discussion of goldstein and penrose : these authors , almost twenty years ago , established a connection between the ks entropy and a coarse - grained version of the distribution density entropy .the work of ref. is based on a formal and rigorous mathematical treatment which for this reason might have eluded the attention of physicists working on this subject .thus we restate the problem using intuitive arguments which also make it possible for us to account for the more recent literature on the subject .in fact , our heuristic treatment will allow us to relate the results of the more recent work of latora and baranger to the earlier work of zurek and paz .in addition to revisiting the problem of how to make the ks entropy emerge from a nonequilibrium dynamic picture , we shall touch also the intriguing problem of whether a thermodynamic perspective has to rest on the adoption of trajectories , as implied by the concept itself of ks entropy , or on the use of probability densities , advocated with strong arguments by petrosky and prigogine .it is convenient to stress that the ks entropy is a property of a single trajectory .the phase space is divided into cells , each cell being assigned a given label .then we define a sequence of symbols by means of a single trajectory : the sequence is determined assigning to any time step the label of the cell where the trajectory lies at that time step . the trajectory is supposed to be large enough as to yield reliable values for the probabilities determined through the numerical frequencies .this means that we fix a window of size , and we move this window along the sequence . for any window position a string of symbols determined .moving the window of fixed size along the infinite sequence generated by the trajectory we have to evaluate how many times the same string of symbols appears , thereby leading us to determine the probability .the ks entropy is then defined by where is the conventional shannon entropy of the window of size defined by .\label{shannonentropy}\ ] ] it is evident therefore that the ks entropy rests on trajectories , and , more specifically , it implies the adoption of only one trajectory of virtually infinite length .the ks entropy is very attractive because its value turns out to be independent of the repartition into cells of the phase space , due to the crucial role of the so called generating partitions . in the specific case where a natural invariant distribution exists, it is shown that with .note that denotes the coordinate of a multidimensional phase space , is the natural invariant distribution and is a local lyapounov coefficient , with , being the dimension of the system under study . from eq.([pesintheorem ] ) we see that , as earlier pointed out , the ks entropy is independent of the repartition into cells . 
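to make the windowed definition of eq.([definition ] ) concrete , the block entropies of eq.([shannonentropy ] ) can be estimated directly from a long symbolic trajectory . the sketch below is our own illustration and not part of the original analysis : it uses the logistic map at r = 4 with the two - cell partition x < 1/2 , x >= 1/2 , whose ks entropy is known to equal ln 2 , and estimates the entropy rate as the increment of the block entropy with window size .

```python
import numpy as np
from collections import Counter

def block_entropy(symbols, l):
    """shannon entropy (nats) of length-l windows of the symbol sequence."""
    counts = Counter(tuple(symbols[i:i + l]) for i in range(len(symbols) - l + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

# symbolic trajectory from the logistic map at r = 4 (illustrative choice of system)
x, n = 0.4, 200000
traj = np.empty(n)
for i in range(n):
    traj[i] = x
    x = 4.0 * x * (1.0 - x)
symbols = (traj >= 0.5).astype(int)

for l in range(1, 9):
    h = block_entropy(symbols, l + 1) - block_entropy(symbols, l)
    print(l, h)            # approaches ln 2 ~ 0.693 as the window grows
```

the estimate is insensitive to the choice of partition as long as the partition is generating , in line with the remark above .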
the original definition of eq.([definition ] ) , with thought of as time , means that the ks entropy , as a property of a single trajectory , is the rate of entropy increase per unit of time .however , since the single trajectory under examination is infinitely long , and explores in time all the phase space available , the ks entropy can also be expressed in the form of an average over the equilibrium distribution density , without any prejudice for the single trajectory nature of this `` thermodynamic '' property . according to petrosky and prigogine , on the contrary, the connection between dynamics and thermodynamics implies the use of the liouville equation where denotes both the classical and the quantum liouville operator , and is the nonequilibrium distribution density .the reason for this choice is that the analysis of the liouville operator , through the `` rigged hilbert '' space , allows the appearance of complex eigenvalues which correspond to irreversibility , and to the collapse of trajectories as well .this is the reason why distribution densities are judged to be more fundamental than trajectories . in this paperwe limit our analysis to the special case where dynamics are generated by maps rather than by hamiltonians .we do not address the difficult issue of discussing the thermodynamic limit which is the subject of very interesting recent discussions , and where , according to lebowitz , ergodicity and mixing are neither necessary nor sufficient to guarantee the connection between dynamics and thermodynamics .we consider the case of low - dimension chaos , where probability emerges as a consequence of sensitivity to initial conditions . even in this case , however , according to the perspective established by petrosky and prigogine , probability densities are more fundamental than trajectories .the readers interested in knowing more about this perspective , entirely based on probability density , should consult the illuminating work of driebe . in this casethe counterpart of eq.([liouville ] ) becomes where is referred to as frobenius - perron operator .of course , the operator of eq.([liouville ] ) has to be identified with . according to the traditional wisdom, the frobenius - perron operator is expected to make the distribution densities evolve in the same way as that resulting from the time evolution of a set of trajectories with initial conditions determined by the initial distribution density : the known cases of discrepancy between the two pictures are judged to be more apparent than real . nevertheless , even in the case of invertible maps , the birth of irreversibility can be studied using the same perspective as that adopted for hamiltonian systems , with eq.([liouville ] ) replaced by eq.([frobenius ] ) , and so using again probability densities rather than trajectories . however , we attempt at digging out the ks entropy from eq.([frobenius ] ) , and this purpose forces us to formulate a conjecture on how to relate entropy to .a plausible choice seems to be d{\bf x}. \label{gibbs}\ ] ] we share the view of goldstein and penrose who consider the ks entropy to be a nonequilibrium entropy . in other words , we may hope to derive the ks entropy from the time derivative of of eq.([gibbs ] ) . as goldstein and penrose do , to realize that purpose we have to address a delicate problem : in the case of invertible maps , is time independent , thereby implying a vanishing ks entropy . 
yet ,the baker s transformation , which is a well known example of invertible map , thereby yielding a time independent , is shown to yield a ks entropy equal to , a fact suggesting a steady condition of entropy increase .we plan to discuss all this with the joint use of heuristic arguments and of the rigorous theoretical tools of ref. .the present paper uses as a paradygm of invertible map the two - dimensional baker s transformation , depending on two coordinates , and , the former corresponding to dilatation and the latter to contraction . using this prototype for invertible dynamics, we aim at proving that the adoption of the distribution density in the case of invertible chaotic maps would lead to an increasing process of fragmentation , depending not only , as the ks entropy does , on the positive lyapounov coefficient , but also on the negative one .the adoption of a coarse graining has the effect of quenching the action of the negative lyapunov coefficient , thereby allowing the ks entropy to show up .then , to go beyond these heuristic arguments we make a trace on the variable , namely , on the process responsible for contraction , and we focus our attention on the contracted dynamics .this is equivalent to that produced by the bernoulli shift map . hereroom is only left for dilatation and the problem can be solved with a rigorous mathematical method , without using trajectories .the outline of the paper is as follows . in sectionii we shall illustrate our heuristic picture . in section iii we shall address the problem by means of a rigorous treatment resting on the theoretical tools provided by driebe . in sectioniv we shall draw some conclusions .some delicate mathematical problems behind the theoretical calculations of section iii are detailed in appendix .note that the cases studied by latora and baranger are two - dimensional , and our discussion here refers to a two - dimensional case , too .we have in mind the backer s transformation and .we denote by the number of cells occupied at a given time .note that , where the symbol denotes the total number of cells into which we have divided the phase space .our heuristic approach is based on the following assumptions .\(i ) at the initial time only cells are occupied .\(ii ) at all times the trajectories are equally distributed over the set of occupied cells . this means \(iii ) we denote by the positive lyapounov coefficient , and we set all these three assumptions have been borrowed from the recent work of ref. . the joint use of all them yields which corresponds to the kolmogorov thermodynamical regime .note that the positive lyapounov coefficient in the case of the baker s transformation is shown to be : note also that according to the arguments of section i , the connection with the ks entropy is established through the time derivative of .thus , we conclude that which corresponds to deriving the ks entropy from the distribution density picture .this kolmogorov regime is not infinitely extended .it has an upper bound , given by the fact that when equilibrium is reached , even in the merely sense of a coarse - grained equilibrium , then the entropy stops increasing .an estimate of this time is obviously given by the solution of the following equation which yields the following saturation time furthermore a lower bound of validity exists , which will be easily estimated with very simple arguments . 
if the initial distribution includes a large number of cells and the size of this distribution along the coordinate is , and the size of the cells is with , then it is evident that , in spite of the coarse graining the total number of cells occupied remains the same for a while .this time is easily estimated using the equation which in fact defines the time at which the distribution volume , and consequently , the system entropy starts increasing .this time is denoted by the symbol and reads we denote by the volume of the distribution density at time and by the volume of the phase space , thereby implying that .we note that where is the total volume of the phase space and is the initial volume of the distribution density .thus the kolmogorov regime shows up in the following time interval the time duration of the regime of validity of the kolmogorov regime can be made infinitely extended by making the cell size infinitely small .this means that the conflict between the ks entropy prescription and the time independence of can be bypassed by focusing our attention on the intermediate region , whose time duration tends to infinity with .we note that a choice can be made such that , with .this means the time duration of the kolmogorov regime can be made times larger than the time duration of the transition regime . for time durations become infinite , thereby showing that a kolmogorov regime of infinite time duration can be obtained at the price , however , of waiting an infinitely long time for the entropy to increase .the infinite waiting time before the regime of entropy increase fits the observation that the gibbs entropy of an invertible map is constant .the linear entropy increase showing up `` after this infinite waiting time '' allows the emergence of the ks entropy from within the probability density perspective. this kind of coarse graining might be criticized as corresponding to arbitrary choices of the observer .it is interesting to remark that there exists another interesting form of coarse graining , produced by weak stochastic forces .both in the case where this stochastic forces mimic the interaction with the environment or in the case where it happens to be an expression of spontaneous fluctuations this kind of coarse graining can be regarded as being produced by nature . herewe limit ourselves to remarking that according to zurek and paz these stochastic forces contribute a fluctuation - dissipation process mimicking the interaction between the system of interest and the environment .these authors studied the inverted stochastic oscillator where the friction and the stochastic force are related to one another by the standard fluctuation - dissipation relation it is interesting to remark that the proper formulation of the second principle implies that the entropy of a system can only increase or remain constant under the condition of no energy exchange between the system and its environment . 
in the case of eq.([standard ] ) the energy exchange between system and environment is negligible for any observation made in the time scale to ensure that the system entropy increase to take place with no energy exchange between system and its environment zurek and paz [ 3 ] set the condition of eq.([tempo ] ) and this , in turn , allows them to neglect the friction term in eq.([paolo18 ] ) .then , these authors adopted the modes and which make it possible for them to split eq.([inverted ] ) into and let us imagine the initial distribution density as a rectangle of size along the direction and along the direction .we keep denoting by the distribution volume at a given time .thus the volume of the initial distribution is in the absence of the stochastic force , eqs.([splitting1 ] ) and eqs.([splitting1 ] ) result in an exponential increase and an exponential decrease , with the same rate , respectively .consequently , the liouville theorem is fulfiled . in the presence of stochastic force, we work as follows . in the former equation , with increasing beyond any limit , the weak stochastic force can be neglected .this is not the case with the latter equation .in fact , is a contracting variable in the absence of the stochastic force . in the presence of the stochastic force the minimum size of the distribution along given by this minimum size is reached in a time determined by the solution of the following equation yielding due to the fact that deterministic chaos is simulated by zurek and paz by means of an inverted parabola , these authors did not consider the entropy saturation effects .however , it is straigthforward to evaluate the saturation effect with heuristic arguments concerning the case where the total volume of the phase space has the finite value . from the time on , the distribution volume increases exponentially in time with the following expression thus , the saturation time is now given by .\label{saturationtime2}\ ] ] using eq.([initialvolume ] ) we can write this saturation time as , \label{alternative}\ ] ] which coincides with eqs.([validityregime ] ) and ( [ saturationtime ] ) . in conclusion , it seems that the emergence of a kolmogorov regime is made possible by the existence of a form of coarse graining , and that it is independent of whether the coarse graining is realized by the division into cells or by a weak stochastic force .this property seems to make less important the discussion of whether the stochastic force is of environmental origin or rests on some kind of extension of the current physical laws .however , we have to point out that the situation significantly changes , if we move from a strongly to a weakly chaotic classical system . as a relevant example , let us refer ourselves to the work of ref. .the authors of this work study the asymptotic time limit of a diffusion process generated by using an intermittent map as a dynamic generator of diffusion .if these dynamics are perturbed by a white noise , a transition is provoked , at long times , from anomalous to normal diffusion .when the only source of random behavior is given by the sporadic randomness of the intermittent map , the long - time limit is characterized by lvy statistics , a physical condition in a striking conflict with the condition of gaussian statistics produced by the action of fluctuations . 
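returning to the strongly chaotic case , the three - regime picture of this section ( an initial plateau , a linear kolmogorov regime with slope ln 2 , and saturation ) can be checked with a few lines of code . the sketch below is our own illustration ; the ensemble size , the cell size and the initial spread are arbitrary choices and play the role of the coarse graining discussed above .

```python
import numpy as np

def bakers_map(x, y):
    # baker's transformation: stretch along x, contract along y
    left = x < 0.5
    return (np.where(left, 2.0 * x, 2.0 * x - 1.0),
            np.where(left, 0.5 * y, 0.5 * y + 0.5))

rng = np.random.default_rng(1)
n_traj, eps = 200000, 1.0 / 256              # ensemble size and cell side (arbitrary)
x = 0.30 + 0.1 * eps * rng.random(n_traj)    # sharp initial blob, smaller than one cell
y = 0.70 + 0.1 * eps * rng.random(n_traj)

for t in range(16):
    occupied = set(zip((x / eps).astype(int), (y / eps).astype(int)))
    print(t, np.log(len(occupied)))          # flat, then ~ln 2 per step, then saturation
    x, y = bakers_map(x, y)
```

changing the cell size or the initial spread shifts the onset and saturation steps in the way the estimates above suggest .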
herewe limit our attention to the case of strong chaos where the two distinct sources of coarse graining produce equivalent effects .it might be of some interest for the reader to compare the coarse - graining approach of this section to the more formal method recently adopted by fox to deal with the same problem .it is interesting to stress that to make the regime of validity of the kolmogorov regime as extended as possible we must make the ratio as large as possible ( virtually infinite ) .this means that we have to choose an initial distribution density so sharp as to become apparently equivalent to a single trajectory .this seems to be an attractive way of explaining why in this condition the ks entropy is recovered , since , as stressed in section i , the ks entropy is a single trajectory property .however , in accordance with the authors of refs. we must admit that there exists a deep difference between a trajectory and a very sharp distribution .the latter is stable and robust , while the former is not . in section iiiwe shall show that the rigorous derivation of the kolmogorov regime requires a non trivial mathematical procedure , and the mathematical effort to make from this side , to derive the ks entropy , serves the useful purpose of proving that the ks entropy of a trajectory is a really wise way of converting into advantages the drawbacks of the trajectory instability .this section is devoted to a rigorous discussion resting only on the theoretical tools described in ref. for a genuine probability density aproach .according to mackey , if we rule out the possibility that the laws of physics are misrepresented by invertible dynamic prescriptions , there are only two possible sources of entropy increase .the first is the coarse graining discussed in section ii .the second is the adoption of reduced equation of motion , obtained by a trace over `` irrelevant '' degrees of freedom .in fact here we study the bernoulli shift map , the frobenius - perron equation of this map is defined by . \label{end2}\ ] ] it is straigtforward to show that the frobenius - perron operator of eq.([end2 ] ) stems from the contraction over the variable of the baker s mapping , acting in fact on the unit square of two - dimensional space ( x , y ) ( see , for instance ref. ) .it is shown that the ks entropy of the baker s transformation is well defined and turns out to be the same as that of the bernoulli shift map , namely .intuitively , this suggests that the main role of coarse graining is that of making inactive the process of contraction , and with it the negative lyapunov coeficient .this intuitive argument seems to be plausible and raises the interesting question of how to prove it with a rigorous approach .this is equivalent to deriving the kolmogorov regime using a rigorous mathematical method rather than the heuristic arguments of section ii .we must observe again that this is made possible by the fact that the tracing has changed the originally invertible map into one that is not invertible . to address this issuewe follow the prescription of ref. .first of all , we express the distribution density at time under the form given by ref . which reads : .\label{mauro1}\ ] ] note that , are the bernoulli polynomials and denotes the -th order derivative of with respect to .hereby , we shall show how to derive from the previous one more tractable expression , which will be checked in appendix . 
in the case of an initial condition close to equilibrium , resulting from the sum of the equilibrium distribution and the first `` excited '' state , it is easy to prove that the entropy of eq .( [ gibbs ] ) reaches exponentially in time the steady - state condition .this suggests that the kolmogorov regime , where the entropy is expected to be a linear function of time , must imply an initial condition with infinitely many `` excited '' states . to deal with a condition of this kind it is convenient to express eq.([mauro1 ] ) in an equivalent form given by where ] . by plugging eq.([mauro7 ] ) within eq.([mauro9 ] ) we obtain - \frac{\alpha z}{exp(\alpha z ) -1 } .\label{mauro10}\ ] ] in the limiting case this exact prediction is approximated very well by it indicates that a sharp initial distribution makes the system evolve according to the ks entropy , with no regime of transition from mechanics to thermodynamics .the third regime of ref. is still present .it is straigtforward to show that the saturation time resulting from eq.([mauro10 ] ) is the same as that of eq.([saturationtime ] ) in the case .in fact using eq.([volumereplacingnumber ] ) and we obtain that , where is the size of the initial distribution .the size of the initial distribution of eq.([mauro7 ] ) , for , becomes prportional to .thus , in accordance with eq . ([ saturationtime ] ) .this is an elegant result , involving a modest amount of algebra .however , it refers to an initial distribution located at .we want to prove that this is a general property , independent of where the initially sharp distribution is located , at the price , as we shall see , of a more complicated mathematical treatment . for this purposewe study the case where the distribution shape is the lorentzian curve : with being a generic point of the interval ] denotes the integer part of . to derive a more tractable expression we note that in the limiting case of very small , the quantities ] in the first term , and for the possible contribution -\left[\frac{1-x_{0}}{z } \right ] = -\left[\frac{1-x_{0}}{z } \right] ] , admits a treatment based on its fourier transform if it is thought of as being defined on the whole interval ] .similarly we can define the fourier series of this function assuming it to be periodically repeated all over the real axis .we shall adopt this approach throughout the whole appendix .let us check eq.([mauro6 ] ) first .we note that the argument of the density of eq.([mauro6 ] ) can be arbitrary with the only condition that the variable is in the interval [ 0,1 ] . furthermore , since eq.([mauro6 ] ) is derived from eq.([mauro5 ] ) , it is enough for us to prove that eq.([mauro5 ] ) is properly normalized and is a solution of the frobenius - perron equation of eq.([end2 ] ) . in conclusion, we have to check : first we check that this equation is norm conserving or : to do so , we integrate eq.([mau5 ] with respect to the variable x from to .thus we obtain : this means the expression the integral over is by definition the fourier transform of , yielding thereby due to the fact that the initial condition is assumed to be normalized . 
andafter a little algebra we get : = \frac{z}{2 } \int_{-\infty}^{+\infty}e^{-i\omega zx/2 } \hat{\rho}(\omega ) \frac{\left[e^{-i\omega } -1\right]\left[e^{-i\omega zx/2}+1\right]}{e^{-i\omega z } -1 } \frac{d\omega}{2\pi}.\ ] ] by decomposing the denominator as follows \left[e^{-i\omega z/2 } + 1\right]}\ ] ] and simplifying , we obtain = \frac{z}{2 } \int_{-\infty}^{+\infty}e^{-i\omega zx/2 } \hat{\rho}(\omega ) \frac{e^{-i\omega } -1}{e^{-i\omega z/2 } -1 } \frac{d\omega}{2\pi}\ ] ] that coincide with eq.([contr4 ] ) .now we shall test directly the eq.([mauro8 ] ) using eq.([mauro1 ] ) . plugging directly the initial distribution : into eq.([mauro1 ] ) we obtain : \nonumber \\= \frac{\alpha}{1-e^{-\alpha } } \sum_{j=0}^{\infty}e^{-\gamma_{j}t}\frac{b_{j}(x)}{j ! } [ ( -\alpha^{j-1})e^{-\alpha } -(-\alpha^{j-1})].\end{aligned}\ ] ] using the bernoulli polynomials generatrix we get that coincides with eq.([mauro8 ] ) obtaned using the formula eq.([mauro6 ] ) finally let us to check the norm conservation of eq.([rhot ] ) . without any approximation , using the value of of eq.([normal ] ) , and a little algebra , we get : ^{-1 } \nonumber \\ & \cdot \int_{0 } ^{1 } z\gamma\sum _ { n=0}^{\infty } \left[\frac{1 } { \left(zx+zn - x_{0}\right)^{2}+\gamma^{2 } } -\frac{1 } { \left(zx+zn - x_{0}+1\right)^{2}+\gamma^{2}}\right]dx \nonumber \\ & = \left[\arctan\left ( \frac{x_{0}}{\gamma}\right)+ \arctan\left(\frac{1- x_{0}}{\gamma}\right)\right]^{-1}\cdot \sum^{+\infty}_{n=0}\left [ \arctan\left(\frac{xz+nz - x_{0}}{\gamma}\right)- \arctan\left(\frac{xz+nz+1-x_{0}}{\gamma}\right)\right]^{x=1}_{x=0}.\end{aligned}\ ] ] this yields : ^{-1 } \nonumber \\ & \cdot\sum^{+\infty}_{n=0}\biggl [ \arctan\left(\frac{z+nz - x_{0}}{\gamma}\right)- \arctan\left(\frac{z+nz+1-x_{0}}{\gamma}\right)-\arctan\left(\frac{nz - x_{0}}{\gamma}\right)+ \arctan\left(\frac{nz+1-x_{0}}{\gamma}\right)\biggr].\end{aligned}\ ] ] examining the expression we can note that only the terms with survive in the sum but that terms simplify with the external factor ( the constant ) so finally checking that eq.([rhot ] ) fulfils the frobenius - perron operator involves some extended but straightoforward algebra .
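as a numerical complement to these analytical checks , the linear entropy growth obtained in section iii can be reproduced by iterating the frobenius - perron operator of eq.([end2 ] ) on a discretized density and tracking the entropy of eq.([gibbs ] ) . the sketch below is our own illustration ; the grid size and the width of the initial gaussian are arbitrary choices .

```python
import numpy as np

M = 2 ** 14                                     # grid cells covering [0, 1)
x = (np.arange(M) + 0.5) / M
rho = np.exp(-0.5 * ((x - 0.37) / 1e-3) ** 2)   # sharp initial distribution
rho /= rho.mean()                               # normalize so that int rho dx = 1

def fp_step(rho):
    # (u rho)(x) = ( rho(x/2) + rho((x+1)/2) ) / 2 , evaluated on the grid
    j = np.arange(rho.size) // 2
    return 0.5 * (rho[j] + rho[j + rho.size // 2])

def gibbs_entropy(rho):
    p = rho[rho > 0.0]
    return -np.sum(p * np.log(p)) / rho.size    # - int rho ln rho dx

for t in range(12):
    print(t, gibbs_entropy(rho))                # rises by ~ln 2 per step, saturates at 0
    rho = fp_step(rho)
```

starting from a sharp distribution the printed entropy increases by roughly ln 2 per step , the ks entropy of the map , and then saturates at the equilibrium value , in agreement with the behaviour derived above .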
we study the problem of entropy increase of the bernoulli - shift map without recourse to the concept of trajectory , and we discuss whether , and under which conditions , the distribution density entropy coincides with the kolmogorov - sinai entropy , namely , with the trajectory entropy .
screen content images refer to images appearing on the display screens of electronic devices such as computers and smart phones , .these images have similar characteristics as mixed content documents ( such as a magazine page ) .they often contain two layers , a pictorial smooth background and a foreground consisting of text and line graphics .the usual image compression algorithms such as jpeg2000 and hevc intra frame coding may not result in a good compression rate for this kind of images because the foreground consists of sharp discontinuities . in these cases ,segmenting the image into two layers and coding them separately may be more efficient .the idea of segmenting an image for better compression was proposed for check image compression , in djvu algorithm for scanned document compression and the mixed raster content representation .foreground segmentation has also applications in medical image segmentation , text extraction ( which is essential for automatic character recognition and image understanding ) , biometrics recognition , and automatic texture segmentation for use in mobile games - .screen content and mixed document images are hard to segment , because the foreground may be overlaid over a smoothly varying background that has a color range that overlaps with the color of the foreground .also because of the use of sub - pixel rendering , the same text / line often has different colors . even in the absence of sub - pixel rendering , pixels belonging to the same text / line often have somewhat different colors .different algorithms have been proposed in the past for foreground - background segmentation in still images such as hierarchical k - means clustering in djvu , which applies the k - means clustering algorithm on a large block to obtain foreground and background colors and then uses them as the initial foreground and background colors for the smaller blocks in the next stages , shape primitive extraction and coding ( spec ) which first classifies each block of size into either pictorial block or text / graphics based on the number of colors and then refines the segmentation result of pictorial blocks , by extracting shape primitives and then comparing the size and color of the shape primitives with some threshold , and least absolute deviation fitting .there are also some reent algorithms based on sparse decomposition proposed for this task - .most of the previous works have difficulty for the regions where background and foreground color intensities overlap and some part of the background will be detected as foreground or the other way . the proposed segmentation algorithm in this work uses robust regression techniques to overcome the problems of previous segmentation algorithms , which to the best of our knowledge has not been investigated previously .we model the background part of the image with a smooth function , by fitting a smooth model to the intensities of the majority of the pixels in each block .any pixel whose intensity could be predicted well using the derived model would be considered as background and otherwise it would be considered as foreground .ransac algorithm is used here which is a powerful and simple robust regression technique . 
to boost the speed of the algorithm , we also proposed some pre - processing steps which first check if a block can be segmented using some simpler approaches and it goes to ransac only if the block can not be segmented using those approaches .the proposed algorithm has various applications including , text extraction , segmentation based video coding and medical image segmentation .the structure of the rest of this paper is as follows : section ii presents the proposed robust regression technique for foreground - background segmentation . the final segmentation algorithm that includes both the core robust regression algorithm as well as preprocessing stepsis discussed in section iii .section iv provides the experimental results for these algorithms . andfinally the paper is concluded in section v.we assume that if an image block only consists of background , it should be well represented with a few smooth basis functions . by well representation we mean that the approximated value at a pixel with the smooth functions should have an error less than a desired threshold at every pixel .but if an image block has some foreground pixels overlaid on top of a smooth background , and these foreground pixels occupy a relatively small percentage of the block , then the fitted smooth function will not represent these foreground pixels well . to be more specific ,we divide each image into non - overlapping blocks of size , and represent each image block , denoted by , with a smooth model , where and denote the horizontal and vertical axes and denote the parameters of this smooth model .two questions should be addressed here , the first one is how to find a suitable smooth model , and the second one is how to find the optimal value of parameters of our model such that they are not affected by foreground pixels , especially if we have many foreground pixels .we order all the possible basis functions in the conventional zig - zag order in the plane , and choose the first basis functions . for the first question , following the work in we use a linear combination of some basis functions , so that the model can be represented as .then we used the karhunen - loeve transform on a set of training images that only consist of smooth background to derive the optimum set of bases .the derived bases turned out to be very similar to 2d dct basis functions . because of that we decided to use a linear combination of a set of 2d dct bases as our smooth modelthe 2-d dct function is defined as : where and denote the frequency of the basis and and are normalization factors .it is good to note that algorithms based on supervised dictionary learning and subspace learning are also useful for deriving the smooth representation of background component - .the second question is a chicken - and - egg problem : to find the model parameters we need to know which pixels belong to the background ; and to know which pixels belong to background we need to know what are the model parameters .one solution to find the optimal model parameters , s , is to define some cost function , which measures the goodness of fit between the original pixel intensities and the ones predicted by the smooth model , and then minimize the cost function .one plausible cost function can be the -norm of the fitting error ( can be 0 , 1 , or 2 ) , so that the solution can be written as : let , and denote the 1d version of , the vector of all parameters and a matrix of size in which the k - th column corresponds to the vectorized version of respectively. 
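to make the smooth model and the plain least squares fit concrete , the basis matrix ( whose k - th column is a vectorized 2d dct function taken in zig - zag order ) can be assembled as in the sketch below . this is our own schematic code rather than the authors implementation ; the defaults n = 64 and k = 10 follow the values quoted in the experimental section , and the within - diagonal ordering of the zig - zag scan is immaterial here since only the lowest - frequency diagonals are kept .

```python
import numpy as np

def dct_basis_matrix(n=64, k=10):
    """columns: the first k 2-d dct basis functions (low frequencies first), vectorized."""
    def basis(u, v):
        cu = np.sqrt((1.0 if u == 0 else 2.0) / n)
        cv = np.sqrt((1.0 if v == 0 else 2.0) / n)
        i = np.arange(n)
        return np.outer(cu * np.cos((2 * i + 1) * u * np.pi / (2 * n)),
                        cv * np.cos((2 * i + 1) * v * np.pi / (2 * n)))
    # approximate zig-zag: order frequency pairs diagonal by diagonal (u + v increasing)
    freqs = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda uv: (uv[0] + uv[1], uv[0]))
    return np.column_stack([basis(u, v).ravel() for u, v in freqs[:k]])

def least_squares_background(block, k=10):
    """l2 fit of the smooth model to all pixels of the block; returns the fit and alpha."""
    p = dct_basis_matrix(block.shape[0], k)
    alpha, *_ = np.linalg.lstsq(p, block.astype(float).ravel(), rcond=None)
    return (p @ alpha).reshape(block.shape), alpha
```

a block is declared pure background when every pixel of this fit has a residual below the preset threshold , which is the test used in step 2 of the overall algorithm below .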
then the above problem can be formulated as .+ now if we use the -norm ( i.e. ) for the cost function we simply get the least squares fitting problem and , which has a closed - form solution as below : but the least square fitting suffers from the fact that the model parameters , , can be adversely affected by foreground pixels . herewe propose an alternative method based on robust regression , which tries to minimize the the number of outliers and fitting the model only to inliers .the notion of robustness is hugely used in computer vision , for fundamental matrix and object recognition - .ransac algorithm is used in this work , which is more robust to outliers and the resulting model is less affected by them .this algorithm is explained below .ransac is a popular robust regression algorithm which is designed to find the right model for a set of data even in the presence of outliers .ransac is an iterative approach that performs the parameter estimation by minimizing the number of outliers ( which can be thought as minimizing the -norm ) .we can think of foreground pixels as outliers for the smooth model in our segmentation algorithm .ransac repeats two iterative procedures to find a model for a set of data . in the first step ,it takes a subset of the data and derives the parameters of the model only using that subset . in the second step ,it tests the model derived from the first step against the entire dataset to see how many samples can be modeled consistently . a sample will be considered as an outlier if it has a fitting error larger than a threshold that defines the maximum allowed deviation .ransac repeats the procedure a fixed number of times and at the end , it chooses the model with the largest consensus set ( the set of inliers ) as the optimum model .the proposed ransac algorithm for foreground / background segmentation of a block of size is as follows : 1 . select a subset of randomly chosen pixels .let us denote this subset by .2 . fit the model to the pixels and find the s .this is done by solving the set of linear equations . here denotes the luminance value at pixel .3 . test all pixels in the block against the fitted model . those pixels that can be predicted with an error less than will be considered as the inliers .4 . save the consensus set of the current iteration if it has a larger size than the previous one .5 . 
repeat this procedure up to times , or when the largest concensus set found occupies over a certain percentage of the entire dataset , denoted by .after this procedure is finished , the pixels in the largest consensus set will be considered as inliers or equivalently background .we propose a segmentation algorithm that mainly depends on ransac but it first checks if a block can be segmented using some simpler approaches and it goes to ransac only if the block can not be segmented using those approaches .these simple cases belong to one of these groups : nearly constant blocks , smoothly varying background and text / graphic overlaid on constant background .nearly constant blocks are those in which all pixels have similar intensities .if the standard deviation of a block is less than some threshold we declare that block as nearly constant .smoothly varying background is a block in which the intensity variation over all pixels can be modeled well by a smooth function .therefore we try to fit dct basis to all pixels using least square fitting .if all pixels of that block can be represented with an error less than a predefined threshold , , we declare it as smooth background .the image blocks belonging to the text / graphic overlaid on constant background usually have zero variance ( or very small variances ) inside each connected component .these images usually have a limited number of different colors in each block ( usually less than 10 ) and the intensities in different parts are very different .we calculate the percentage of each different color in that block and the one with the highest percentage will be chosen as background and the other ones as foreground .when a block does not satisfy any of the above conditions , ransac will be applied to separate the background and the foreground . the overall segmentation algorithm for each blocks of size summarized as follows ( note that we only apply the algorithm to the gray scale component of a color image ) : 1 .if the standard deviation of pixels intensities is less than , then declare the entire block as background . if not , go to the next stepperform least square fitting using all pixels .if all pixels can be predicted with an error less than , declare the entire block as background . if not , go to the next step ; 3 .if the number of different colors is less than and the intensity range is above , declare the block as text / graphics over a constant background and use the color that has the highest percentage of pixels as the background color . if not , go to the next stepuse ransac to segment background and foreground .those pixels with fitting error less than will be considered as background .to perform experimental studies we have generated an annotated dataset consisting of 332 image blocks of size , extracted from hevc test sequences for screen content coding .we have also manually extracted the ground truth foregrounds for these images .this dataset is publicly available at . in our experiment ,the block size is chosen to be =64 .the number of dct basis functions , , is set to be 10 based on prior experiments on a separate validation dataset .the inlier maximum allowed distortion is chosen as .the maximum number of iteration in ransac algorithm is chosen to be .the thresholds used for preprocessing ( steps 1 - 3 ) should be chosen conservatively to avoid segmentation errors . in our simulations , we have chosen them as , , and , which achieved a good trade off between computation speed and segmentation accuracy . 
to illustrate the smoothness of the background layer and its suitability for being coded with transform - based coding , the filled background layer of a sample image is presented in figure 1 .the background holes ( those pixels that belong to foreground layers ) are filled by the predicted value using the smooth model , which is obtained using the least squares fitting to the detected background pixels . as we can seethe background layer is very smooth and does not have any sharp edges .we have compared the proposed approach with three previous algorithms ; least absolute deviation fitting , hierarchical k - means clustering and spec .we have also provided a comparison with least square fitting algorithm result , so that the reader can see the benefit of minimizing the norm over the norm for model fitting . to provide a numerical comparison between the proposed scheme and previous approaches ,we have calculated the average precision , recall , and f1 score ( also known as f - measure ) achieved by different segmentation algorithms over this dataset .these results are presented in table 1 . the precision and recallare defined as in eq .( 2 ) , where and denote true positive , false positive and false negative respectively . in our evaluation, we treat a foreground pixel as positive .a pixel that is correctly identified as foreground ( compared to the manual segmentation ) is considered true positive . the same holds for false negative and false positive . balanced f1 score is defined as the harmonic mean of precision and recall . as it can be seen , the proposed scheme achieves higher precision and recall and f1 score than other algorithms .recall & f1 score + spec & 50% & 64% & 56% + hierarchical clustering & 64% & 69% & 66% + least square fitting & 79% & 60% & 68% + least absolute deviation & 91.4% & 87% & 89.1% + sparse - smooth decomposition & 64% & 95% & 76.4% + ransac based segmentation & 91.5% & 90% & 90.7% + [ tblcomp ] the results for 5 test images ( each consisting of multiple 64x64 blocks ) are shown in fig .0.18 0.18 0.18 0.18 0.18 + 0.18 0.18 0.18 0.18 0.18 + 0.18 0.18 0.18 0.18 0.18 + 0.18 0.18 0.18 0.18 0.18 + 0.18 0.18 0.18 0.18 0.18 + 0.18 0.18 0.18 0.18 0.18 it can be seen that in all cases the proposed algorithm gives superior performance over djvu and spec .note that our dataset mainly consists of challenging images where the background and foreground have overlapping color ranges .for simpler cases where the background has a narrow color range that is quite different from the foreground , djvu and least absolute deviation fitting will also work well . on the other hand ,spec usually has problem for the cases where the foreground text / lines have varying colors and are overlaid on a smoothly varying background .this paper proposed an image decomposition scheme that segments an image into background and foreground layers .the background is defined as the smooth component of the image that can be well modeled by a set of dct functions and foreground as those pixels that can not be modeled with this smooth representation .we propose to use a robust regression algorithm to fit a set of smooth functions to the image and detect the outliers .the outliers are considered as the foreground pixels .ransac algorithm is used to solve this problem . 
instead of applying these robust regression algorithms to every block , which are computationally demanding, we first check whether the block satisfy several conditions and can be segmented using simple methods .the authors would like to thank jct - vc group for providing the hevc test sequences for screen content coding .we would also like to thank huawei technologies co. , for supporting this work .1 w. zhu , w. ding , j. xu , y. shi and b. yin , `` screen content coding based on hevc framework '' , ieee transactions on multimedia , 16 , no . 5 : 1316 - 1326 , 2014 .m. zhan , x. feng and m. xu , `` advanced screen content coding using color table and index map '' , ieee transactions on image processing , 23.10 : 4399 - 4412 , 2014 .a. skodras , c. christopoulos and t. ebrahimi , `` the jpeg 2000 still image compression standard '' , ieee signal processing magazine , 36 - 58 , 2001 .sullivan , j. ohm , w.j .han and t. wiegand , `` overview of the high efficiency video coding ( hevc ) standard '' , ieee transactions on circuits and systems for video technology , 22 : 1649 - 1668 , 2012 .j. huang , ek .wong and y. wang , `` check image compression using a layered coding method '' , journal of electronic imaging 7.3 : 426 - 442 , 1998 .p. haffner , p.g .howard , p. simard , y. bengio and y. lecun , `` high quality document image compression with djvu '' , journal of electronic imaging , 7(3 ) , 410 - 425 , 1998 .dequeiroz , r.r . buckley and m. xu , `` mixed raster content ( mrc ) model for compound image compression '' , electronic imaging99 .international society for optics and photonics , 1998 .s. minaee , m. fotouhi and b.h .khalaj , `` a geometric approach for fully automatic chromosome segmentation '' , ieee symposium on spmb , 2014 .mp hosseini , mr nazem - zadeh , d pompili , h soltanian - zadeh , `` statistical validation of automatic methods for hippocampus segmentation in mr images of epileptic patients '' , international conference of the ieee engineering in medicine and biology society , 2014 .mp hosseini , mr nazem - zadeh , d pompili , k jafari - khouzani , k elisevich , h soltanian - zadeh , `` comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients '' , medical physics , 43(1 ) , 538 - 553 , 2016 .j. zhang and r. kasturi , `` extraction of text objects in video documents : recent progress '' , document analysis systems .s minaee and y wang , `` fingerprint recognition using translation invariant scattering network '' , ieee signal processing in medicine and biology symposium , 2015 .m hosseini , j peters , s shirmohammadi , `` energy - budget - compliant adaptive 3d texture streaming in mobile games '' , in proceedings of the 4th acm multimedia systems conference , 2013 .m hosseini , dt ahmed , s shirmohammadi , `` adaptive 3d texture streaming in m3g - based mobile games '' , proceedings of the 3rd multimedia systems conference , 2012 .m hosseini , j peters , s shirmohammadi , `` energy - efficient 3d texture streaming for mobile games '' , in proceedings of workshop on mobile video delivery .acm , 2014 .t. lin and p. hao , `` compound image compression for real - time computer screen image transmission '' , ieee transactions on image processing , 993 - 1005 , 2005 .s. minaee and y. wang , `` screen content image segmentation using least absolute deviation fitting '' , ieee international conference on image processing ( icip ) , pp.3295 - 3299 , sept .s. minaee , a. 
abdolrashidi and y. wang , `` screen content image segmentation using sparse - smooth decomposition '' , asilomar conference on signals , systems , and computers , ieee , 2015 .s minaee , y wang , `` screen content image segmentation using sparse decomposition and total variation minimization '' , international conference on image processing , ieee , 2016 .s minaee and y wang , `` screen content image segmentation using robust regression and sparse decomposition '' , ieee journal on emerging and selected topics in circuits and systems , 2016 .p.j rousseeuw , am .leroy , `` robust regression and outlier detection '' , vol . 589 .john wiley and sons , 2005 .phs torr , dw murray , `` the development and comparison of robust methods for estimating the fundamental matrix '' , international journal of computer vision , pp .271 - 300 , 1997 .a taalimi , a rahimpour , c capdevila , z zhang , h. qi , `` robust coupling in space of sparse codes for multi - view recognition '' , international conference on image processing , pp .3897 - 3901 , ieee , 2016 .r. raguram , jm . frahm and m.pollefeys , `` a comparative analysis of ransac techniques leading to adaptive real - time random sample consensus '' , computer vision - eccv , springer , 500 - 513 , 2008 .a. levey and m. lindenbaum `` sequential karhunen - loeve basis extraction and its application to images '' , image processing , ieee transactions on 9.8 : 1371 - 1374 , 2000 .m rahmani , and g atia , `` a subspace learning approach for high dimensional matrix decomposition with efficient column / row sampling '' , in proceedings of the 33rd international conference on machine learning , pp .1206 - 1214 . 2016 .m rahmani , and g atia , `` randomized subspace learning approach for high dimensional low rank plus sparse matrix decomposition '' , 49th asilomar conference on signals , systems and computers , ieee , 2015 .b babagholami - mohamadabadi , a jourabloo , m zolfaghari , and mt manzuri shalmani , `` bayesian supervised dictionary learning '' , in uai application workshops , pp .11 - 19 . 2013 .b babagholami - mohamadabadi , a jourabloo , a zarghami , and mb baghshah , `` supervised dictionary learning using distance dependent indian buffet process '' , international workshop on machine learning for signal processing , ieee , 2013 .bloomfield and w. steiger , `` least absolute deviations : theory , applications and algorithms '' , springer science and business media , vol . 6 , 2012 . s. boyd , n. parikh , e. chu , b. peleato and j. eckstein , `` distributed optimization and statistical learning via the alternating direction method of multipliers '' , foundations and trends in machine learning , 3(1 ) , 1 - 122 , 2011 .i. daubechies , r. devore , m. fornasier and c.s .gunturk , `` iteratively reweighted least squares minimization for sparse recovery '' , communications on pure and applied mathematics , 63(1 ) , 1 - 38 , 2010 .candes , t. tao , decoding by linear programming " , ieee transactions on information theory , 51.12 : 4203 - 4215 , 2005 .iso / iec jtc 1/sc 29/wg 11 requirements subgroup , `` requirements for an extension of hevc for coding of screen content '' , in mpeg 109 meeting , 2014 .https://sites.google.com/site/shervinminaee/research/image-segmentation https://web.stanford.edu/ boyd / papers / admm/
this paper considers how to separate text and/or graphics from smooth background in screen content and mixed content images and proposes an algorithm to perform this segmentation task . the proposed methods make use of the fact that the background in each block is usually smoothly varying and can be modeled well by a linear combination of a few smoothly varying basis functions , while the foreground text and graphics create sharp discontinuity . this algorithm separates the background and foreground pixels by trying to fit pixel values in the block into a smooth function using a robust regression method . the inlier pixels that can be well represented with the smooth model will be considered as background , while remaining outlier pixels will be considered foreground . we have also created a dataset of screen content images extracted from hevc standard test sequences for screen content coding with their ground truth segmentation result which can be used for this task . the proposed algorithm has been tested on the dataset mentioned above and is shown to have superior performance over other methods , such as the hierarchical k - means clustering algorithm , shape primitive extraction and coding , and the least absolute deviation fitting scheme for foreground segmentation . image decomposition , robust regression , ransac algorithm , screen content images .
multi - object multi - sensor management / control is a challenging optimal nonlinear control problem focused on directing multiple sensors to obtain _ most informative _measurements for the purpose of multi - object filtering .this problem is different from classical control problems as the overall controlled system is a highly complex stochastic multi - object system , where not only the number of objects vary randomly in time , but also the measurements returned by each sensor are subject to missed detections and false alarms .indeed , the multi - object state and multi - object observations are inherently finite - set - valued , and standard optimal control techniques are not directly applicable . in stochastic multi - object systems , we can still cast the multi - object multi - sensor control problem as a partially observed markov decision process ( pomdp ) , where the states and observations are instead finite - set - valued , and control vectors are drawn from a set of admissible sensor actions based on the current information states , which are then assessed against the values of an objective function associated with each multi - sensor action . in this framework , a solution would include three major steps : ( 1 ) modeling the overall system as a stochastic multi - object system , ( 2 ) devising a tractable ( accurate or approximate ) way to propagate the multi - object posterior , and ( 3 ) solving an optimization problem to find the multi - sensor control command , according to an objective function .this paper presents a formulation of the multi - sensor control problem as a pomdp with finite - set - valued states and measurements , a labeled random set filter used to propagate the multi - object posterior , and a task - driven objective ( cost ) function . to our knowledge ,the problem of multi - sensor control for labeled random set filters is only recently considered by meng _ et al . _ . in this method , local vo - vo filtersglmb )filter . in this work ,we follow the simpler name suggested by r. mahler in his book . ]are operating at each sensor node , and the resulting vo - vo densities ( posteriors ) are fused using the generalized covariance intersection ( gci ) rule as formulated in .the approach opted by meng _ et al . 
_ to solve the multi - sensor control problem is an _ exhaustive search _ scheme , in which the objective function is computed for all possible combinations of sensor control actions .this approach works well for a few sensors only , but in presence of numerous sensors , may become computationally intractable .the major contribution of this paper is the introduction of a guided search to solve the multi - dimensional discrete optimization problem embedded in multi - sensor control .we avoid the curse of dimensionality by using an accelerated scheme inspired by the coordinate descent method .this leads to significant improvement in the runtime of the algorithm and its real - time feasibility , especially in presence of numerous sensors .another contribution is the detailed sequential monte - carlo ( smc ) implementation of the proposed multi - sensor control framework with labeled multi - bernoulli ( lmb ) filters running in each sensor node .the novel idea inherent in the proposed smc implementation is that sensor control and the actual filters are all implemented using the same particles , hence substantial savings are achieved in terms of memory and computational requirements .we also experimentally analyse the computational complexity of the proposed method and demonstrate that it varies almost quadratically with the number of controlled sensors ( polynomial complexity ) .this is while an exhaustive search similar to the one used in has exponential ( hence , non - polynomial ) complexity .extensive simulation studies involving numerous controllable sensors demonstrate that our method returns acceptable tracking results quantified in terms of ospa error values .indeed , in comparison to the state of art ( running exhaustive search in an approach similar to ) , the proposed multi - sensor control method returns similar tracking errors but converges significantly faster .the organization of the paper is as follows .section [ sec : prob_stat ] presents a formalized statement of the multi - sensor control problem in pomdp framework and sets out the background and design requirements for various components of the framework .the proposed multi - sensor control solution is then presented in section [ sec : approach ] , outlining the general framework and proposed choices for its components , as well as a step - by - step algorithm for the smc implementation .simulation results are presented in section [ sec : sim_res ] .section [ sec : conc ] concludes the paper .consider a stochastic multi - object system , in which at any discrete ( or sampling ) time , the multi - object state is a labeled random finite set ( rfs ) comprised of a random number of single - object states , where and denote the state and label spaces , respectively , and means `` all finite subsets of . ''the system is modeled as a one - step - ahead markovian process which is characterized by a transition density .a practical approximation for the process can be formulated based on assuming that while transiting from time to time , each existing object independently continues to exist with a survival probability and single - object transition density , and a number of new objects are born according to a given rfs density . at each time , the multi - object state is partially observed by a network of sensors , each returning a set of measurements ( called detections or point measurements ) .let be the measurement set returned by the -th sensor , . 
denoting the space of point measurements by ,the space of measurement sets will be .each sensor can be controlled ( e.g. translated , rotated ) according to a sensor command where is a finite space of sensor commands .the cumulated measurement is an -tuple of measurement sets , the relationship between the multi - sensor measurement and the multi - object state is stochastically modeled by the multi - object likelihood function , where is the multi - sensor command .the likelihood function is usually modeled in terms of a single - object likelihood , a state - dependent detection probability and assuming a poisson process for the number of false alarms which together are modeled as a poisson rfs characterized by an intensity function .the multi - sensor control problem can be formally cast in the framework of the following 6-tuple discrete - time pomdp : where is an objective function that associates a reward or cost with a choice of multi - sensor control command given the recent multi - object state or its statistical characteristics . in a one - step - ahead multi - sensor control solution ,the aim is to find the multi - sensor command , that satisfies given the pomdp with the components given in , the probability density of multi - object state of the system can be recursively estimated by a multi - target bayes filter .let us denote the density of multi - object state at time by , where denotes the ensemble of all multi - sensor measurements accumulated up to time . in a bayesian filtering scheme, the density is recursively propagated through two steps : prediction and update .the predicted density is computed by the multi - object chapman - kolmogorov equation : with the arrival of new observations from the sensors controlled by a multi - sensor action , a posterior density is obtained using multi - object bayes rule : given the posterior recursion and , the objective function component of the pomdp in , is usually defined as a function of the probability density of the multi - object state , and the optimization component of the multi - sensor control framework is expressed as the integrals in and are set integrals as defined in .the recursion and has no analytic solution in general .an smc implementation of the bayes multi - object filter ( with rfs states without labels ) is given in .however , this technique is computationally prohibitive which at best is able to accommodate a small number of targets .this smc implementation of the multi - object bayes filter was employed by the multi - target sensor control algorithm proposed in . due to general intractability of propagation of the full posterior density given by and ,several alternatives have been proposed which are designed to propagate important statistics or parameters instead of the full posterior .well - known examples of such filters are probability hypothesis density ( phd ) filter and its cardinalized version ( cphd ) , and the multi - bernoulli filter and its cardinality - balanced version ( cb - member ) . 
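schematically, the one-step-ahead decision above amounts to a loop over the admissible multi-sensor commands. the sketch below is only a skeleton: the prediction, pseudo-update and cost operators are placeholders for the problem-specific choices (filter recursion and objective function) discussed in the remainder of the paper.

    def one_step_ahead_control(prior, commands, predict, pseudo_update, cost):
        # generic one-step-ahead sensor control: pick the multi-sensor command that
        # minimises a cost computed from the pseudo-updated predicted density.
        # predict, pseudo_update and cost are placeholders for the problem-specific
        # operators (chapman-kolmogorov prediction, hypothesised bayes update and
        # objective function) described in the text.
        predicted = predict(prior)
        best_cmd, best_cost = None, float("inf")
        for cmd in commands:                      # admissible multi-sensor actions
            pseudo_post = pseudo_update(predicted, cmd)
            c = cost(pseudo_post)
            if c < best_cost:
                best_cmd, best_cost = cmd, c
        return best_cmd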
in a series of works , various implementations of these filters such as smc and track - before - detect ( tbd ) were introduced , as well as a robust version of multi - bernoulli filter .these methods can not generate target tracks ( using labels ) in a rigorously mathematical way , and are usually applied in conjunction with a label management strategy .since 2010 , a series of random set filters have been developed , in which the multi - object random state includes label .the _ labeled random finite sets _ were shown to admit conjugacy of a particular form of prior density ( the vo - vo density ) with the general multiple point measurement set likelihood . following this result ,the vo - vo filter was introduced .variants of the vo - vo filter such as the labeled multi - bernoulli ( lmb ) filter and m--glmb filter were also proposed and applied in various applications .the proposed multi - sensor control framework can be implemented with different multi - object filters . for the sake of completion and presenting a step - by - step pseudocode , we have chosen to implement our method with the lmb filter .the choice of objective function is a critical part of the control solution design task .the objective functions commonly used in sensor control solutions in the stochastic signal processing and control literature , can be generally divided into two types : information - driven and task - driven .the information - driven reward function quantifies the expected information gain from prior to posterior after a hypothesized sensor control action .for example , rnyi divergence was usedby ristic _et al . _ for sensor control with random set filters in general and phd filters in particular .recently , in a number of works , the cauchy - schwarz divergence has been adopted as the reward function .the task - driven cost functions are usually formulated in terms of the expected error of estimation .examples of such cost functions include the map estimate of cardinality variance , statistical mean of cardinality variance , posterior expected error of cardinality and states ( peecs ) and statistical mean of the ospa error .a general discussion and comparison between task - driven and information - driven objective functions for sensor management is presented in . in the multi - sensor control framework proposed in this paper, we use peecs as the objective ( cost ) function .the rationale behind this choice is that while computing peecs can be faster than the common divergence functions , comparable or better tracking accuracies can be achieved via minimizing peecs as the sensor control cost function . in presence of multiple sensors( or sensor nodes in a sensor network ) , usually a multi - object bayes filter runs at each node and the local posteriors need to be fused .the generalized covariance intersection ( gci ) rule has been widely used for consensus - based fusion of multiple multi - object densities of various forms .examples include the fusion of poisson multi - object posteriors of multiple local phd filters , i.d.d .clusters densities of several local cphd filters , multi - bernoulli densities of local multi - bernoulli filters , and lmb or vo - vo densities of several local lmb or vo - vo filters . the problem of multi - sensor control for labeled random set filters is recently considered by meng _ et al . . in this method, local vo - vo filters are operating at each sensor node , and the resulting vo - vo densities ( posteriors ) are fused using the gci - rule ( as formulated in ) . 
the common underlying assumption for solving the multi - sensor control problem is that in an _ exhaustive search _ scheme , the objective function is computed for all possible combinations of sensor control actions .this approach works well for a relatively small number of sensors .for instance , the case study presented in the work of meng _ et al . _ involves only two sensors . in presence of numerous sensors ,their combined control becomes computationally intractable if implemented via an exhaustive search .indeed , the computational cost of overall multi - sensor control procedure will grow exponentially with the number of sensors .our framework includes a _ guided search _ method that solves the optimization problem without the need for an exhaustive search and can be utilized to simultaneously control numerous sensors .given the system model presented in section [ sec : prob_stat ] , an effective design for a multi - sensor control framework is presented in this section .we first outline an overview of the general components and steps involved in our proposed approach . having the big picture in mind, we then present the details of various components as implemented in our experiments .let us assume that at each time , the fused prior from the previous step , , is processed through the prediction step of the bayes filter . a multi - object set estimate , then extracted from the predicted density and used to compute predicted ideal measurement sets ( pims ) for each sensor node and each possible control command applied to that node , denoted by for sensor . in the next step , at each sensor node , a _ pseudo update _ is performed using each pims associated with a control command .the resulting pseudo posteriors are then processed by an _module to output an optimal set of control commands .the control actions are then applied to the sensors ( for instance , they are displaced or rotated according to the chosen action command ) following which , the measurement sets are acquired from the sensors . using those measurement sets ,the predicted multi - object density is locally updated in each sensor node , then the local posteriors are fused using a fusion rule such as the gci - rule .the fused posterior is post - processed ( _ e.g. _ low weight components are pruned or particles are resampled ) .the resulting posterior is then used as prior in the next time step .the notion of labeled multi - bernoulli ( lmb ) rfs was introduced for the first time in , with the lmb filter recursion further developed in .the lmb distribution is completely described by its components where is the _ probability of existence _ of an object with label , and is the probability density of the object s state conditional on its existence . the lmb rfs density is given by ^{\mathbf{x}},\ ] ] where is the set of all labels extracted from labeled states in , and in which means the cardinality of " , and ^{\bm{x } } \triangleq \prod_{(x,\ell)\in\bm{x } } p^{(\ell)}(x),\ ] ] and is the probability of joint existence of all objects with labels and non - existence of all other labels . in a bayes multi - object filter ,suppose that the prior is an lmb with parameters .in an smc implementation , the density function of each component with label is approximated by particles and weights , where is the dirac delta function . 
in the prediction step of an lmb filter , the lmb prioris turned into the following new lmb density with evolved particles and probabilities of existence including the lmb birth components : where and let us denote the predicted lmb parameters by where . note that in above equations , denotes inner product of two functions . as part of the multi - sensor control framework ,a multi - object state estimate needs to be computed from the predicted density .a maximum a posteriori ( map ) estimate for the number of objects can be found from cardinality distribution , where .given the number of objects , we find the labels with highest probabilities of existence .for each label , an expected a posteriori ( eap ) state estimate is given by and the set of all estimates is denoted by .the subscript `` pseudo '' is used because the estimates are resulted from the predicted , and not the updated , density .assume that at a sensor node , the control command is applied , and a measurement set denoted by is acquired .let us denote the updated lmb by according to lmb update equations derived in , the parameters of the above density are given by : where ^{i_+ } \\p^{(\theta)}(x,\ell ) & = & \frac{p_+^{(\ell)}(x ) \psi_z(x,\ell;\theta)}{\eta_z^{(\theta)}(\ell ) } \\ \eta_z^{(\theta)}(\ell ) & = & \langle p_+^{(\ell)}(x),\psi_z(x,\ell;\theta ) \rangle \\ \psi_z(x,\ell;\theta ) & = & \left\ { \begin{array}{lcr } \frac{p_d(x,\ell ) g(z_{\theta(\ell)}|x,\ell)}{\kappa(z_{\theta(\ell ) } ) } , & \mathrm{if } & \theta(\ell)>0 \\ 1-p_d(x,\ell ) , & \mathrm{if } & \theta(\ell ) = 0 \end{array } \right.\end{aligned}\ ] ] and is the space of mappings such that implies , and the weight term , , is given by : during the update step of lmb filter , the particles do not change , and only their weights evolve .hence , in other words , all the updated lmb posteriors will have the same particles but with different weights and existence probabilities .this makes the fusion of the posteriors generated at each sensor straightforward . for sensor fusion purposes, we use the gci - rule as derived in for fusion of multiple lmb densities . for each multi - sensor command candidate ,the corresponding posteriors are lmb s with parameters where each density is approximated by the same particles but different weights , the gci - rule returns the following fused existence probabilities and densities : r_^ ( ) = & [ eq : fused_r_1 ] + p_^()(x ) = & [ eq : fused_p_1 ] where is a constant weight indicating the strength of our emphasis on sensor in the fusion process .these weights should be normalized , i.e. . in our simulation studies , we assumed that all sensor nodes have equal priority , and used the values . substituting each density with its particle approximationturns the integrals to weighted sums over the particles .it is here that sharing the same particles between all the densities becomes instrumental for computation of fused parameters .the fused existence probability is given by : the fused densities also take the form of weighted sum of dirac deltas : where is the fused weight of each particle in the fused pseudo - posterior . to maintain tractability , lmb components with extremely small existence probabilities should be pruned .this is performed after gci - fusion of the posteriors . 
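because all local posteriors share the same particles, the gci fusion of a single lmb component reduces to per-particle products of weights. the sketch below assumes this standard particle-level approximation of eqs. [ eq : fused_r_1 ] and [ eq : fused_p_1 ], replacing the integrals by sums over the shared particles; it is an illustration rather than the paper's code, and the small floor inside the logarithm is only there to avoid log(0).

    import numpy as np

    def gci_fuse_lmb_component(r_list, w_list, omegas):
        # gci fusion of one lmb component (a single label) across sensors, assuming
        # every local density is supported on the same particles, so each local
        # posterior is described by its existence probability r_s and particle
        # weights w_s (summing to one); the integrals become sums over the particles
        w_list = [np.asarray(w, dtype=float) for w in w_list]
        log_w = sum(om * np.log(np.maximum(w, 1e-300)) for om, w in zip(omegas, w_list))
        w_fused_un = np.exp(log_w)                 # prod_s (w_s^i)^omega_s, per particle
        kappa = w_fused_un.sum()                   # approximates int prod_s p_s^omega_s dx
        w_fused = w_fused_un / kappa
        num = np.prod([r ** om for r, om in zip(r_list, omegas)]) * kappa
        den = np.prod([(1.0 - r) ** om for r, om in zip(r_list, omegas)]) + num
        return num / den, w_fused

    # example: two sensors with equal fusion weights omega_1 = omega_2 = 0.5
    r_fused, w_fused = gci_fuse_lmb_component(
        r_list=[0.8, 0.7],
        w_list=[[0.5, 0.3, 0.2], [0.4, 0.4, 0.2]],
        omegas=[0.5, 0.5])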
for the objective function , we chose the task - driven cost function termed peecs in .it returns a linear combination of the cardinality and state estimation errors which are quantified by computing the variance of cardinality and weighted sum of single - object variances , respectively .consider the fused lmb posterior parametrized by where the peecs cost associated with the multi - sensor control choice is then given by : where is a user - defined parameter representing the level of emphasis on desired accuracy of number of targets versus accuracy of state estimates , and in which we have : the final step to solve the control problem in pomdp framework is to find the optimum point of the objective function . the common approach , which is usually tractable with few sensors , is based on an exhaustive grid search in the discrete multi - sensor control command space . in this approach , for all possible -tuples an ideal measurement set ( pims ) is synthetically generated from the prediction at each sensor node , and using the pims , a local lmb update is run to create a pseudo - posterior .for each possible multi - sensor control command , the corresponding pseudo posteriors are fused and the objective function is computed .the optimal sensor control decision is then given by : where denotes the objective function computed from the fused pseudo - posterior via updating the predicted density if control actions are applied .note that in the above equation ( and the rest of this paper ) , the objective function is assumed to be a cost .if it is a reward , its optimization would require maximization .the above search requires the `` fusion of pseudo - posteriors followed by computation of the objective function '' to be repeated for times where denotes the cardinality of single sensor control commands space .the computational cost increases exponentially with the number of sensors , and becomes intractable when a relatively large number of sensors are involved .we propose a guided search routine inspired by the coordinate descent method that significantly accelerates the optimization process and makes it suitable for real time implementation .our guided search is an iterative coordinate descent type method with random initializations .coordinate descent algorithms are well - known for their simplicity , computational efficiency and scalability .an overview of coordinate descent algorithms for various optimization problems with different constraints is presented in .these algorithms are derivative - free and perform a line search along one coordinate direction at the current point in each iteration and use different coordinate directions cyclically to find a local optimum point .coordinate descent provides a sub - optimal solution with non - differentiable objective functions . considers convergence of coordinate descent methods to a stationary , but not necessarily minimum , point for objective functions that include a non - differentiable and separate contribution ( also called non - smooth part " ) . in a later work, spall analyzes the convergence of more general seasaw processes for optimization and identification , showing that under reasonable conditions , the cyclic scheme converges to the optimal joint value for the full vector of unknown parameters ( sensor commands , in the context of our work ) . 
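the peecs cost that this guided search minimises can be sketched as follows. the exact weighting of the two error terms is given by the paper's equations, which are not reproduced above; the sketch assumes the common peecs form of a convex combination of the cardinality variance (the sum of r(1-r) over the components) and existence-weighted per-component state variances computed from the particles.

    import numpy as np

    def peecs_cost(r, particles, weights, eta=0.5):
        # peecs-style cost for a fused lmb posterior (assumed form, see lead-in)
        # r: dict label -> existence probability
        # particles: dict label -> (N, d) array; weights: dict label -> (N,) array
        # eta: emphasis on cardinality accuracy versus state accuracy
        card_var = sum(p * (1.0 - p) for p in r.values())
        state_var = 0.0
        for ell, p in r.items():
            x = np.asarray(particles[ell], dtype=float)
            w = np.asarray(weights[ell], dtype=float)
            mean = w @ x                                    # eap estimate of the component
            state_var += p * float(w @ np.sum((x - mean) ** 2, axis=1))
        return eta * card_var + (1.0 - eta) * state_var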
to find the best -tuple of control commands in the -dimensional command space , our guided search starts with random initialization of control commands , denoted by .we then solve the optimization problem via exhaustive search in the space of coordinates associated with sensor 1 .we replace the candidate multi - sensor control action with .repeating the one - dimensional search for all the other coordinates associated with different sensors , our candidate turns into .we repeat this cycle over to obtain the next candidates until convergence , i.e. until we find for which .when used in such an iterative ( cyclic ) routine , the search is proven to converge in finite time .the converged -tuple can be a local optimum .hence , we need to repeat the process with multiple random initializations and choose the best candidate as the multi - sensor control command , .the required number of repeated convergence with random initializations depends on the number of local optima and the desired chance of success .if there are local optima , in the worst case scenario they all have basins of attraction with the same hyper - volume , i.e. each basin of attraction is comprised of of all the points in .thus , the chance of a randomly initialized search converging to the global optimum will be .. ] with random initializations , the total chance of success is .hence , the required number of random initializations is given by where means rounding up to the next integer . in our experiments , choosing the number of local optima at led to sufficient random initializations for satisfactory results .based on equation , with a probability of success of 95% , for , 10 and 20 sensors , we would require 29 , 59 and 119 random initializations which need far less computation than exhaustive search in the multi - dimensional space .interestingly , the required number of initializations in equation does not depend on , i.e. it does not increase with the resolution of the sensor command space .algorithm [ alg:1 ] shows a complete step - by - step pseudocode for multi - sensor control within the lmb filter , that outputs a fused posterior . starting with an lmb prior ( which is the fused lmb posterior from previous time ) , the function implements the lmb prediction step .multiple object states are then estimated from the predicted lmb density by calling the function which implements equations and .the coordinate descent guided search for multi - sensor control is implemented through the line numbers 324 in algorithm [ alg:1 ] . before the search begins , for every sensor , and every possible action command , a pims is computed .using that set of ideal measurements , the lmb density is then updated by calling function , and its parameters ( existence probabilities , particles and their weights ) are recorded .the function computes the peecs cost value for each set of local posteriors associated with multiple sensor control commands .both within the cost computation steps , and at the conclusion of the algorithm [ alg:1 ] , we need to apply the gci - rule to fuse the locally ( pseudo-)updated lmb posteriors .the function performs this task .we conducted an extensive set of experiments involving various scenarios with different numbers of targets , sensors , target motion models and sensor detection profile models . 
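the guided search itself is then straightforward to state in code. the sketch below performs coordinate descent over the discrete command space with random restarts; cost() stands for the "fusion of pseudo-posteriors followed by computation of the objective" step, the number of sweeps per restart is not fixed in advance (each restart cycles until no coordinate improves), and the helper n_restarts implements the restart-count formula discussed above. for a 95% success probability, n_restarts(10), n_restarts(20) and n_restarts(40) evaluate to 29, 59 and 119, the initialisation counts quoted in the text.

    import numpy as np

    def n_restarts(n_o, p_s=0.95):
        # number of random initialisations needed for success probability p_s when
        # the command space is assumed to contain n_o local optima with equal basins
        return int(np.ceil(np.log(1.0 - p_s) / np.log(1.0 - 1.0 / n_o)))

    def guided_search(cost, n_sensors, n_commands, n_init, rng=np.random.default_rng(0)):
        # coordinate-descent search over {0,...,n_commands-1}^n_sensors with random
        # restarts; cost(cmd) evaluates the fused pseudo-posterior objective for cmd
        best_cmd, best_cost = None, float("inf")
        for _ in range(n_init):
            cmd = list(rng.integers(0, n_commands, size=n_sensors))
            current = cost(tuple(cmd))
            improved = True
            while improved:                       # cycle over coordinates until no change
                improved = False
                for s in range(n_sensors):
                    for u in range(n_commands):   # 1-d exhaustive search along sensor s
                        if u == cmd[s]:
                            continue
                        trial = cmd.copy()
                        trial[s] = u
                        c = cost(tuple(trial))
                        if c < current:
                            cmd[s], current, improved = u, c, True
            if current < best_cost:
                best_cmd, best_cost = tuple(cmd), current
        return best_cmd, best_cost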
in each experiment , we compared the performance of the proposed multi - sensor control solution with the exhaustive search - based method ( similar to ) , in terms of both accuracy and computational cost .this section includes representative set of our simulation results .those show the advantages of the proposed method , particularly , in terms of computation time .all scenarios share the following parameters .the targets maneuver in an area of .the single target state is comprised of its label and unlabeled state .the label is formed as where is the birth time of the target and is an index to distinguish targets born at the same time .the unlabeled state is four - dimensional and includes the cartesian coordinates of the target and its speed in those directions , denoted by ^\top ] , each target ( if detected ) leads to a bearing and range measurement vector with the measurement noise density given by ^\top , r)$ ] in which diag with rad and m being the scales of range and bearing noise .thus , the single target likelihood function is , where ^\top.\ ] ] each measurement set acquired from each sensor also includes poisson distributed clutter with the fixed clutter rate of in all scenarios , the density of each labeled bernoulli component in the filter is approximated by particles .all simulation experiments were coded using matlab r2015b and ran on an intel core i7 - 4770 cpu .40ghz , and 8 gb memory . in this scenario , we tried the commonly used case study in which five targets move with relatively small displacements ( are pseudo - stationary ) .to realize such movements , we applied the ncv motion model with the very small variance m borrowed from similar simulations reported in . in this scenario ,the detection profile of each sensor is range - dependent .the detection probability of target with the state by sensor is given by : where m , and m denotes the maximum range of detection , and denote the sensor - target distance given by : the detection probability decreases with increasing sensor - target distance . because of this , and considering that the targets stay almost in the same distance away from each other all the time , the sensor control is intuitively expected to drive all the sensors towards the center of the pseudo - stationary targets .the birth process is modeled by an lmb density with components .each component has the same existence probability of , and a gaussian density , where the mean and covariance of gaussians are ^\top ; & m_b^{(2 ) } & = & [ 650\ 0 \500\ 0]^\top ; \\m_b^{(3 ) } & = & [ 620\ 0\ 700\ 0]^\top ; & m_b^{(4 ) } & = & [ 750\ 0\ 800\ 0]^\top ; \\m_b^{(5 ) } & = & [ 700\ 0\ 400\ 0]^\top ; & & & \\p_b & = & \multicolumn{4}{l}{\text{diag}\big(1 , 5\times 10^{-5 } , 1 , 5\times\!10^{-5}\big ) . } \end{array}\ ] ] each sensor can be displaced by the multi - sensor control to one of the following possible displacement commands : where m , and .thus , nine control actions are possible at each time step as shown in fig .[ fig : sensor_disps ] .nine possible sensor displacements .note that where denotes zero displacement.,width=2 ] figures [ fig : sensor_movement ] ( a ) and [ fig : sensor_movement ] ( b ) show the sensor movements in cases with three and four sensors , respectively .as expected , our proposed multi - sensor control method drives all the sensors towards the center of the five targets . 
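for reference, the observation model used in these scenarios can be written compactly as below. this is a hedged sketch: the bearing-range measurement function and gaussian noise follow the description above, while the specific noise scales, the maximum detection range and the linear decay of the detection probability are placeholder assumptions standing in for the paper's own constants and functional form.

    import numpy as np

    SIGMA_B = np.pi / 180.0      # bearing noise scale in rad (assumed)
    SIGMA_R = 5.0                # range noise scale in m (assumed)
    R_MAX = 5000.0               # maximum detection range in m (assumed)

    def bearing_range(target_xy, sensor_xy):
        dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
        return np.array([np.arctan2(dy, dx), np.hypot(dx, dy)])

    def detection_probability(target_xy, sensor_xy):
        # range-dependent p_d: decays with sensor-target distance, zero beyond R_MAX
        # (a linear decay is assumed here purely for illustration)
        d = np.hypot(target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1])
        return 0.0 if d > R_MAX else max(0.0, 1.0 - d / R_MAX)

    def single_target_likelihood(z, target_xy, sensor_xy):
        # gaussian likelihood g(z | x) of a bearing-range measurement z = [bearing, range]
        mu = bearing_range(target_xy, sensor_xy)
        db = np.angle(np.exp(1j * (z[0] - mu[0])))       # wrap bearing error to (-pi, pi]
        dr = z[1] - mu[1]
        quad = (db / SIGMA_B) ** 2 + (dr / SIGMA_R) ** 2
        return np.exp(-0.5 * quad) / (2.0 * np.pi * SIGMA_B * SIGMA_R)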
[ cols="^ " , ] we also tried the experiment with 10 sensors but the exhaustive search - based method turned out to be intractable in our system ( only up to five sensors are feasible ) .our accelerated solution however , succeeded with sensors moving generally towards the center of targets as expected ( see fig .[ fig:10sensor ] ) , and the ospa errors were reasonably small similar to the results shown in figs .[ fig : sensor_ospas](a ) and [ fig : sensor_ospas](b ) . controlled movements of 10 sensors in scenario 1: the sensors generally approach the center of targets as expected .best viewed in color.,width=3 ] we also applied our method to control various numbers of sensors in the same multi - target tracking scenario .we tried up to 36 sensors , and for each case , recorded the run times as plotted in fig .[ fig : runtimes ] .the results show that with increasing the number of sensors , the run time increases almost quadratically ( the best quadratic fit is also displayed ) . indeedthe computational complexity of our method is almost , which is significantly lower than exhaustive search - based multi - sensor control , i.e. .recorded run times for scenario 1 in presence of various numbers of sensors.,width=3 ] in this section , we present the results of multi - sensor control for targets that maneuver with a relatively high speed .in such cases , the sensors are expected to follow and possibly approach the center of the moving targets .we present the results of six sensors controlled to track five targets . for the purpose of visualization of the sensor control performance, we tuned the motion model parameters of the targets in such a way that they move approximately in the same direction with the same speed . to achieve such target maneuvers, we used the ncv motion model but with the following covariance matrix : also the birth model parameters were different from scenario 1 , as listed below : ^\top ; & m_b^{(2 ) } & = & [ 1200\ 0 \ 300\ 0]^\top ; \\m_b^{(3 ) } & = & [ 1100\ 0\ 300\ 0]^\top ; & m_b^{(4 ) } & = & [ 1200\ 0\ 400\ 0]^\top ;\\ m_b^{(5 ) } & = & [ 1200\ 0\ 200\ 0]^\top ; & & & \\p_b & = & \multicolumn{4}{l}{50\,i_4 } \end{array}\ ] ] where denotes the four - dimensional identity matrix .we examined the performance of our proposed multi - sensor control method , first with six sensors in a similar fashion to scenario 1 with parameters of state - dependent detection probability to be a snapshot of the final target locations and their paths as well as the controlled sensors and their paths are shown in fig .[ fig : disp_snap ] .it clearly shows how the sensors move and converge to follow the targets . a video of the simulation is available as supplementary material .target and controlled sensor paths in scenario 2 for displacement sensor control actions.,width=3 ] the proposed multi - sensor control method is not limited to displacement sensor control actions only .the control actions can have other forms .for instance , the sensors can be spun to control angles .we ran a simulation with six sensors that could spin in the interval of 0 to + 180 . with these sensors ,the detection profile is angle - related . 
for each sensor ,the control action command is an axis angle to which the sensor would spin when the control action is applied .denoting the angle of direction by , we considered the action command space of the detection probability was assumed to vary with the relative angle of the target with respect to the sensor s axis direction , denoted by as shown in fig .[ fig : angle_control_pd ] .the variations were modeled as follows : schematics of notations used to formulate control action space and detection probability in scenario 3 with spinning control actions.,width=1 ] figure [ fig : rot_snap ] shows a snapshot of the targets and how the sensors axes have been controlled to point towards them . a video of the simulation is available as supplementary material that demonstrates the continuous spinning of the sensors in such a way that in general they all point towards the group of targets moving in the scene . a snapshot of the maneuvering target locations and the controlled angles of sensors in scenario 3.,width=3 ]a complete pomdp framework for devising multi - sensor control solutions in stochastic multi - object systems was introduced , and a suitable set of choices for various components of the proposed pomdp were outlined .details of one possible implementation were presented in which the multi - object state is modeled as an lmb rfs , and the smc implementation of the lmb filter is employed .the proposed framework makes use of a novel guided search approach for multi - dimensional optimization in the multi - sensor control command space , for minimization of a task - driven control objective function .it also utilizes generalized covariance intersection ( gci ) method for multi - sensor fusion .a step - by - step algorithm was detailed for smc implementation of the proposed method with lmb filters running at each sensor node .numerical studies were presented for several scenarios where numerous controllable ( mobile ) sensors track multiple moving targets with different levels of observability .the results demonstrated good performance in controlling numerous sensors ( in terms of ospa errors ) .they also showed that our proposed method runs substantially faster than the traditional exhaustive search - based technique .indeed we showed that while the computational cost of traditional methods grow exponentially with increasing the number of sensors , our method has only second order computational complexity .this project was supported by the australian research council through arc discovery grant dp160104662 , as well as national nature science foundation of china grants 61673075 .r. p. s. mahler and t. r. zajic , `` probabilistic objective functions for sensor management , '' in _ proceedings of spie , signal processing , sensor fusion , and target recognition _ , vol .5429 , orlando , 2004 , conference proceedings , pp .233244 . c. fantacci , b. n. vo , b. t. vo , g. battistelli , and l. chisci , `` consensus labeled random finite set filtering for distributed multi - object tracking , '' _ arxiv e - prints _ , 2016 , http://arxiv.org/abs/1501.01579 .b. n. vo , s. singh , and a. doucet , `` sequential monte carlo methods for multi - target filtering with random finite sets , '' _ ieee transactions on aerospace and electronic systems _ , vol .41 , no . 4 , pp . 12241245 , 2005 .b. t. vo , b. n. vo , and a. cantoni , `` the cardinality balanced multi - target multi - bernoulli filter and its implementations , '' _ ieee transactions on signal processing _ , vol .57 , no . 2 , pp . 
409423 , 2009 .b. n. vo , b. t. vo , n. t. pham , and d. suter , `` joint detection and estimation of multiple objects from image observations , '' _ ieee transactions on signal processing _58 , no .10 , pp . 51295141 , 2010 .r. hoseinnezhad , b. n. vo , and b. t. vo , `` visual tracking in background subtracted image sequences via multi - bernoulli filtering , '' _ ieee transactions on signal processing _ ,61 , no . 2 ,pp . 392397 , 2013 .f. papi , b. n. vo , b. t. vo , c. fantacci , and m. beard , `` generalized labeled multi - bernoulli approximation of multi - object densities , '' _ ieee transactions on signal processing _ , vol .63 , no . 20 , pp .54875497 , 2015 .m. beard , b. t. vo , b. n. vo , and s. arulampalam , `` sensor control for multi - target tracking using cauchy - schwarz divergence , '' in _ fusion 2015 _ , washton , d.c , 2015 , conference proceedings , pp .937944 .c. kreucher , a. o. hero iii , and k. kastella , `` a comparison of task driven and information driven sensor management for target tracking , '' in _ proceedings of cdc - ecc 05 _ , seville , 2005 , conference proceedings , pp . 40044009 .g. battistelli , l. chisci , c. fantacci , a. farina , and r. p. s. mahler , `` distributed fusion of multitarget densities and consensus phd / cphd filters , '' in _ proceedings of spie , signal processing , sensor / information fusion , and target recognition _ , vol . 9474 , baltimore , 2015 , conference proceedings , pp .g. battistelli , l. chisci , c. fantacci , a. farina , and a. graziano , `` consensus cphd filter for distributed multitarget tracking , '' _ ieee journal of selected topics in signal processing _ , vol . 7 , no . 3 , pp .508520 , 2013 .w. bailu , y. wei , r. hoseinnezhad , l. suqi , k. lingjiang , and y. xiaobo , `` distributed fusion with multi - bernoulli filter based on generalized covariance intersection , '' _ ieee transactions on signal processing _ , vol .65 , pp . 242255 , jan 2017 .
sensor management in multi - object stochastic systems is a theoretically and computationally challenging problem . this paper presents a novel approach to the multi - target multi - sensor control problem within the partially observed markov decision process ( pomdp ) framework . we model the multi - object state as a labeled multi - bernoulli random finite set ( rfs ) , and use the labeled multi - bernoulli filter in conjunction with minimizing a task - driven control objective function : posterior expected error of cardinality and state ( peecs ) . a major contribution is a guided search for multi - dimensional optimization in the multi - sensor control command space , using coordinate descent method . in conjunction with the generalized covariance intersection method for multi - sensor fusion , a fast multi - sensor algorithm is achieved . numerical studies are presented in several scenarios where numerous controllable ( mobile ) sensors track multiple moving targets with different levels of observability . the results show that our method works significantly faster than the approach taken by a state of art method , with similar tracking errors . wang : multi - sensor control for multi - object bayes filters partially observed markov decision process , multi - target tracking , random finite sets , labeled multi - bernoulli filter , coordinate descent .
a large proportion of fish species are characterized by elongated bodies that swim forward by flapping sideways .these sideways oscillations produce periodic propulsive forces that cause the fish to swim along time - periodic trajectories , .the kinematics of the flapping motion and the resulting swimming performance , as well as their relationship to the swimmer s morphology , have been the subject of numerous studies , see , for example , .however , little attention has been given to the stability of underwater locomotion .the importance of motion stability and its mutual influence on body morphology and behavior is noted in the work of weihs , see and references therein .weihs uses clever arguments and simplifying approximations founded on a deep understanding of the equations governing underwater locomotion to obtain educated estimates " of the stability of swimming fish without ever solving the complicated set of equations .the swimming motion is said to be unstable if a perturbation in the conditions surrounding the swimmer s body result in forces and moments that tend to increase the perturbation , and it is stable if these emerging forces tend to reduce such perturbations or keep them bounded so that the fish returns to or stays near its original periodic swimming . stability may be achieved actively or passively .active stabilization requires neurological control that activate musculo - skeletal components to compensate for external perturbations acting against stability . on the other hand , passive stability of the locomotion gaits requires no additional energy input by the fish . in this sense, one can argue that stability reduces the energetic cost of locomotion .therefore , from an evolutionary perspective , it seems reasonable to conjecture that stability would have a positive selection value in behaviors such as migration over prolonged distances and time .however , stability limits maneuverability and body designs / flapping motions that are adapted for stable swimming are not suitable for high maneuverability and vice versa , . in this work , we study stability of periodic swimming using a simple model consisting of a planar elliptic body undergoing prescribed flapping motion in unbounded potential flow . by flapping motion ,we mean periodic heaving and pitching of the body as shown in figure [ fig : model ] .we formulate the equations of motion governing the resulting locomotion and examine its efficiency .we then investigate the stability of this motion using floquet theory ( see ) .we find that stability depends in a non - trivial way on the body geometry ( aspect ratio of the ellipse ) as well as on the amplitudes and phases of the flapping motion .most remarkable is the ability of the system to transition from stability to instability and back to stability as we vary some of these parameters .this model is reminiscent of the three - link swimmer used by kanso __ to examine periodic locomotion in potential flow , see .the three - link swimmer undergoes periodic shape deformations that result in coupled heaving , pitching and locomotion . 
here, we ignore body deformations for the sake of simplicity and prescribe the heaving and pitching motion directly .note that the three - link swimmer was also used by jing & kanso to study the effect of body elasticity on the stability of the coast motion of fish ( motion at constant speed ) .they found that elasticity of the body may lead to passive stabilization of the ( otherwise unstable ) coast motion , see .the present model consisting of a single elliptic body is mostly similar to the system studied by spagnolie _( 2010 ) both experimentally and numerically , see . in the latter ,an elliptic body undergoes passive pitching ( via a torsional spring ) subject to prescribed periodic heaving in viscous fluid , whereas in our model both the pitching and heaving motions are prescribed and the fluid medium is inviscid . despite these differences , the two models exhibit qualitatively similar behavior as discussed in section [ sec : discussion_and_conclusion ] .the paper is organized as follows . in section [ sec : problem_setup ] , we formulate the equations of motion governing the locomotion of a periodically flapping body in unbounded potential flow .we analyze the body s locomotion and efficiency when subject to small amplitude flapping motion in section [ sec : small_amplitude_actuations ] , and consider the more general case of finite amplitude flapping in section [ sec : locomotion ] . in section [ sec : stability_of_periodic_solution ] , we assess the stability of the periodic locomotion using floquet theory . the main findings and their relevance to biolocomotion are discussed in section [ sec : discussion_and_conclusion ] .and is submerged in unbounded potential fluid .motion is observed in inertial frame with position of mass center given by and orientation by .fish flaps in - and -directions , and propels in -direction.,scaledwidth=40.0% ] consider a planar elliptic body with semi - major axis and semi - minor axis , submerged in unbounded potential flow that is at rest at infinity .the elliptic body is neutrally buoyant , that is to say , the body and fluid densities are equal to .its mass is given by , and its moment of inertia about the center of mass is .let denote the position of the mass center with respect to a fixed inertial frame and let denote the orientation angle of the ellipse measured from the positive -direction to the ellipse s major axis , see figure [ fig : model ] .the linear and angular velocities are given by and , respectively , where the dot correspond to derivative with respect to time . in order to emulate the flapping motion of a swimming body, we assume that and vary periodically in time due to some periodic flapping force and flapping moment generated by the swimming body .note that in the case of a body swimming by deforming itself , and are a result of the body deformation . here , we do not account for the body deformation but rather prescribe and as periodic functions of time .namely , we set and solve for the resulting locomotion in the -direction .the equations governing the motion of the flapping body are basically kirchhoff s equations expressed in inertial frame and subject to forcing and in the - and -directions , that is , and where , and are the hydrodynamic forces and moment acting on the body . 
for motions in potential flow , , and be obtained using a classic procedure , \ddot{x } + \frac{1}{2}(m_2-m_1 ) \ddot{y } \sin2\theta - ( m_2-m_1 ) ( \dot{x } \sin 2 \theta - \dot{y}\cos 2\theta)\dot{\theta},\\[1ex ] f_y & = \frac{1}{2}\left [ -(m_1+m_2 ) - ( m_2-m_1 ) \cos2\theta\right]\ddot{y } + \frac{1}{2}(m_2-m_1 ) \ddot{x } \sin2\theta + ( m_2-m_1 ) ( \dot{x } \cos 2 \theta + \dot{y } \sin 2\theta)\dot{\theta},\\[1ex ] \tau & = -j \ddot{\theta } + \frac{1}{2}(m_2- m_1 ) \left(\dot{x}^2\sin 2\theta - \dot{y}^2 \sin 2\theta - 2 \dot{x}\dot{y } \cos 2\theta \right ) .\end{split}\ ] ] here , are , respectively , the added mass of the elliptic body along its major and minor directions and is the added moment of inertia , see , e.g. , . substituting into and , one can use to solve for , and to compute the forcing and needed in order for the body to achieve the prescribed flapping motion in .the total angular momentum of the body - fluid system is given by whereas the total linear momentum can be written as the momentum is conserved since there is no external forcing applied in the direction .therefore , one has , which yields .\label{eq : xdot}\ ] ] that is to say , equation admits an integral of motion whose value is given by the above equation .the distance traveled by the body s center of mass in one period of flapping , , is given by the total kinetic energy of the body - fluid system is given by by the work - energy theorem , the time derivative of the kinetic energy is equal to the total power input by the flapping force and moment .thus , the work done by flapping is equivalent to the kinetic energy . to this end, the average work done in one period is given by we define the _ cost of locomotion _ as the average work divided by the average distance over one period , namely hence , smaller means less energy expenditure for a fixed distance traveled .it is convenient to denote the _ efficiency _ of the system as the inverse of the cost of locomotion , that is . before we proceed to examining the locomotion and efficiency of such swimmer , we non - dimensionalize the system by scaling time with , length with and mass with .the variables are subsequently written in dimensionless form .the important parameters for this system are : aspect ratio , as well as the flapping amplitudes , , and phases and .initial points are marked by .right : trajectories of mass center in plane with snapshots of body in motion overlaid . 
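as an illustration of this integral of motion, the net displacement per flapping period can be computed numerically from the prescribed heaving and pitching. the sketch below assumes the standard added masses of an ellipse (m1 = rho*pi*b^2 and m2 = rho*pi*a^2), a unit flapping period, and that the system starts from rest so that the conserved x-momentum is zero; the resulting expression for the x-velocity should be checked against the paper's own equation, and the amplitudes and phases are placeholder choices.

    import numpy as np

    rho, a, b = 1.0, 2.0, 0.5                 # unit density; semi-axes with gamma = a/b = 4
    m = rho * np.pi * a * b                   # neutrally buoyant body mass
    m1, m2 = rho * np.pi * b**2, rho * np.pi * a**2   # ellipse added masses (assumed)

    eps_y, eps_t = 0.6, np.pi / 6             # heaving and pitching amplitudes (assumed)
    phi_y, phi_t = np.pi / 2, 0.0             # phases (assumed)

    def flapping_rates(t):
        # prescribed pitch angle and heaving rate over a unit flapping period
        th = eps_t * np.sin(2 * np.pi * t + phi_t)
        yd = 2 * np.pi * eps_y * np.cos(2 * np.pi * t + phi_y)
        return th, yd

    def xdot(t):
        # x-velocity from zero total x-momentum of the body-fluid system (assumed form)
        th, yd = flapping_rates(t)
        return ((m2 - m1) * np.sin(th) * np.cos(th) * yd
                / (m + m1 * np.cos(th) ** 2 + m2 * np.sin(th) ** 2))

    ts = np.linspace(0.0, 1.0, 2001)
    dx = np.trapz([xdot(t) for t in ts], ts)
    print("net displacement per flapping period:", dx)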
simulationsare for and various aspect ratios : ( a ) , ( b ) , ( c ) , ( d ) .,scaledwidth=55.0% ] plane .initial points are marked by .right : trajectories of mass center in plane .simulations are for and various heaving amplitudes : ( a ) , ( b ) , ( c ) , ( d ) .,scaledwidth=55.0% ] plane .initial points are marked by .right : trajectories of mass center in plane .simulations are for and various pitching amplitudes : ( a ) , ( b ) , ( c ) , ( d ) .,scaledwidth=45.0% ] plane .initial points are marked by .middle and right : trajectories of mass center in plane .simulations are for and various combinations of phases : ( a ) , ( b ) , ( c ) , ( d ) , ( e ) , ( f ) , ( g ) , ( h ) .,scaledwidth=75.0% ] consider the case with small flapping amplitudes and .let and where both and are of the same order of magnitude .one gets and , but and are not necessarily small .use the approximation and and substitute into to obtain .\label{eq : xlin}\ ] ] clearly , the velocity in the direction depends on the aspect ratio and .this suggests that as long as , is the same function of time .its magnitude is of order . in other words , for small amplitude flapping ,the motion in direction is small compared to the flapping motion in and .approximate expressions of and are obtained by substituting and into , for small amplitude flapping , we can express the cost of locomotion in closed form }{\dfrac{a - b}{b } \epsilon_y\epsilon_\theta |\sin(\phi_y + \phi_\theta)| } = \dfrac{\pi \left[\gamma(\gamma + 1)\epsilon_y^2 + 2(\gamma^2 + 1)\epsilon_\theta^2\right]}{2\gamma(\gamma - 1 ) \epsilon_y\epsilon_\theta |\sin(\phi_y + \phi_\theta)|}. \label{eq : elin}\ ] ] hence , to minimize ( or , equivalently , to maximize efficiency ) , one needs the closed form expressions do not hold for large amplitudes and where the efficiency needs to be analyzed numerically , as done in the next section .we examine the swimming trajectories and their dependence on the following parameters : aspect ratio , amplitudes and , and phases and .the swimming motion is given by and , where the latter is integrated numerically to get .consider the case where , , , and consider various aspect ratios , as shown in figure [ fig : periodicmotiongamma ] .note that as we vary the aspect ratio , the total area of the elliptic body remains constant ( this is guaranteed by the way we non - dimensionlize length using ) .as expected , the net locomotion is almost zero when the elliptic body is close to a circular shape ( ) and it reaches a maximum as the elliptic body approaches a flat plate ( ) . in figure[ fig : periodicmotionay ] , is set to 4 and is varied .one can see that the net locomotion depends linearly on , which is also evident from . in figure[ fig : periodicmotionatheta ] , different cases of are shown .the net locomotion depends nonlinearly on .interestingly , the trajectories that correspond to and are almost identical , whereas for the locomotion is in the negative direction . motions for various phases and are shown in figures [ fig : periodicmotionphi ] . notice the shape and orientation of the closed path in the parameter space depend on the difference in phase .this can be readily verified by eliminating from and expressing the closed path in the plane as as varies , the closed path in the plane is elliptic , except for ( ) in which case it is a segment of the straight line given by . 
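the closed-form expression above can also be used directly to locate efficient small-amplitude gaits. the sketch below evaluates it and, by differentiating the displayed formula, recovers the conditions for a minimum: the phases must satisfy |sin(phi_y + phi_theta)| = 1, and the amplitude ratio must equal sqrt(gamma*(gamma+1)/(2*(gamma^2+1))).

    import numpy as np

    def E_lin(gamma, eps_y, eps_t, phi_y, phi_t):
        # small-amplitude cost of locomotion, as displayed in eq. (elin)
        num = np.pi * (gamma * (gamma + 1) * eps_y ** 2 + 2 * (gamma ** 2 + 1) * eps_t ** 2)
        den = 2 * gamma * (gamma - 1) * eps_y * eps_t * np.abs(np.sin(phi_y + phi_t))
        return num / den

    # the phase condition and the optimal amplitude ratio follow from setting the
    # derivative of E_lin with respect to eps_t to zero for fixed eps_y
    gamma, eps_y = 4.0, 0.1
    ratio = np.sqrt(gamma * (gamma + 1) / (2 * (gamma ** 2 + 1)))
    print(ratio, E_lin(gamma, eps_y, ratio * eps_y, np.pi / 2, 0.0))

for gamma = 4 the optimal pitch-to-heave amplitude ratio from this expression is about 0.77.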
from , one has that possesses the following symmetries whereas the flapping motion in has the following symmetries based on these symmetries , one can immediately conclude that , when all other parameters are held fixed , motions that correspond to and are mirror images of each other : their distances and energies are the same , as seen from . for with ,one gets the same path in the space .when tracing the same path in the -plane ( but starting at different initial points ) , the resulting trajectories in the plane are similar ( with different initial positions ) .note that , in general , flapping motions that trace a straight line in the -plane do not correspond to zero net locomotion in the -plane , except for and .this is evident from the example of in figure [ fig : periodicmotionphi] .the locomotion here is not a result of a geometric phase but a dynamic phase , see . as a function of : ( a ) aspect ratio ,( b ) heaving amplitude , ( c ) pitching amplitude .the base parameter values are set to , , , .solid lines are nonlinear numerical solutions , while dashed lines are based on small amplitude approximation given in .,title="fig:",scaledwidth=28.0% ] as a function of : ( a ) aspect ratio , ( b ) heaving amplitude , ( c ) pitching amplitude .the base parameter values are set to , , , .solid lines are nonlinear numerical solutions , while dashed lines are based on small amplitude approximation given in .,title="fig:",scaledwidth=28.0% ] as a function of : ( a ) aspect ratio , ( b ) heaving amplitude , ( c ) pitching amplitude .the base parameter values are set to , , , .solid lines are nonlinear numerical solutions , while dashed lines are based on small amplitude approximation given in .,title="fig:",scaledwidth=28.0% ] we now compute the average work and cost of locomotion .ideally , one would like to find optimal parameter values that minimize ( maximize efficiency ) and/or maximize ( see , for example ) . instead of minimizing over the five dimensional parameter space , we study the dependence of on the system s parameters by varying one parameter at a time . in figure[ fig : elinnonlin ] , we set and vary and , respectively .solid lines correspond to the numerical nonlinear solutions and dashed lines are obtained by substituting the parameters into .figure [ fig : elinnonlin] shows that , for , there exist a optimal value of , whereas the small amplitude approximation in predicts that is a decreasing function of .figure [ fig : elinnonlin] shows an optimal value of and that the small amplitude results qualitatively follows the nonlinear behavior of .this is because the work depends quadratically on , the displacement depends linearly on and that the small amplitude approximation in preserves the form of dependence on this parameter .however , figure [ fig : elinnonlin] shows that when varying , the small amplitude results provide good approximation of the nonlinear efficiency only up to . in figure[ fig : effvarygamma ] to [ fig : effvaryatheta ] , we examine the dependence of on by discretizing the domain \times [ -\pi \,,\ , \pi] ] .note that the parameters that minimize , thus maximize efficiency , are approximately , and , where one example of the optimal phases is , with corresponding locomotion shown in figure [ fig : periodicmotiongamma ] . 
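the small-amplitude expression for the cost of locomotion given in can be evaluated directly on the same mesh of phases used in the figures. the sketch below reads the coefficients off that closed form, pi [ gamma (gamma + 1) eps_y^2 + 2 (gamma^2 + 1) eps_th^2 ] / [ 2 gamma (gamma - 1) eps_y eps_th |sin(phi_y + phi_th)| ], sweeps ( phi_y , phi_th ) over [ -pi , pi ]^2, and checks that the minimizers satisfy |sin(phi_y + phi_th)| = 1; the amplitude values are arbitrary choices for the example.

import numpy as np

def cost_small_amplitude(gamma, eps_y, eps_th, phi_y, phi_th):
    # small-amplitude closed form for the cost of locomotion
    num = np.pi * (gamma * (gamma + 1.0) * eps_y**2 + 2.0 * (gamma**2 + 1.0) * eps_th**2)
    den = 2.0 * gamma * (gamma - 1.0) * eps_y * eps_th * np.abs(np.sin(phi_y + phi_th))
    return num / den

gamma, eps_y, eps_th = 4.0, 0.2, 0.2
phis = np.linspace(-np.pi, np.pi, 181)
PY, PT = np.meshgrid(phis, phis, indexing="ij")
with np.errstate(divide="ignore"):
    E = cost_small_amplitude(gamma, eps_y, eps_th, PY, PT)

i, j = np.unravel_index(np.argmin(E), E.shape)
print(f"minimum E = {E[i, j]:.3f} at phi_y = {phis[i]:+.3f}, phi_theta = {phis[j]:+.3f}")
print(f"sin(phi_y + phi_theta) at the minimum = {np.sin(phis[i] + phis[j]):+.3f}")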
for this optimal motion ( the one whose locomotion is shown in figure [ fig : periodicmotiongamma ] ) , the pitching angle is zero when the heaving motion is maximum ( out of phase ) , which qualitatively agrees with the results in . the optimal aspect ratio agrees with the optimal shape aspect ratio obtained in the comprehensive optimization study in , and is representative of the aspect ratio of various carangiform swimmers such as bass ( ) in , tuna ( ) in and saithe ( ) in . the optimal heave-to-chord ratio ( where ) and maximum angle both agree with the optimal motions for the rigid flapping body ( ) given in . this is remarkable given the simplicity of our model in comparison to the models in .

[ figures : contour plots of the cost of locomotion over the plane of the phases , each evaluated on a mesh in [ -\pi , \pi ] \times [ -\pi , \pi ] , for various aspect ratios , heaving amplitudes and pitching amplitudes ; lower values correspond to higher efficiency . ]

we now turn to the stability of the periodic motion . the governing equations are rewritten in a form suitable for this analysis , where detailed expressions for the terms involved are listed in the appendix . in sections [ sec : problem_setup][sec : locomotion ] , we prescribed the flapping motion according to , used to solve for the horizontal motion , and to compute the forcing and needed to produce the prescribed flapping . the resulting motion , , and , as well as the forcing and , are periodic with period . we denote by the solution corresponding to such periodic motion . we study the stability of by introducing a small perturbation , while keeping and the same as those producing the periodic solution . in other words , we account for arbitrary perturbations in the fluid environment while keeping the same flapping forces , to check whether such perturbations destabilize the periodic trajectory .
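in practice, the stability check just described is a floquet computation: the linearized dynamics are integrated over one period starting from the identity matrix, and the eigenvalues of the resulting monodromy matrix are the characteristic multipliers discussed next. the sketch below is generic; the jacobian of the swimmer's linearized equations (whose entries are listed in the appendix) is represented here by a placeholder callable, and a toy damped, periodically forced oscillator is used only to exercise the routine.

import numpy as np
from scipy.integrate import solve_ivp

def monodromy_matrix(jacobian, period, dim, rtol=1e-9, atol=1e-12):
    # integrate the variational equation Phi' = J(t) Phi over one period,
    # starting from the identity; jacobian(t) returns the dim x dim jacobian
    # of the dynamics evaluated along the periodic orbit.
    def rhs(t, phi_flat):
        return (jacobian(t) @ phi_flat.reshape(dim, dim)).ravel()
    sol = solve_ivp(rhs, (0.0, period), np.eye(dim).ravel(), rtol=rtol, atol=atol)
    return sol.y[:, -1].reshape(dim, dim)

def characteristic_multipliers(jacobian, period, dim):
    return np.linalg.eigvals(monodromy_matrix(jacobian, period, dim))

# toy example (not the swimmer itself): x'' + 0.1 x' + (1 + 0.2 cos t) x = 0
def jac_toy(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.cos(t)), -0.1]])

mults = characteristic_multipliers(jac_toy, 2.0 * np.pi, 2)
print("characteristic multipliers:", np.round(mults, 4))
print("all on or inside the unit circle:", bool(np.all(np.abs(mults) <= 1.0 + 1e-9)))

the stability maps discussed below can be assembled by repeating this computation on a mesh of ( phi_y , phi_theta ) values and shading the points whose multipliers all lie on or inside the unit circle.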
, and , with phases varied on a mesh in \times [ -\pi \,,\ , \pi] ] .the stable regions are shaded areas , and the unstable regions are white areas .notice the reflection symmetry about and periodicity of in both and that we observed in the efficiency analysis is again seen in the stability plot .two examples with different stability characteristics are shown .the solid lines are unperturbed periodic solutions , and dashed lines are solutions with random initial perturbations with magnitude .clearly , the trajectory corresponding to parameters in the stable region remain close to the periodic trajectory for the integration time whereas that corresponding to parameters in the unstable region does not . in figure[ fig : eig ] , we examine the behavior of the real and imaginary parts of the characteristic multipliers as a function of for and . in other words ,we explore the behavior of s for , along the dashed line in the left plot of figure [ fig : basecase ] .one can see that two characteristic multipliers are always located at ( represented by ) .the other two are represented by .the dynamics changes from stable to unstable when the two conjugates collide at and split onto real axis . for the considered parameters , when varies from to , stability changes from unstable to stable and stable to unstable four times in total . , and varied from to : ( a ) real and imaginary parts in complex plane .two characteristic multipliers always locate at are represented by .the two complex conjugates are represented by .( b ) real and ( c ) imaginary parts of characteristic multipliers .motion becomes unstable when the two complex conjugates collide at and split on real axis.,title="fig:",scaledwidth=30.0% ] + , and varied from to : ( a ) real and imaginary parts in complex plane .two characteristic multipliers always locate at are represented by .the two complex conjugates are represented by .( b ) real and ( c ) imaginary parts of characteristic multipliers .motion becomes unstable when the two complex conjugates collide at and split on real axis.,title="fig:",scaledwidth=35.0% ] , and varied from to : ( a ) real and imaginary parts in complex plane .two characteristic multipliers always locate at are represented by .the two complex conjugates are represented by .( b ) real and ( c ) imaginary parts of characteristic multipliers .motion becomes unstable when the two complex conjugates collide at and split on real axis.,title="fig:",scaledwidth=35.5% ] we now examine the stability behavior as we change and , respectively .figure [ fig : varygamma ] shows stability regions for while varying . for bodies closer to circular shape ( ) , the motion is stable for all ( but this stability property is not very useful since the net displacement is almost zero ) .when , unstable regions start to appear .as increases , unstable regions grow while stable regions shrink .the total area of stable regions becomes minimum when , and the stable areas around persist .interestingly , as continues to increase , new stable regions start to emerge and grow from the previous unstable areas around .then , at these spots , unstable regions emerge and grow from the newly formed stable regions , and so on and so forth .the boundaries between stable and unstable regions around become blurry as becomes larger , and remain stable .this trend is reminiscent of the phenomenon observed in spagnolie et al . 
, in which the authors noticed the motion of an elliptic body subject to prescribed heaving and passive pitching goes through states from `` coherence to incoherence , and back again '' as the aspect ratio changes . note that the latter studies are in viscous fluid whereas the analysis here is for an inviscid fluid model . interestingly , this simplified model is able to capture , at least qualitatively , the behavior observed in .

[ figures [ fig : varygamma ] , [ fig : varyay ] and [ fig : varyatheta ] : stability regions in the plane of the phases , each evaluated on a mesh in [ -\pi , \pi ] \times [ -\pi , \pi ] , for various aspect ratios , heaving amplitudes and pitching amplitudes respectively ; shaded areas correspond to stable cases , white areas to unstable cases . ]

figure [ fig : varyay ] shows stability regions for while varying . for small , the body is mostly rotating , and the motion is stable for all but with no net locomotion .
as increases , unstable regions start to form around , as can be seen in figure [ fig : varyay](c ) . as continues to increase ,unstable regions grow while stable regions shrink .then , layers of stable / unstable regions start to form around . unlike in figure [ fig : varygamma ] , areas around do not remain stable .overall , the total area of stable regions decreases as becomes larger .interestingly , the area and shape of the stable regions depend nonlinearly on whereas the trajectory of the mass center depends _ linearly _ on . in figure [fig : varyatheta ] , we vary while keeping and .when is small , the whole plane in is stable but again does not result in net locomotion .as increases , unstable regions start to emerge and grow , while stable regions shrink but persist around , and as increases further , unstable regions start to emerge within the stable strips .this trend of switching from stability to instability and back to stability when varying parameters is very interesting .it suggests that such swimmers can change their stability character by changing their flapping motion , and thus can easily switch from stable periodic swimming to an unstable motion ( more maneuverable ) when they feel the need to , such as when evading a predator .based on this , one can conjecture that when it comes to live organisms , maneuverability and stability need not be thought of as disjoint properties , rather the organism may manipulate its motion in favor of one or the other depending on the task at hand .whether live organisms change their stability properties at will is yet to be investigated experimentall .we studied the locomotion , efficiency and stability of periodic swimming of fish using a simple planar model of an elliptic swimmer undergoing prescribed sinusoidal heaving and pitching in potential flow .we obtained expressions for the locomotion velocity for both small and finite flapping amplitudes , and showed how trajectories depend on key parameters , namely , aspect ratio , amplitudes and and phases and .efficiency is defined as the inverse of cost of locomotion .the dependence of on the parameters were shown for both small and finite amplitude flappings .we observed that the efficiency maximizing parameters are approximately and , where , whose values are in excellent agreement with results based on experimental and computational motions of flapping fish , see and references therein .we then studied the stability of periodic locomotion using floquet theory .to our best knowledge , besides the work of weihs which uses approximate arguments , this is the first work that rigorously studies the stability of periodic locomotion albeit in a simplified model .we focused on evaluating stability on the whole parameter space , and examined the effect of varying and .we observed that stable and unstable regions in the plane evolved as these parameters change .particularly noteworthy is the back and forth switching between stability and instability around the spots and .this switching is reminiscent to the observation in that the motion of a heaving and pitching foil switches from coherence to incoherence and back to coherence when varying the aspect ratio of foil . 
in our study, we found a similar behavior when varying not only the aspect ratio but also the flapping parameters .this indicates that such swimmer can change its stability character by changing its flapping motion , and thus can easily switch from stable periodic swimming to an unstable , yet more maneuverable , state .based on this , one could conjecture that , when it comes to live organisms , maneuverability and stability are not disjoint properties but may be manipulated depending on the needs of the organism . clearly , this statement is speculative until verified by experimental evidence . to date , little is known experimentally on the stability of underwater periodic motions , let alone the stability of biological swimmers .future extensions of this work will include the effects of body deformation and body elasticity , vortex shedding , and frequency of flapping on the observed stability of periodic swimming , as well as on motion efficiency such as in .and in the complex -plane is mapped to the exterior region of a circle with radius in the -plane .the mapping is given by .,scaledwidth=45.0% ] in potential flow , the fluid forces and moment can be obtained from the _ added - mass theory _ or from the _ extended blasius theorem _ . in this appendix, we present both derivations and show their equivalence .the exterior region of the ellipse in the complex -plane ( ) is mapped to the exterior region of a circle with radius in the -plane , see figure [ fig : mapping ] .the mapping is given by where .the complex potential of the fluid in -plane is given by , where is the velocity of the mass center mapped into the -plane .therefore , the forces and moment exerted by the surrounding fluid on a moving body are given by the _ extended blasius theorem _ . in -plane , + \rho a_{\mathcal{b } } \ddot{z}_o,\\[2ex ] \tau & = \frac{\rho}{2 } \text{re}\left [ 2 \dot{\bar{z}}_o \oint_{\partial\mathcal{b } } ( z - z_o ) \frac{\text{d}w}{\text{d}z}\text{d}z - \oint_{\partial\mathcal{b } } ( z - z_o)\left(\frac{\text{d}w}{\text{d}z}\right)^2\text{d}z + \frac{\text{d}}{\text{d}t}\left ( \oint_{\partial\mathcal{b } } |z - z_o|^2 \frac{\text{d}w}{\text{d}z } \text{d}z\right)\right ] , \end{split}\label{eq : blasius}\ ] ] where is the area of the ellipse , is the boundary of the body , and the reader is reminded that the densities of the body and fluid are both .notice that the last term in needs to be treated separately .all other integrals are analytic and , using residual theory , can be taken around an infinitely large circle instead of the boundary of the body , which greatly simplifies the calculations .for the last term in moment , since is not analytic , one can not use this technique .instead , it needs to be integrated on the boundary . 
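the constants in the mapping were lost in the extraction; a standard choice, consistent with the r^4 - c^4 factors that appear in the jacobian below, is the joukowski-type map z = zeta + c^2 / zeta with circle radius r = ( a + b ) / 2 and c^2 = ( a^2 - b^2 ) / 4, which sends |zeta| = r to the ellipse with semi-axes a and b. the sketch below only verifies this assumed correspondence numerically; it is not a quotation of the paper's formula.

import numpy as np

a, b = 2.0, 0.5                    # semi-major and semi-minor axes
r = 0.5 * (a + b)                  # assumed radius of the circle in the zeta-plane
c2 = 0.25 * (a**2 - b**2)          # assumed mapping constant c^2

phi = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
zeta = r * np.exp(1j * phi)        # points on the circle |zeta| = r
z = zeta + c2 / zeta               # image under the assumed map

# the image should satisfy the ellipse equation (x/a)^2 + (y/b)^2 = 1
residual = (z.real / a) ** 2 + (z.imag / b) ** 2 - 1.0
print("max |ellipse residual| over the image curve:", np.max(np.abs(residual)))
print("max real part (should equal a):", np.max(z.real))
print("max imaginary part (should equal b):", np.max(z.imag))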
substituting and into ,one obtains the hydrodynamic forces and moment given by since the hydrodynamical forcing terms are equivalent with the expressions given in , which is repeated here for completeness , \ddot{x } + \frac{1}{2}(m_2-m_1 ) \ddot{y } \sin2\theta - ( m_2-m_1 ) ( \dot{x } \sin 2 \theta - \dot{y}\cos 2\theta)\dot{\theta},\\[1ex ] f_y & = \frac{1}{2}\left [ -(m_1+m_2 ) - ( m_2-m_1 ) \cos2\theta\right]\ddot{y } + \frac{1}{2}(m_2-m_1 ) \ddot{x } \sin2\theta + ( m_2-m_1 ) ( \dot{x } \cos 2 \theta + \dot{y } \sin 2\theta)\dot{\theta},\\[1ex ] \tau & = -j \ddot{\theta } + \frac{1}{2}(m_2- m_1 )\left(\dot{x}^2\sin 2\theta - \dot{y}^2 \sin 2\theta - 2 \dot{x}\dot{y } \cos 2\theta \right ) .\end{split}\ ] ] and the governing equations are again repeated here when expressed in a body - fixed frame , the hydrodynamic forces and moment take a simpler form in terms of the _ added mass coefficients _ and .roughly speaking , as a body moves through potential flow , the body - fluid system behaves as an augmented body with modified mass and inertia that account for the added mass and added inertia due to the presence of the fluid .the added mass and inertia depend only on the geometry of the body and direction of motion .the kirchhoff s equations of motion in terms of the body - fixed frame variables are given by where the body frame velocities and forces are given by and is the same form in inertial frame . transforming via a rotation to inertial frame ,one obtains the equations given in and , and the hydrodynamical forcing terms are given by .it is then straightforward to verify that and are equivalent . for completeness , we rewrite equation where and 2\dot{\theta}\dot{x } \cos2\theta + 2 \dot{\theta}\dot{y } \sin2\theta\\[1.2ex ] \dot{\theta}\\[1.2ex ] ( \dot{x}^2 - \dot{y}^2 ) \sin2\theta - 2\dot{x}\dot{y}\cos2\theta \end{pmatrix } , \qquad \mathbf{f}^{\rm flap } = \dfrac{1}{2\rho\pi c^2 } \begin{pmatrix } 0\\[1.2ex ] f^{\rm flap}\\[1.2ex ] 0\\[1.2ex ] \tau^{\rm flap } \end{pmatrix}.\ ] ] these equations can be rewritten as one can linearize above equation and obtain the entries of the jacobian are given by \dfrac{2 c^2 \dot{\theta}(c^2 - r^2 \cos2\theta)}{-r^4 + c^4 } & \dfrac{2 r^2 c^2 \dot{\theta}\sin2\theta } { r^4 - c^4 } & \mathbb{j}_{23 } & \mathbb{j}_{24}\\[1.3ex ] 0 & 0 & 0 & 1\\[1.3ex ] \mu\alpha & -\mu\beta & \mu\left[(\dot{x}^2 - \dot{y}^2)\cos2\theta + 2\dot{x}\dot{y}\sin2\theta\right ] & 0 \end{pmatrix},\ ] ] where , \quad \mathbb{j}_{14 } = \frac{2c^4}{r^4 - c^4}\left[-\dot{x } r^2 \sin2\theta + \dot{y } ( c^2 + r^2\cos2\theta ) \right],\\\mathbb{j}_{23 } & = \dfrac{c^2}{\rho\pi ( r^4 - c^4)}\left[f^{\rm flap } \sin2\theta - 4 r^2 \rho \pi \alpha \dot{\theta}\right ] , \quad \mathbb{j}_{24 } = \frac{2c^4}{r^4 - c^4}\left[\dot{x } ( c^2 - r^2\cos2\theta ) - \dot{y } r^2 \sin2\theta\right ] .\end{split}\ ] ]the authors would like to thank dr . andrew a. tchieu and professor paul k. newton for the enlightening discussions .the work of ek is partially supported by the national science foundation through the career award cmmi 06 - 44925 and the grant ccf 08 - 11480 .
most aquatic vertebrates swim by lateral flapping of their bodies and caudal fins . while much effort has been devoted to understanding the flapping kinematics and its influence on the swimming efficiency , little is known about the stability ( or lack of ) of periodic swimming . it is believed that stability limits maneuverability and body designs / flapping motions that are adapted for stable swimming are not suitable for high maneuverability and vice versa . in this paper , we consider a simplified model of a planar elliptic body undergoing prescribed periodic heaving and pitching in potential flow . we show that periodic locomotion can be achieved due to the resulting hydrodynamic forces , and its value depends on several parameters including the aspect ratio of the body , the amplitudes and phases of the prescribed flapping . we obtain closed - form solutions for the locomotion and efficiency for small flapping amplitudes , and numerical results for finite flapping amplitudes . we then study the stability of the ( finite amplitude flapping ) periodic locomotion using floquet theory . we find that stability depends nonlinearly on all parameters . interesting trends of switching between stable and unstable motions emerge and evolve as we continuously vary the parameter values . this suggests that , for live organisms that control their flapping motion , maneuverability and stability need not be thought of as disjoint properties , rather the organism may manipulate its motion in favor of one or the other depending on the task at hand .
this project is part of a broader ongoing investigation into the use of methods from data analysis to identify the presence of structures and relations between the syntactic parameters of the world languages , considered either globally across all languages , or within specific language families and in comparative analysis between different families .we analyze the sswl database of syntactic structures of world languages , using methods from _ topological data analysis_. after performing principal component analysis to reduce the dimensionality of the data set , we compute the persistent homology .the generators behave erratically when computed over the entire set of languages in the database . however, if restricted to specific language families , non - trivial persistent homology appears , which behaves differently for different families .we focus our analysis on the two largest language families covered by the sswl database : the niger - congo family and the indo - european family .we show that the indo - european family has a non - trivial persistent generator in the first homology . by performing cluster analysis, we show that the four major language families in the database ( indo - european , niger - congo , austronesian , afro - asiatic ) exhibit different cluster structures in their syntactic parameters .this allows us to focus on specific cluster filtering values , where other non - trivial persistent homology can be found , in both the indo - european and the niger - congo cases .this analysis shows that the indo - european family has a non - trivial persistent generator of the first homology , and two persistent generators of the zeroth homology ( persistent connected components ) , with substructures emerging at specific cluster filtering values .the niger - congo family , on the other hand , does not show presence of persistent first homology , and has one persistent connected component .we discuss the possible linguistic significance of persistent connected components and persistent generators of the first homology .we propose an interpretation of persistent components in terms of subfamilies , and we analyze different possible historical linguistic mechanisms that may give rise to non - trivial persistent first homology .we focus on the non - trivial persistent first homology generator in the indo - european family and we try to trace its origin in the structure of the phylogenetic network of indo - european languages .the first hypothesis we consider is the possibility that the non - trivial loop in the space of syntactic parameters may be a reflection of the presence of a non - trivial loop in the phylogenetic network , due to the historical anglo - norman bridge " connecting french to middle english , hence creating a non - trivial loop between the latin and the germanic subtrees .however , we show by analyzing the syntactic parameters of these two subtrees alone that the persistent first homology is not coming from this part of the indo - european family .we show that it is also not coming from the indo - iranian branch .moreover , we show that adding or removing the hellenic branch from the remaining group of indo - european languages causes a change in both the persistent first homology and the number of persistent component .this work was performed within the activities of the last author s mathematical and computational linguistics lab and cs101/ma191 class at caltech .the last author is partially supported by nsf grants dms-1007207 , dms-1201512 , and phy-1205440 
. the idea of codifying different syntactic structures through _ parameters _ is central to the principles and parameters model of syntax , , , within generative linguistics . in this approach , one associates to a language a string of binary ( or valued ) variables , the syntactic parameters , that encode many features of its syntactic structures . examples of such parameters include _ subject verb _ , which has the value when in a clause with an intransitive verb the order subject verb can be used ; _ noun possessor _ , which has value when a possessor can follow the noun it modifies ; _ initial polar q marker _ , which has value when a direct yes / no question is marked by a clause-initial question marker ; etc . the `` syntactic structures of the world 's languages '' ( sswl ) database , which we used in this investigation , includes a set of different parameters , ( partially ) mapped for a set of of the known world languages .

[ figure [ allling1 ] : barcode graphs for the set of sswl languages at two different thresholds of parameters known , with the corresponding percentages of variance preserved , and , in the third graph , for a random subset of 15 languages . ]

the comparative study of syntactic structures across different world languages plays an important role in linguistics , see for a recent extensive treatment . in particular , in this study , we focus on data of syntactic parameters for two of the major families of world languages : the indo - european family and the niger - congo family . these are the two families that are best represented in the sswl database , which includes 79 indo - european languages and 49 niger - congo languages . the niger - congo family is the largest language family in the world ( by the number of languages it comprises ) . general studies of syntactic structures of niger - congo languages are available , see for instance , , though many of the languages within this family are still not very well mapped when it comes to their syntactic parameters in the sswl database . the indo - european family , on the other hand , is very extensively studied , and more of the syntactic parameters are mapped . despite this difference , the data available in the sswl database provide enough material for a comparative data analysis between these two families . the point of view based on syntactic parameters has also come to play a role in the study of historical linguistics and language change , see for instance . an excellent expository account of the parametric approach to syntax is given in . one of the sources of criticism of the principles and parameters model is the lack of a good understanding of the space of syntactic parameters , .
in particular, the theory does not clearly identify a set of independent binary variables that can be thought of as a universal set of parameters " , and relations between syntactic parameters are not sufficiently well understood .it is only in recent years , however , that accessible online databases of syntactic structures have become available , such as the wals database of or the sswl database .the existence of databases that record syntactic parameters across different world languages for the first time makes them accessible to techniques of modern _ data analysis_. our hope is that a computational approach , based on various data analysis techniques applied to the syntactic parameters of world languages , may help the investigation of possible dependence relations between different parameters and a better understanding of their overall structure . in the present study , we focused on the data collected in the sswl database , and on _ topological data analysis _ based on _persistent homology_. the structures we observe do not , at present , have a clear explanation in terms of linguistic theory and of the principles and parameters model of syntax .the presence of persistent homology in the syntactic parameter data , and its different behavior for different language families begs for a better understanding of the formation and persistence of topological structures from the historical linguistics viewpoint , and from the viewpoint of syntactic theory ., title="fig : " ] , title="fig : " ] , title="fig : " ] , title="fig : " ]an important and fast developing area of data analysis , in recent years , has been the study of high dimensional structures in large sets of data points , via topological methods , see , , .these methods of _ topological data analysis _ allow one to infer global features from discrete subsets of data as well as find commonalities of discrete sub - objects from a given continuous object .the techniques developed within this framework have found applications in fields such as pure mathematics ( geometric group theory , analysis , coarse geometry ) , as well as in other sciences ( biology , computer science ) , where one has to deal with large sets of data .topology is very well - suited in tackling these problems , being qualitative in nature .specifically , topological data analysis achieves its goal by transforming the data set under study into a family of simplicial complexes , indexed by a proximity parameter .one analyzes said complexes by computing their _ persistent homology _ , and then encoding the persistent homology of the data set in the form of a parametrized version of a betti number called a _barcode graph_. such graphs exhibit explicitly the number of connected components and of higher - dimensional holes in the data .we refer the reader to , , for a general overview and a detailed treatment of topological data analysis and persistent homology . 
as an example, persistent homology was used recently to study the topology of a space of 3d images , where the authors determined that the barcode representation from persistent homology matched the homology of a klein bottle .suppose given a set of points in some euclidean space .let denote the euclidean distance function in .the vietoris - rips complex of scale , over a field , is defined as the chain complex whose space of -simplices corresponds to the -vector space spanned by all the unordered -tuples of points where each pair has distance .the boundary maps , with , are the usual ones determined by the incidence relations of and -dimensional simplices . for ,one denotes by the -th homology with coefficients in of the vietoris - rips complex . when the scale varies , one obtains a system of inclusion maps between the vietoris - rips complexes , , for . by functoriality of homology , these maps induce corresponding morphisms between the homologies , .a homology class in that is not in the image of is a birth ; a nontrivial homology class in that maps to the zero element of is a death , and a nontrivial homology class in that maps to a nontrivial homology class in is said to persist . mapping the deaths , births , and persistence of a set of generators of the homology ,as the radius grows gives rise to a barcode graph for the betti numbers of these homology groups .those homology generators that survive only over short intervals of radii are attributed to noise , while those that persist for longer intervals are considered to represent actual structure in the data set .when we analyze the persistent topology of different linguistic families ( see the detailed discussion of results in [ toplingsec ] ) , we find different behaviors , in the number of persistent generators in both and .as typically happens in many data sets , the generators for with behave too erratically to identify any meaningful structure beyond topological noise . in general , the rank of the -th homology group of a complex counts the number of -dimensional holes " that can not be filled by an -dimensional patch .in the topological analysis of a point cloud data set , the presence of a non - trivial generator of the at a given scale of the vietoris - rips complex implies the existence of a set of data points that is well described by an -dimensional set of parameters , whose shape in the ambient space encloses an -dimensional hole , which is not filled by other data in the same set . in this sense ,the presence of generators of persistent homology reveal the presence of structure in the data set . in our case, the database provides a data point for each recorded world language ( or for each language within a given family ) , and the data points live in the space of syntactic parameters , or in a space of a more manageable lower dimension after performing principal component analysis . in this setting , the presence of an -dimensional hole " in the data ( a generator of the persistent ) shows that ( part of ) the data cluster around an -dimensional manifold that is not filled in " by other data points .possible coordinates on such -dimensional structures represent relations among the syntactic parameters , over certain linguistic ( sub)families .since the only persistent generators we encountered are in the and , we discuss more in detail their respective meanings . 
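before discussing the meaning of these generators, it may help to make the degree-zero part of the computation concrete. for the vietoris-rips filtration, every point is born as its own component at scale zero, and a component dies at the edge length that first merges it into another one, which is exactly single-linkage clustering. the short self-contained sketch below (python, not the software used in this work) computes these h0 intervals with a union-find structure and counts the long-lived bars on a toy two-cluster point cloud.

import numpy as np

def h0_barcode(points):
    # h0 persistence of the vietoris-rips filtration of a finite point cloud:
    # process edges by increasing length and record the length at each merge.
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(length)          # one component dies at this scale
    # n-1 finite intervals, plus one component that persists forever
    return [(0.0, dth) for dth in deaths] + [(0.0, np.inf)]

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, (15, 2)),     # one tight cluster
                   rng.normal(3.0, 0.1, (15, 2))])    # a second, distant cluster
bars = h0_barcode(cloud)
print("long-lived h0 intervals:", sum(1 for _, death in bars if death > 1.0))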
the rank of the persistent counts the number of connected components of the vietoris - rips complex .it reveals the presence of clusters of data , within which data are closer to each other ( in the range of scales considered for the vietoris - rips complex ) than to any point in any other component .thus , a language family exhibiting more than one persistent generator of has linguistic parameters that naturally group together into different subfamilies .it is not known , at this stage of the analysis , whether in such cases the subsets of languages that belong to the same connected component correspond to historical linguistic subfamilies or whether they cut across them .we will give some evidence , in the case of the indo - european family , in favor of matching persistent generators of the to major historical linguistic sub - families within the same family .certainly , in all cases , the connected components identified by different generators of the persistent can be used to define a grouping into subfamilies , whose relation to historical linguistics remains to be investigated .the presence of an -generators also means that part of the data ( corresponding to one of the components of the vietoris - rips complex ) clusters around a one - dimensional closed curve .more precisely , one can identify the first homology group of a space with the group of homotopy classes $ ] of ( basepoint preserving ) maps to the circle .this means that , if there is a non - trivial generator of the persistent , then there is a choice of a circle coordinate that best describes that part of the data .the freedom to change the map up to homotopy makes it possible to look for a smoothing of the circle coordinate .it is not obvious how to interpret these circles from the linguistic point of view .the fact that a generator of the represents a -dimensional hole means that , given the data that cluster along this circle , no further data point determine a -dimensional surface interpolating across the circle . as the topological structures we are investigating stem from a vietoris - rips complex that measures proximity between syntactic parameters of different languages , we can propose a heuristic interpretation for the presence of such circles as the case of a ( sub)family of languages where each language in the subfamily has other neighboring " languages with sufficiently similar syntactic parameters , so that one can go around the whole subfamily via changes of syntactic parameters described by a single circle coordinate , while parameter changes that move along two - dimensional manifolds and interpolate between data points on the circle can not be performed while remain within the same ( sub)family. two different possible models of how a non - trivial generator of the persistent first homology can arise point to different possible explanations in historical - linguistic terms . 
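the circle interpretation of a persistent generator of the first homology can be illustrated on synthetic data: points sampled near a circle produce exactly one long interval in the degree-one barcode, with everything else short-lived noise. the sketch below assumes the open-source ripser package (not the perseus software used in this work); the sampling and the noise level are arbitrary choices.

import numpy as np
from ripser import ripser          # assumption: the `ripser` package is installed

rng = np.random.default_rng(1)
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
circle = np.column_stack([np.cos(angles), np.sin(angles)])
circle += rng.normal(0.0, 0.05, circle.shape)

dgms = ripser(circle, maxdim=1)["dgms"]
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]      # degree-one (birth, death) intervals
print("longest h1 lifetime:", lifetimes.max())
if len(lifetimes) > 1:
    print("second longest     :", np.sort(lifetimes)[-2])   # much shorter: noise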
,title="fig : " ] as shown in figure [ biffig ] , the first model is a typical hopf bifurcation picture , where a circle arises from a point ( with the horizontal direction as time coordinate ) .this model would be compatible with a phylogenetic network of the corresponding language family that is a tree , where one of the nodes generates a set of daughter nodes whose points in the parameter space contain a nontrivial loop .the second possibility is of a line closing up into a circle .this may arise in the case of a language family whose phylogenetic network is not a tree , but it contains itself a loop that closes off two previously distant branches .there are well known cases where the phylogenetic network of a language family is not necessarily best described by a tree .the most famous case is probably the anglo - norman bridge in the phylogenetic tree " of the indo - european languages , see figure [ ietreefig ] .however , it is important to point out that the presence of a loop in the phylogenetic network of a language family does not imply that this loop will leave a trace in the syntactic parameters , in the form of a non - trivial first persistent homology .conversely , the presence of persistent first homology , by itself , is no guarantee that loops may be present in the phylogenetic network , for example due to possibilities like the hopf bifurcation picture described above .thus , one can not infer directly from the presence or absence of a persistent conclusions about the topology of the historical phylogenetic network .the only conclusion of this sort that can be drawn is that a persistent suggests a phylogenetic loop as one of the possible causes .conversely , one can read the absence of non - trivial persistent first homology as a suggestion ( but not an implication ) of the fact that the phylogenetic network may be a tree and that phenomena like the anglo - norman bridge did not occur in the historical development of that family .we will discuss this point more in detail in the case of the indo - european language family .this is a very good example , which shows how the possible correlation between loops in the space of syntactic parameters and in the phylogenetic network is by no means an implication .indeed , the indo - european language family contains both a known loop in the phylogenetic network , due to the anglo - norman bridge ( see figure [ ietreefig ] ) , and a non - trivial generator of the persistent . however , we will show using our topological data analysis method that these two loops are in fact unrelated , contrary to what intuition might have suggested . ] , , and .[ ie1fig],title="fig : " ] , , and .[ ie1fig],title="fig : " ] , , and .[ ie1fig],title="fig : " ]the sswl database was first imported into a pivot table in excel .the on - off parameters are represented in binary , in order to compute the distances between languages .however , the parameter values are not known for many of the languages in the database : over one hundred of the languages have , at present , less than half of their parameters known .thus , we decided to replace empty language parameters with a value of 0.5 .all together , we ended up with 252 languages , each with 115 different parameters .we then proceeded to our analysis based on the results from perseus homology software .this is achieved through a series of matlab scripts . 
the script named data_select_full.m allows for selection of subsets of the raw data .it performs principal component analysis on the raw parameter data and saves it to a text file for use in perseus .the format of the data is that of a vietoris - rips complex .this script has two important parameters : a completeness threshold , and a percent variance to preserve .the completeness threshold removes the languages below a threshold of known parameters .the percent variance allows us to reduce the dimensionality of our data .the next script , named barcode.m , was used to create barcode graphs for data visualization .perseus outputs the birth and death times for each persistent homology generator , which are then used to construct the barcode graph of the persistence intervals to visualize the structure and determine the generators .the radii in our complexes are incremented by of the mean distance between languages .data analysis was initially set up as a three step process : select the data with the script data_select_full.m , analyze it with perseus , and use barcode.m to visualize the results . the final script , named run_all ,streamlines this process under a single input command .finally , our analysis includes examining how many data points belong to clusters of points at any given radius .clusters are constructed by creating -spheres of uniform radius centered at each data point . if the -spheres of two data points overlap , then those data points are in the same cluster .a non - trivial cluster is a cluster with at least two data points contained within .the scripts group_select.m and graph_clusters.m make it possible to visualize the number of clusters and non - trivial clusters as radius increases . at indices and .[ ie3fig],title="fig : " ] at indices and .[ ie3fig],title="fig : " ]a preliminary analysis performed over the entire set of languages in the sswl database shows that the non - trivial homology generators of and behave erratically. moreover , there are too many generators of and to draw any meaningful conclusion about the structure of the underlying topological space .one can see the typical behavior represented in figure [ allling1 ] . in the first graph of figure [ allling1 ] we included the languages with more than of the parameters known , while in the second we removed all languages with more than of the parameters unaccounted for . herepercentage of parameters is with respect to the largest number of syntactic parameters considered in the sswl database .one can compare this with the case of a randomly generated subset of languages , presented in the third graph of figure [ allling1 ] .notice that , while in the cases represented in the first two graphs of figure [ allling1 ] there is noise " in the and region , that prevents a clear identification of persistent generators , the homology of random subsets of the data , as displayed in the third graph of figure [ allling1 ] , is relatively sparse , containing only topologically trivial information .this observation lead us to the hypothesis that the behavior seen in figure [ allling1 ] stems from a superposition of some more precise , but non - uniform , topological information associated to the various different linguistic families . 
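the steps implemented by these scripts (completeness threshold, replacement of unknown parameters by 0.5, principal component analysis to a prescribed fraction of the variance, and the distance matrix fed to the rips construction) can be condensed into a few lines. the sketch below is a python re-implementation under assumptions about the data layout, not the matlab code used in this work; languages are rows, parameters are columns, and unknown entries are nan.

import numpy as np

def preprocess(params, completeness=0.5, variance_kept=0.7):
    # params: languages x parameters, entries 1, 0, or nan (unknown)
    known = ~np.isnan(params)
    keep = known.mean(axis=1) >= completeness     # completeness threshold
    x = params[keep].copy()
    x[np.isnan(x)] = 0.5                          # unknown parameters -> 0.5
    x -= x.mean(axis=0)                           # center before pca
    _, s, vt = np.linalg.svd(x, full_matrices=False)
    var = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var), variance_kept) + 1)
    scores = x @ vt[:k].T                         # first k principal components
    dists = np.linalg.norm(scores[:, None, :] - scores[None, :, :], axis=-1)
    return scores, dists

# tiny synthetic example: 6 "languages", 10 binary parameters, some unknown
rng = np.random.default_rng(2)
demo = rng.integers(0, 2, (6, 10)).astype(float)
demo[rng.random(demo.shape) < 0.2] = np.nan
scores, dists = preprocess(demo)
print("languages kept:", scores.shape[0], " pca dimensions:", scores.shape[1])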
in order to test this hypothesis , we decided to examine specific language families as an additional method of data filtering . we chose the four largest families represented in the original database : indo - european with 79 languages , niger - congo with 49 , austronesian with 18 , and afro - asiatic with 14 . although some of the languages in the database included latitude and longitude coordinates , these were ignored when determining language family .

[ figure [ nc1fig ] : barcode diagrams for the niger - congo family at three different index values . ]

a first observation , when comparing syntactic parameters of different linguistic families , is that they exhibit different cluster structures of the syntactic parameters . this is illustrated in figure [ clusterfig ] , in the case of the four largest families in the sswl database . based on this cluster analysis , we then focused on the cases of the indo - european and the niger - congo language families , and we searched for nontrivial generators of the first homology in appropriate ranges of cluster filtering . the cluster analysis of figure [ clusterfig ] suggests that cluster filter values between and should provide interesting information . we computed additional barcode diagrams corresponding to cluster filtering values and .

[ figure [ nc2fig ] : barcode diagrams for the niger - congo family at selected cluster filtering values and index choices . ]

in the graphs presented in the following subsection , the barcode graphs are labeled by a set of three indices . the first two indices refer to the principal component analysis and the third index to the runs of the perseus program computing births and deaths of homology generators of the vietoris - rips complex . more precisely , the first index ( 7 or 10 ) refers to the percent variance divided by 10 , while the second index ( 0 , 3 or 5 ) refers to the percent complete divided by 10 . they are discussed above in [ datasec ] . the third parameter is the number of steps in perseus . if present , the additional parameter given by the number after `` cluster '' is one hundred times the radius used for cluster filtering . we analyzed the persistent homology of the syntactic parameters for the indo - european language family . as shown in figure [ ie1fig ] , at values and one sees persistent generators of and intervals in the varying -sphere radius for which nontrivial generators exist .
at values , as shown in figure [ ie1fig ] , one sees one persistent generator of and two persistent generators of .the existence of a persistent generator for the suggests that there should be a circle coordinate " description for at part of the syntactic parameters of the indo - european languages .the fact that there are two persistent generators of in the same diagram indicates two connected components , only one of which is a circle : this component determines which subset of syntactic parameters admits a parameterization by values of a circle coordinate .based on the cluster analysis described in [ clustersec ] above , we then focused on specific regions of cluster filtering values that were more likely to exhibit interesting topology .for example , for cluster filtering value , the results show , respectively , one generator of and one generator of , for indices , and one generator of and a possibility of two persistent generators of the , for indices , see figure [ ie3fig ] .the appearance of persistent generators of the as specific cluster filtering values identifies other groups of syntactic parameters that may admit circle variable parameterizations .what these topological structures in the space of syntactic parameters , and these subsets admitting circle variables description , mean in terms of linguistic theory remains to be fully understood .we analyze some historical - linguistic hypotheses in the following subsection .it is often argued that the phylogenetic tree " of the family of indo - european languages should not really be a tree , because of the historical influence of french on middle english , see figure [ ie3fig ] , which can be viewed as creating a bridge ( sometimes referred to as the anglo - norman bridge ) connecting the latin and the germanic subtrees and introducing non - trivial topology in the indo - european phylogenetic network .it is well known that the influx of french was extensive at the lexical level , but it is not clear whether one should expect to see a trace of this historical phenomenon when analyzing languages at the level of syntactic structures .it is , however , a natural question to ask whether the non - trivial loop one sees in the persistent topology of syntactic parameters of the indo - european family may perhaps be a syntactic remnant of the anglo - norman bridge .however , a further analysis of the sswl dataset of syntactic parameters appears to exclude this possibility .indeed , we computed the persistent homology using only the indo - european languages in the latin and germanic groups .if the persistent generator of were due to the anglo - norman bridge one would still find this non - trivial generator when using only this group of languages , while what we find is that the group of latin and germanic languages alone carry no non - trivial persistent first homology , see figure [ noanfig ] . and[ noanfig],title="fig : " ] and .[ noanfig],title="fig : " ] in order to understand the nature of the two persistent generators of , we separated out the indo - iranian subfamily of the indo - european family , to test whether the two persistent connected components would be related to the natural subdivision into the two main branches of the indo - european family . 
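the same kind of comparison can be organized programmatically: restrict (or exclude) a subfamily, recompute the barcode, and count the long-lived components. the sketch below reuses the h0_barcode helper from the earlier sketch and uses made-up subfamily labels and synthetic scores purely for illustration; it is not the workflow actually used to produce the figures.

import numpy as np

# assumes h0_barcode from the earlier sketch; `families` is a hypothetical list
# of subfamily labels aligned with the rows of the pca score matrix.
def persistent_components(scores, families, excluded, threshold):
    mask = np.array([fam not in excluded for fam in families])
    bars = h0_barcode(scores[mask])
    return sum(1 for birth, death in bars if death - birth > threshold)

rng = np.random.default_rng(3)
scores = np.vstack([rng.normal(0.0, 0.2, (10, 3)),   # stand-in for one branch
                    rng.normal(4.0, 0.2, (10, 3))])  # stand-in for another branch
families = ["a"] * 10 + ["b"] * 10
print("components, all branches:", persistent_components(scores, families, set(), 1.0))
print("components, without 'b' :", persistent_components(scores, families, {"b"}, 1.0))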
even though the indo - iranian branch is the largest subfamily of indo - european languages ,it is much less extensively mapped in sswl than the rest of the indo - european family , with only 9 languages recorded in the database .thus , a topological data analysis performed directly on the indo - iranian subfamily is less reliable , but one can gain sufficient information by analyzing the remaining set of indo - european languages , after removing the indo - iranian subfamily .the result is illustrated in figure [ group3afig ] .we see that indeed the number of persistent connected component is now just one , which supports the proposal of relating persistent generators of to major subdivisions into historical linguistic subfamilies .moreover , the persistent generator of the is still present , which shows that the non - trivial first homology is not located in the indo - iranian subfamily . .[ group3afig ] ] in order to understand more precisely where the non - trivial persistent first homology is located in the indo - european family , we performed the analysis again , after removing the indo - iranian languages and also removing the hellenic branch , including both ancient and modern greek .the resulting persistent topology is illustrated in figure [ group3bfig ] . by comparing figures[ group3afig ] and [ group3bfig ] one sees that the position of the hellenic branch of the indo - european family has a direct role in determining the persistent topology .when this subfamily is removed , the number of persistent connected components ( generators of ) jumps from one to three , while the non - trivial single generator of disappears .although this observation by itself does not provide an explanation of the persistent topology in terms of historical linguistics of the indo - european family , it points to the fact that , if historical linguistic phenomena are involved in determining the topology , they appear to be related to the role that ancient greek and the hellenic branch played in the historical development of the indo - european languages . .[ group3bfig ] ] when performing a more detailed cluster analysis on the indo - european family , one finds sub - structures in the persistent topology . for instance , as shown in figure [ ie3fig ] , one sees a possible second generator of the persistent for cluster filtering value 165 , with indices .these substructures may also be possible traces of other historical linguistic phenomena .we performed the same type of analysis on the syntactic parameters of the niger - congo language family .the interesting result we observed is that the behavior of persistent homology seems to be quite different for different language families .figure [ nc1fig ] shows the barcode diagrams for persistent homology at index values , , and , which can be compared with the diagrams of figure [ ie1fig ] for the indo - european family . in the niger - congo family , we now see persistent homology , respectively , of ranks , , and ( compare with ranks , , in the indo - european case ) . a lower rank in the fewer connected components in the vietoris - rips complex , which seems to indicate that the syntactic parameters are more concentrated and homogeneously distributed across the linguistic family , and less spread out " into different sub - clusters .following the cluster analysis of [ clustersec ] , we also considered the persistent homology for the niger - congo family at specific cluster filtering values . 
while for cluster filtering value and indices one sees one persistent generator of and a possibility of a persistent generator in the , cluster filtering value with indices , as well as cluster filtering value with indices and show one persistent generator in the .this persistent homology viewpoint seems to suggest that syntactic parameters within the niger - congo language family may be spread out more evenly across the family than they are in the indo - european case , with a single persistent connected component , whereas the indo - european ones have two different persistent connected component , one of which has circle topology .we showed that methods from topological data analysis , in particular persistent homology , can be used to analyze how syntactic parameters are distributed over different language families . in particularwe compared the cases of indo - european and niger - congo languages . 1 . to what extent do persistent generators of the ( that is , the persistent connected components ) of the data space of syntactic parameter correspond to different ( sub)families of languages in the historical linguistic sense ?for example , are the three generators visible at scale in the congo - niger family a remnant of the historical subdivision into the mande , atlantic - congo , and kordofanian subfamilies ? 2 .what is the meaning , in historical linguistic terms , of the circle components ( persistent generators of ) in the data space of syntactic parameters of language families ?is there a historical - linguistic interpretation for the second generator one sees at cluster filtering value 165 and scale in the indo - european family ? or for the generator one sees with the same cluster filtering , at scale in the niger - congo case ? 3 .to what extent does persistent topology describe different distribution of syntactic parameters across languages for different linguistic families ?t. shopen , _ language typology and syntactic description : volume 1 , clause structure _ ; _ volume 2 , complex constructions _ ; _ volume 3 : grammatical categories and lexicon _ , cambridge university press , 2007 .
we study the persistent homology of the data set of syntactic parameters of the world languages. we show that, while homology generators behave erratically over the whole data set, non-trivial persistent homology appears when one restricts to specific language families. different families exhibit different persistent homology. we focus on the cases of the indo-european and the niger-congo families, for which we compare persistent homology over different cluster filtering values. we investigate the possible significance, in historical linguistic terms, of the presence of persistent generators of the first homology. in particular, we show that the persistent first homology generator we find in the indo-european family is not due (as one might guess) to the anglo-norman bridge in the indo-european phylogenetic network, but is related to the position of ancient greek and the hellenic branch within the network.
shannon s rate - distortion function for a stationary zero - mean gaussian source with memory and under the mse fidelity criterion can be written in a parametric form ( the reverse water - filling solution ) [ eq : shannonsrdf ] where denotes the _ power spectral density _ ( psd ) of and the distortion psd is given by the water level is chosen such that the distortion constraint is satisfied .it is well known that in order to achieve shannon s rdf in the quadratic gaussian case , the distortion must be independent of the output .this clearly implies that the distortion must be _ correlated _ to the source .interestingly , many well known source coding schemes actually lead , by construction , to source - uncorrelated distortions .in particular , this is the case when the source coder satisfies the following two conditions : a ) the linear processing stages ( if any ) achieve _ perfect reconstruction _ ( pr ) in the absence of quantization ; b ) the quantization error is uncorrelated to the source .the first condition is typically satisfied by pr filterbanks , transform coders and feedback quantizers .the second condition is met when subtractive ( and often when non - subtractive ) dither quantizers are employed .thus , any pr scheme using , for example , subtractively dithered quantization , leads to source - uncorrelated distortions .an important fundamental question , which was raised by the authors in a recent paper , is : `` what is the impact on shannon s rate - distortion function , when we further impose the constraint that the end - to - end distortion must be uncorrelated to the input ? '' in , we formalized the notion of , which is the quadratic rate - distortion function subject to the constraint that the distortion is uncorrelated to the input . for a gaussian source , we defined as =\boldsymbol{0 } , \\\frac{1}{n } tr(\boldsymbol{k}_{y - x } ) \leq d , \frac{1}{n}|\boldsymbol{k}_{y - x}|^{\frac{1}n } > 0 } } \tfrac{1}{n } i(x ; y),\ ] ] where the notation denotes the covariance matrix of and refers to the determinant . for zero mean gaussian stationary sources, we showed in that the above minimum ( in the limit when ) satisfies the following equations : [ eq : rperp_equations ] is the psd of the optimal distortion , which needs to be gaussian . notice that here the parameter ( akin to in ) does not represent a `` water level '' . indeed , unless is white , the psd of the optimal distortion for is not white , _ for all . and shannon s are discussed in . ] in the present paper we prove achievability of by constructing coding schemes based on dithered lattice quantization , which , in the limit as the quantizer dimension approaches infinity , are able to achieve for any positive .we also show that can be realized causally , i.e. , that for all gaussian sources and for all positive distortions one can build forward test channels that realize without using non - causal filters .this is contrary to the case of shannon s rate distortion function , where at least one of the filters of the forward test channel that realizes needs to be non - causal . 
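As a concrete counterpart to the reverse water-filling solution recalled at the beginning of this section, the sketch below evaluates Shannon's rate-distortion function numerically for a sampled power spectral density, bisecting on the water level until the distortion constraint is met. The AR(1)-type source spectrum is an arbitrary choice made only for the example; it is not a quantity taken from the paper.

```python
import numpy as np

def shannon_rdf(S, D):
    """Reverse water-filling: find the water level theta such that the
    average of min(theta, S) equals D, then integrate the rate."""
    lo, hi = 0.0, S.max()
    for _ in range(200):                        # bisection on the water level
        theta = 0.5 * (lo + hi)
        if np.mean(np.minimum(theta, S)) < D:
            lo = theta                          # distortion too small: raise level
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    D_psd = np.minimum(theta, S)                # distortion PSD
    R = np.mean(np.maximum(0.0, 0.5 * np.log2(S / D_psd)))
    return R, theta

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
a = 0.8                                         # assumed AR(1) coefficient
S = 1.0 / np.abs(1.0 - a * np.exp(1j * w)) ** 2 # example source PSD
for D in (0.05, 0.2, 0.5):
    R, theta = shannon_rdf(S, D)
    print(f"D = {D:.2f}: R(D) ~ {R:.3f} bit/sample, water level ~ {theta:.4f}")
```

For the constrained rate-distortion function studied in this paper the same numerical strategy (parametrize by the scalar parameter and bisect until the distortion constraint is met) still applies, but with the distortion spectrum given by the paper's parametric fixed-point equations rather than by min(theta, S), since the optimal distortion spectrum is in general not flat.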
to further illustrate the causality of , we present a causal transform coding architecture that realizes it .we also show that the use of feedback noise - shaping allows one to achieve with memoryless entropy coding .this parallels a recent result by zamir , kochman and erez for .we conclude the paper by showing that , in all the discussed architectures , the rate - loss ( with respect to ) when using a finite - dimensional quantizer can be upper bounded by the space - filling loss of the quantizer .thus , for any gaussian source with memory , by using noise - shaping and scalar dithered quantization , the _ scalar _ entropy ( conditioned to the dither ) of the quantized output exceeds by at most 0.254 bit / dimension .a randomized lattice quantizer is a lattice quantizer with subtractive dither , followed by entropy encoding .the dither is uniformly distributed over a voronoi cell of the lattice quantizer.due to the dither , the quantization error is truly independent of the input .furthermore , it was shown in that the coding rate of the quantizer , i.e. can be written as the mutual information between the input and the output of an additive noise channel , where denotes the channel s additive noise and is distributed as .more precisely , and the quadratic distortion per dimension is given by .it has furthermore been shown that when is white there exists a sequence of lattice quantizers where the quantization error ( and therefore also the dither ) tends to be approximately gaussian distributed ( in the divergence sense ) for large .specifically , let have a probability distribution ( pdf ) , and let be gaussian distributed with the same mean and covariance as .then with a convergence rate of if the sequence is chosen appropriately . in the next sectionwe will be interested in the case where the dither is not necessarily white . by shaping the voronoi cells of a lattice quantizerwhose dither is white , we also shape , obtaining a colored dither . this situation was considered in detail in from where we obtain the following lemma ( which was proven in but not put into a lemma ) .[ lem : shapedlattice ] let be white , i.e. is uniformly distributed over the voronoi cell of the lattice quantizer and .furthermore , let , where denotes the shaped voronoi cell and is some invertible linear transformation . denote the covariance of by .similarly , let having covariance matrix and let where .then there exists a sequence of shaped lattice quantizers such that the divergence is invariant to invertible transformations since .thus , for any .the simplest forward channel that realizes is shown in fig . [fig : forwartdtc ] . according to ,all that is needed for the mutual information per dimension between and to equal is that be gaussian with psd equal to the right hand side ( rhs ) of . in view of the asymptotic properties of randomized lattice quantizers discussed in section [ sec : background ] ,the achievability of can be shown by replacing the test channel of fig.[fig : forwartdtc ] by an adequately _ shaped _-dimensional randomized lattice quantizer and then letting .in order to establish this result , the following lemma is needed .[ lem : excessrate ] _ let , , and be mutually independent random vectors .let and be arbitrarily distributed , and let and be gaussian having the same mean and covariance as and , respectively. then _ where stems from the well known result , see , e.g. 
, .we can now prove the achievability of .+ [ thm : achievable ] _ for a source being an infinite length gaussian random vector with zero mean , is achievable . _let be the sub - vector containing the first elements of .for a fixed distortion , the average mutual information per dimension is minimized when and are jointly gaussian and see .let the -dimensional shaped randomized lattice quantizer be such that the dither is distributed as , with .it follows that the coding rate of the quantizer is given by .the rate loss due to using to quantize is given by \nonumber\\ & \overset{(a)}{\leq } \tfrac{1}{n}d(f_{{{e'}^{(n)}}}(e)\|f_{{{e'_g}^{(n)}}}(e)),\label{eq : middle}\end{aligned}\ ] ] where is the pdf of the gaussian random vector , independent of and , and having the same first and second order statistics as .in , inequality follows directly from lemma [ lem : excessrate ] , since the use of subtractive dither yields the error independent of . to complete the proof , we invoke lemma [ lem : shapedlattice ] , which guarantees that the rhs of vanishes as . 1 . for zero mean stationary gaussian random sources , is achieved by taking in theorem [ thm : achievable ] to be the complete input process . for this case , as shown in , the fourier transform of the autocorrelation function of tends to the rhs of .2 . for vector processes ,the achievability of follows by building in theorem [ thm : achievable ] from the concatenation of infinitely many consecutive vectors .3 . note that if one has an infinite number of parallel scalar random processes , can be achieved _ causally _ by forming in theorem [ thm : achievable ] from the -th sample of each of the processes and using entropy coding after .the fact that can be realized causally is further illustrated in the following section .we will next show that for a gaussian random vector with positive definite covariance matrix , can be realized by _ causal _ transform coding . a typical transform coding architectureis shown in fig .[ fig : causal_tcnf ] . in this figure, is an matrix , and is a gaussian vector , independent of , with covariance matrix .the system clearly satisfies the perfect reconstruction condition .the reconstruction error is the gaussian random vector , and the mse is , where . by restricting to be lower triangular , the transform coder in fig .[ fig : causal_tcnf ] becomes causal , in the sense that , the -th elements of and can be determined using just the first elements of and the -th element of . to have , it is necessary and sufficient that where the covariance matrix of the optimal distortion is since is lower triangular , is the cholesky decomposition of , which always exists ., there exists a unique having only positive elements on its main diagonal that satisfies , see . ]thus , can be realized by causal transform coding . in practice ,transform coders are implemented by replacing the ( vector ) awgn channel by a quantizer ( or several quantizers ) followed by entropy coding .the latter process is simplified if the quantized outputs are independent . when using quantizers with subtractive dither , this can be shown to be equivalent to having in the transform coder when using the awgn channel .notice that , since in is invertible , the mutual information per dimension is also equal to . by the chain rule of mutual informationwe have with equality iff the elements of are mutually independent . 
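As a brief aside before completing the argument, the role played by the Cholesky factorization in the causal transform coder above can be checked numerically in a few lines: a lower-triangular factor of the target distortion covariance turns white noise into distortion with exactly that covariance, and the triangular structure is what makes the operation causal (sample k of the shaped noise depends only on the first k white samples). The target covariance below is an arbitrary positive-definite matrix chosen for the demonstration, not the optimal distortion covariance of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Assumed target distortion covariance: any symmetric positive-definite
# matrix serves for the demonstration.
B = rng.normal(size=(n, n))
K_target = B @ B.T + n * np.eye(n)

L = np.linalg.cholesky(K_target)         # lower triangular -> causal shaping
W = rng.normal(size=(n, 500_000))        # white, unit-variance noise
Z = L @ W                                # row k of Z uses only rows 1..k of W

print("target covariance:\n", np.round(K_target, 2))
print("empirical covariance of shaped noise:\n", np.round(Z @ Z.T / W.shape[1], 2))
```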
if is gaussian , this is equivalent to being diagonal .clearly , this can not be obtained with the architecture shown in fig .[ fig : causal_tcnf ] using causal matrices ( while at the same time satisfying ) .however , it can be achieved by using error feedback , as we show next .consider the scheme shown in fig .[ fig : causal_tc ] , where is lower triangular and is strictly lower triangular .again , a sufficient and necessary condition to have is that , see , i.e. , ^{t } = { \boldsymbol{k}}_{z^{\star}}\nonumber\\ \iff ( { \boldsymbol{i}}-{\boldsymbol{f}})({\boldsymbol{i}}-{\boldsymbol{f}})^{t } = { \boldsymbol{a}}{\boldsymbol{k}}_{z^{\star } } { \boldsymbol{a}}^{t}/{\sigma^{2}}_{w}. \label{eq : i_f_i_f}\end{aligned}\ ] ] on the other hand , equality in is achieved only if for some diagonal matrix with positive elements .if we substitute the cholesky factorization into , we obtain , and thus substituting the above into we obtain ({\boldsymbol{i}}-{\boldsymbol{f}})^{t}\label{eq : the_one}\end{aligned}\ ] ] thus , there exist and , there exists a _ unique _matrix having zeros on its main diagonal that satisfies , see . ] and satisfying and .substitution of into yields , and . from and the fact that it follows that , and therefore for gaussian vector sources derived in . ] thus achieving equality in .we have seen that the use of error feedback allows one to make the average scalar mutual information between the input and output of each awgn channel in the transform domain equal to . in the following sectionwe show how this result can be extended to stationary gaussian processes .in this section we show that , for any colored stationary gaussian stationary source and for any positive distortion , can be realized by noise shaping , and that is achievable using _ memory - less _ entropy coding . the fact that can be realized by the additive colored gaussian noise test channel of fig .[ fig : forwartdtc ] suggests that could also be achieved by an _ additive white gaussian noise _ ( awgn ) channel embedded in a noise - shaping feedback loop , see fig .[ fig : block_diag_nsdpcm ] . in this figure, is a gaussian stationary process with psd .the filters and are lti .the awgn channel is situated between and , where white gaussian noise , independent of , is added .the reconstructed signal is obtained by passing through the filter , yielding the reconstruction error .the following theorem states that , for this scheme , the _ scalar _ mutual information across the awgn channel can actually equal .[ thm : realizable_fq ] _ consider the scheme in fig .[ fig : block_diag_nsdpcm ] .let , be independent stationary gaussian random processes .suppose that the differential entropy rate of is bounded , and that is white .then , for every , there exist causal and stable filters , and such that _ consider all possible choices of the filters and such that the obtained sequence is white , i.e. , such that } ] , and that a bounded differential entropy rate of implies that . from the paley - wiener criterion ( see also , e.g. , ) , this implies that , and can be chosen to be stable and causal .furthermore , recall that for any fixed , the corresponding value of is unique ( see ) , and thus fixed .since the variance is also fixed , it follows that each frequency response magnitude that satisfies can be associated to a unique value of .since is strictly causal and stable , the minimum value of the variance is achieved when i.e. 
, if has no zeros outside the unit circle ( equivalently , if is minimum phase ) , see , e.g. , .if we choose in a filter that satisfies , and then we take the logarithm and integrate both sides of , we obtain } d{\omega}\\ & = \frac{1 } { 2\pi}\ ! { \int\limits_{-\pi}^{\pi } } { \!\log\ ! \left [ \frac { { \hksqrt{s_{x}{({\textrm{e}^{j{\omega}}})}\!+\ ! \alpha } } + { \hksqrt{s_{x}{({\textrm{e}^{j{\omega } } } ) } } } } { { \hksqrt{\alpha } } } \right ] } d{\omega}= r^{\perp}(d).\end{aligned}\ ] ] where has been used .we then have that where follows from the gaussianity of and , and from the fact that is independent of ( since is strictly causal ) .this completes the proof .alternatively , in , equality is achieved iff the right hand side of equals , i.e. , if has the optimal psd .equality holds because , which follows from .the fact that is stationary has been used in , wherein equality is achieved iff is minimum phase , i.e. , if holds .equality in holds if an only if the elements of are independent , which , from the gaussianity of , is equivalent to .finally , stems from the fact that is independent of .notice that the key to the proof of theorem [ thm : realizable_fq ] relies on knowing a priori the psd of the end to end distortion required to realize .indeed , one could also use this fact to realize by embedding the awgn in a dpcm feedback loop , and then following a reasoning similar to that in . in order to achieve by using a quantizer instead of an awgn channel, one would require the quantization errors to be gaussian .this can not be achieved with scalar quantizers .however , as we have seen in [ sec : background ] , dithered lattice quantizers are able to yield quantization errors approximately gaussian as the lattice dimension tends to infinity. the sequential ( causal ) nature of the feedback architecture does not immediately allow for the possibility of using vector quantizers .however , if several sources are to be processed simultaneously , we can overcome this difficulty by using an idea suggested in where the sources are processed in parallel by separate feedback quantizers .the feedback quantizers are operating independently of each other except that their scalar quantizers are replaced by a single vector quantizer .if the number of parallel sources is large , then the vector quantizer guarantees that the marginal distributions of the individual components of the quantized vectors becomes approximately gaussian distributed .thus , due to the dithering within the vector quantizer , each feedback quantizer observes a sequence of i.i.d .gaussian quantization noises .furthermore , the effective coding rate ( per source ) is that of a high dimensional entropy constrained dithered quantizer ( per dimension ) .the fact that the scalar mutual information between and equals the mutual information rate between and in each of the parallel coders implies that can be achieved by using a memoryless entropy coder .the results presented in sections [ sec : realiz_tc ] and [ sec : noise_shap ] suggest that if a test channel embedding an awgn channel realizes , then a source coder obtained by replacing the awgn channel by a dithered , finite dimensional lattice quantizer , would exhibit a rate close to . the next theorem, whose proof follows the line of the results given in , provides an upper bound on the rate - loss incurred in this case . 
_consider a source coder with a finite dimensional subtractively dithered lattice quantizer .if when replacing the quantizer by an awgn channel the scalar mutual information across the channel equals , then the scalar entropy of the quantized output exceeds by at most bit / dimension ._ let be the noise of the awgn channel , and and denote the channel input and output signals . from the conditions of the theorem, we have that if we now replace the awgn by a dithered quantizer with subtractive dither , such that the quantization noise is obtained with the same first and second order statistics as , then the end to end mse remains the same .the corresponding signals in the quantized case , namely and , will also have the same second order statistics as their gaussian counterparts and .thus , by using lemma [ lem : excessrate ] we obtain finally , from ( * ? ? ?* theorem 1 ) , we have that . substitution of into this last equation yields the result .we have proved the achievability of by using lattice quantization with subtractive dither .we have shown that can be realized causally , and that the use of feedback allows one to achieve by using memoryless entropy coding .we also showed that the scalar entropy of the quantized output when using optimal finite - dimensional dithered lattice quantization exceeds by at most bits / dimension .m. s. derpich , j. stergaard , and g. c. goodwin , `` the quadratic gaussian rate - distortion function for source uncorrelated distortions , '' in _ proc . of the data compression conference ,dcc _ , 2008 , to appear ( available from http://arxiv.org ) .
we prove achievability of the recently characterized quadratic gaussian rate-distortion function (rdf) subject to the constraint that the distortion is uncorrelated to the source. this result is based on shaped dithered lattice quantization in the limit as the lattice dimension tends to infinity, and holds for all positive distortions. it turns out that this uncorrelated-distortion rdf can be realized causally. this feature, which stands in contrast to shannon's rdf, is illustrated by causal transform coding. moreover, we prove that by using feedback noise shaping the uncorrelated-distortion rdf can be achieved causally and with memoryless entropy coding. whilst achievability relies upon infinite-dimensional quantizers, we prove that the rate loss incurred in the finite-dimensional case can be upper-bounded by the space-filling loss of the quantizer and, thus, is at most 0.254 bit/dimension.
one of the key questions of computational epidemiology is how best to distribute limited resources of treatment and vaccination so that they will be most effective in suppressing or reducing outbreaks of disease .this problem is heightened by the entangled networks of interactions via which diseases can spread : in a large complex network , contact with a high - degree hub can see a virus spread rapidly throughout the population even if the probability of transmission from an individual contact is low .early works on network immunization drew attention to the differences between random immunization and targeted immunization strategies .a simple random immunization strategy can consist in fixing a fraction or a density of immunized nodes and averaging the outcome of the epidemic process over all possible realizations of the immunization set . on the contrary , targeted immunization strategies correlate the choice of immunized nodes with some topological feature , such as the degree or other centrality measures .this can be experimentally shown to have some positive effect in reducing the spread of diseases .most topologically - based algorithms for immunization follow an incremental procedure , in which the set of immunized nodes is initially empty then it is progressively filled adding one by one the nodes that are most relevant with respect to a particular topological metric . despite the computational cost , recalculating the topological metric after each immunization step ( i.e. after removing the immunized node from the graph ) usually provides much better results than computing it only once on the original graph .further improvements were obtained by means of more complex immunization strategies , based on graph partitioning and on the optimization of the susceptible size . beside the heterogeneity of contacts , also clustering ,community structure and modularity have a major impact on disease dynamics , therefore the same immunization strategy can produce contrasting results on networks with different topological features .this is a consequence of the fact that topological heuristic methods neglect important features of the spreading rule and most common metrics used to measure their effectiveness , such as the largest connected component or the largest non - immune cluster size , are proxies that may not reflect the true susceptibility to an epidemic .these techniques also neglect the cost of vaccination , which may vary widely depending on the chosen target . to overcome these limitations , several authors tried to quantify more explicitly the effects of immunization strategies on the outbreak dynamics and network immunizationwas mathematically formulated as a proper optimization problem , that can be proven to be np - hard in a plethora of different variants .standard optimization techniques such as monte - carlo ( mc ) methods or integer / linear programming are computationally very expensive and may take a prohibitive amount of time to reach reasonably good results even on relatively small networks . 
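For reference, the simplest of the targeted strategies mentioned above, immunizing the node of currently highest degree and recomputing degrees after every removal, can be written in a few lines. The sketch below uses networkx and an Erdős–Rényi graph purely as an illustrative substrate, and scores the result with the usual topological proxy (the size of the largest non-immune connected component); the graph size and the immunization budgets are arbitrary choices.

```python
import networkx as nx

def targeted_degree_immunization(G, n_immunized):
    """Iteratively immunize (remove) the highest-degree node,
    recomputing degrees after each removal."""
    H = G.copy()
    immunized = []
    for _ in range(n_immunized):
        node = max(H.degree, key=lambda kv: kv[1])[0]
        immunized.append(node)
        H.remove_node(node)
    return immunized, H

G = nx.erdos_renyi_graph(n=2000, p=0.003, seed=1)
for budget in (0, 50, 100, 200):
    _, H = targeted_degree_immunization(G, budget)
    lcc = max((len(c) for c in nx.connected_components(H)), default=0)
    print(f"immunized {budget:4d} nodes -> largest non-immune cluster: {lcc}")
```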
on the other hand , the _ greedy _ optimization strategies usually proposed are guaranteed to approximate the optimal result by a constant factor only in some fortunate case .recent progress in combinatorial optimization have shown that algorithms based on the message - passing principle , and developed using methods from the statistical physics of disordered systems , outperform in many cases both greedy algorithms and simulated annealing , even in complex optimization problem involving stochastic parameters and dynamical rules .in many cases in which mc algorithms get trapped in local minima of the ( free-)energy function , message - passing algorithms can find considerably better results . the remarkable performances of these algorithms are combined with considerably good computation time scaling properties . while on a wide variety of optimization problems the computational complexity of simulated annealing scales exponentially with the system size , message - passing algorithms typically require a time that scales roughly linearly with the number of messages ( i.e. the number of edges ) . in this paper we show that , under some approximations , network immunization can be written as a constrained optimization problem , in which the constraints are fixed - point equations for some local ( node or edge ) variables describing the average stationary state of the dynamics . these constraints and a suitably defined objective ( energy ) function are then used to derive a message - passing approach to the optimization problem , and to design efficient algorithms on large networks .we apply this method to find optimal immunization strategies for both susceptible - infected - recovered ( sir ) and susceptible - infected - susceptible ( sis ) models .in section [ sec2 ] we recall the main ideas and formulas of mean - field methods in epidemic models , that are usually used to estimate the average stationary properties of an epidemic outbreak .the optimal immunization problem is introduced in section [ sec3 ] , opportunely defined in terms of mean - field quantities .section [ sec4 ] is devoted to the definition of the message - passing approach and the derivation of the corresponding belief - propagation ( bp ) and max - sum ( ms ) equations . in section [ sec5 ], we use bp to understand in detail the immunization properties on the prototypical case of random regular graphs .the comparison with other optimization methods on more general graphs is discussed in section [ sec6 ] .over the years , a large number of stochastic epidemic models have been introduced , with the aim of addressing some specific features of different diseases . in the most simple model , the epidemic spreading induces in the nodes irreversible stochastic transitions from a _ susceptible _ state to an _ infected _ one .infected individuals can _ recover _ either returning to the susceptible state or becoming permanently resistant to the disease .one can then increase the complexity of the stochastic model introducing other intermediate states , or compartments , such as _ exposure _ and _ latency_. 
in the following , we discuss the most basic models of epidemic spreading , providing for each of them a set of approximated equations of mean - field type valid on very general graph structures .their solution describes the statistical properties of the stationary state corresponding to a given set of initial conditions and external parameters .in addition , such equations allow to measure the level of infection once a configuration of initially immunized nodes is chosen .the _ susceptible - infected - recovered _( sir ) model was formulated by kermack and mckendrick to describe the irreversible propagation through a population of individuals of an infectious disease , such as measles , mumps , or cholera .the sir stochastic dynamics is defined over a graph , representing the contact network of a set of individuals . at any given time step ( e.g. a day ), a node can be in one of three states : susceptible ( ) , infected ( ) , and recovered / removed ( ) .the state of node at time is represented by a variable .we assume that each node is initially infected with probability ] , then recover with probability .once recovered , individuals do not get sick anymore ( they are effectively removed from the graph ) . the probability that an infected node directly transmits the disease to before recovers is given by is thus possible to construct a completely static representation of the process that maps the final state onto the outcome of a bond percolation process .this relationship can be made mathematically clear as follows .let us consider a tree - like graph and define to be the probability that node is eventually infected when considering the graph obtained in the absence of the neighboring node .exploiting the factorization of probabilities on the sub - branches of the tree emerging from , the quantity satisfies the equation \ ] ] where denotes the set of neighbors of . since infected nodes eventually recover , in the final state nodes can only be either in the susceptible state or in the recovered one . from the knowledge of the conditional marginals , one can compute the probability that a node is eventually infected , i.e. the probability that is recovered in the final state , .\ ] ] of nodes that have been infected by the final ( infinite ) time in the sir dynamics on a random regular graph of nodes and degree , as function of the transmission probability for ( from left to right ) .the symmetric bars indicate the fluctuations around the average value computed on realizations of the stochastic process . the red full line is computed from the solution of - .[ fig - sir - ip ] ] although - are exact only on trees , they have been successfully applied to study the sir model also on general random graphs . a comparison between the solutions of these equations and the results of simulations of the sir stochastic process is shown in figure [ fig - sir - ip ] for a random regular graph ( rrg ) of nodes and degree . for simplicity we considered uniform self - infection probabilities , and uniform transmission probabilities , . in the sir stochastic process , we defined a measure of the `` outbreak '' size as the average fraction of nodes that have been infected during the epidemic spreading . 
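The tree-like fixed-point equations just described can be iterated directly on a given graph. The sketch below is written for uniform self-infection and transmission probabilities and already allows immunized nodes to be clamped; the indexing follows the standard cavity form of these equations and should be checked against the paper's expressions, since the explicit formulas are not reproduced here.

```python
import numpy as np
import networkx as nx

def sir_cavity_fixed_point(G, q, p, immunized=frozenset(), n_iter=500, tol=1e-10):
    """theta[(i, j)] = probability that i is eventually infected when
    neighbour j is removed; p_inf[i] = probability that i ends up recovered.
    Uniform self-infection q and transmission p; immunized nodes are
    forced to remain susceptible."""
    theta = {(i, j): 0.0 for i in G for j in G[i]}
    for _ in range(n_iter):
        delta = 0.0
        for (i, j), old in theta.items():
            prod = np.prod([1.0 - p * theta[(k, i)] for k in G[i] if k != j])
            new = 0.0 if i in immunized else q + (1.0 - q) * (1.0 - prod)
            theta[(i, j)] = new
            delta = max(delta, abs(new - old))
        if delta < tol:
            break
    p_inf = {}
    for i in G:
        prod = np.prod([1.0 - p * theta[(k, i)] for k in G[i]])
        p_inf[i] = 0.0 if i in immunized else q + (1.0 - q) * (1.0 - prod)
    return p_inf

G = nx.random_regular_graph(d=4, n=1000, seed=0)
p_inf = sir_cavity_fixed_point(G, q=0.01, p=0.4)
print("average outbreak size:", np.mean(list(p_inf.values())))
```

Passing a non-empty `immunized` set and comparing the resulting average of `p_inf` against the un-immunized baseline gives exactly the kind of outbreak-size measure used in the rest of the paper.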
since all infected nodes eventually recover , this metric can be also defined as \ ] ] where ] .the results are reported as a red full line in fig.[fig - sir - ip ] .the agreement between the mean - field theory represented by - and the simulations is very good for sufficiently large values of , then it deteriorates for of the order of and large values of .the reason for such a discrepancy is that eqs .are correct on tree - like structures , i.e. when the disease transmission events to one node coming from two neighbors are not correlated .the `` decorrelation '' assumption is not correct when the actual number of sources of spontaneous infection is very small .this is obvious in the case of a unique source of infection : the contagion path has the same source , hence the infection of a node due to disease transmission from her neighbors is a highly correlated process , that is not well captured by - .more precisely the solution to eqs .- gives an upper bound for the real probability to be infected . in the limit of infinitely large networks, this approach is expected to provide a correct description of the average final state of the system for any finite value of and .equations - can be easily modified to include immunization of nodes . by considering a set of binary variable , in which node is immune to the disease , we get and \right\},\ ] ] \right\}.\ ] ] given a configuration of immune nodes , that we call _ immunization set _ , the solution of - provides a measure of the corresponding epidemic outbreak .it is possible to show that for a given set of parameters and , the solution of - is unique , therefore each configuration of immune nodes corresponds to a unique solution of the equations - .this property will be crucial for the validity of the optimization method developed in this work .the _ susceptible - infected - susceptible _ ( sis ) model is the prototype of reversible models of epidemic spreading , in which after recovery a node is again susceptible of being infected .the state of node at time is now represented by a binary variable . at each time step , an infected node can transmit the disease to each of its susceptible neighbors with probability , while it recovers with rate ( becoming susceptible again ) . the stochastic process admits an absorbing state in which all nodes are susceptible and the disease has disappeared from the population .when the transmission probabilities are sufficiently large , an active stationary state also exists , that is metastable and attractive for the dynamical process .although in any finite population a fluctuation will eventually bring the system into the absorbing state , the lifetime of the metastable endemic state scales with the size of the graph in such a way that an absorbing phase transition as function of the transmission probabilities is expected to occur in the thermodynamic limit .the critical threshold usually depends on the parameters of the dynamical process as well as on the topological structure of the underlying interaction graph . a variant of the sis model with spontaneous self - infection was recently introduced in order to simplify the numerical and mathematical analysis of the model .the presence of spontaneous self - infection destroys the absorbing state , whereas the metastable state becomes the ( unique ) stationary state of the dynamics . 
on the other hand , for a given small self - infection probability , the dynamics shows a clear boundary between low infection region and a region of global spreading as function of the transmission probabilities . by scaling down self - infection, one can extrapolate information on the epidemic phase transition occurring for zero self - infection , avoiding the problems associated with the existence of an absorbing state .the sis model on a given graph with nodes is a markov chain with states , whose stationary probability distribution can not be explicitly computed for large systems .a simple mean - field approximation that turned out to provide a good qualitative and quantitive description of the stationary state of the sis model is obtained replacing the exact probability distribution by a product measure over the nodes of the graph .this factorization , also known as the _ -intertwined model _ , leads to a set of `` quenched '' mean - field equations for the evolution of single - node infection probabilities with .notice the difference between the sir and sis cases : while in the sir model indicates the ( mean - field ) probability that node is eventually infected before the final state , in the sis model it represents the ( mean - field ) probability that is infected in the stationary state of the dynamics .in the `` quenched '' mean - field approximation , the infection probability of node at time satisfies the equation \right\}\ ] ] where is the transmission probability from to , is the spontaneous self - infection probability . in the stationary state ,the mean - field variables are given by the solution of the fixed - point equations }{r_i + q_i+ ( 1-q_i ) \left[1-\prod_{j\in\partial i } ( 1- p_{ji } m_j ) \right]}\ ] ] where says whether node is immune or not . as for the sis model , it is possible to show that the mean - field quantity gives an upper bound for the real value of the infection probability of node in the stationary state .the approximation can be improved considering second - order quantities , i.e. deriving closed equations for single - point marginals and pair - correlations , but the actual form of these equations is not unique and depends on the moment closure approximation adopted . nevertheless , in most cases equations already provide a very good description of the stationary state of the sis stochastic process . a measure of the outbreak size of the epidemics is given by the average fraction of infected nodes in the active stationary state .if the stationary state is infinitely long - lived , can be operatively defined as \right\}\ ] ] where ] is the probability that node eventually recovers ( i.e. the probability that the node has been infected during the epidemic spreading ) given the configuration of immunized nodes . estimating the probability ] .since there is no information on the shape of this function , we proceed discretizing the interval ] so that . then , for any disjoint sets and , we have one can start from the empty set and add one by one the elements of , using the convolution rules . finally , the convolution function over the complete set can be used to compute the out - coming message as where the two terms , defined as refer to node being immunized or not ( the proportionality symbol as usual means that the message has to be properly normalized ) . 
using the convolution trick ,the computational complexity of the update rule on a node of degree reduces to .the factor comes from the computation of the trace over the auxiliary variables by means of a convolution function : for each value of the values taken by , we have to sum over all values taken by and by . finally , in order to explore directly the optimal immunization assignments, we can define the ms messages for an arbitrary set , we define the convolution function } } \sum_{k\in d } \hat{p}_{ki}\left(m_{ki } , q_i + ( 1-q_i)\left[1- \frac{t}{1-m_{ki}p_{ki } } \right]\right)\ ] ] where ] interval is divided in bins . in this way , however , the number of operations required to compute the trace in scales exponentially with the degree of the node , therefore we employ again the convolution method . for an arbitrary set , we define the quantity } } \prod_{k\in d } p_{ki}(m_{k } , m_{i}).\ ] ] then , for any disjoint sets and , we have and the bp equations become }{r_i+q_i+(1-q_i)\left[1-(1-p_{ji}m_{j})s\right]}\right ) e^{-\beta\mathscr{e}_i(s_i , m_i ) } \\ & = & e^{-\beta\mu c_i } + m_{\partial i{\setminus}j } \left(\frac{1-m_i - r_i m_i}{(1-q_i)(1-m_i)(1-p_{ji}m_j)},m_i\right ) e^{-\beta\epsilon\ell_i m_i}.\end{aligned}\ ] ] again we can derive ms equations where and is an ( irrelevant ) additive constant .on a general graph , the bp equations and are valid under the hypothesis of fast decay of correlations with the distance or replica symmetric ( rs ) assumption . under this assumption ,the statistical properties of the system are described by a unique gibbs state ( i.e. replica symmetry ) , and the bp equations admit a unique solution .random graphs are natural benchmark structures for evaluating the quality of the results obtained solving numerically the bp equations with histograms and the performances of the corresponding optimization method . in order to isolate and study the effects of immunization on the statistical properties of epidemic spreading , we consider a completely homogeneous setup : a rrg with uniform values of both spontaneous self - infection and disease transmission along the edges , i.e. , and , . for the sake of simplicity, we also consider uniform loss parameters and uniform immunization costs , i.e. , . in the bp approachexplained in section [ sec4 ] , we can give a larger statistical weight to allocations of the immunized nodes that correspond to lower values of the energy .increasing for ( see ) , the distribution becomes biased towards immunization sets that generate a smaller expected number of infected nodes compared to random immunizations of the same density . in the limit of the weight is concentrated on the minima of the energy function , i.e. on the optimal immunization sets . in this frameworkan interesting global observable is the generalization of the quantity defined in section [ sec3 ] when we perform an average over all possible immunization sets with the corresponding weight .we call this quantity . 
we can exploit the definition of in terms of the variables , and use bp to obtain an estimate of , where is given by and for the sir and sis models respectively .the chemical potential can be used to control the average fraction of immunized nodes , denoted by , that is computed from the solution of the bp equations as and for we use and for the sir and sis models respectively .it is thus possible to compute as function of for a fixed choice of the other parameters .we present here results obtained for infinitely large regular random graphs , obtained using the bp equations in the single - link approximation , i.e. when we solve self - consistently the bp equations assuming all nodes to have essentially the same statistical properties . for both sir and sis models ,the results of the bp equations in the single - link approximation are shown in fig.[fig - optbp ] with the choice of parameters and and different values of and . in the case of random immunization ( ) , the results obtained using bp on infinitely large rrg are compared with the average behavior observed by sampling the solutions of - ( and respectively ) and by simulating the sir ( and sis ) stochastic process on finite rrg of nodes .the latter are obtained sampling over configurations of immunized nodes for each value of .the agreement is very good for both models . , with uniform self - infection probability and uniform transmission probability .( a ) for the sir model , we plot the average density of nodes that got infected during the epidemic spreading as function of the average density of immunized nodes .bp results on infinitely large graphs are reported for and ( black full line ) and for and ( red dashed line ) , 5 ( green dot - dashed line ) , 10 ( blue double - dot - dashed line ) , and 20 ( violet dot - double - dashed line ) .results of sampling over eqs.- ( orange circles ) , corresponding to random immunization , are also displayed .( b ) for the sis model , we plot the average density of infected nodes in the stationary state as function of the average density of immunized nodes .bp results on infinitely large graphs are reported for and ( black full line ) and for and ( red dashed line ) , 5 ( green dot - dashed line ) , 10 ( blue double - dot - dashed line ) , and 20 ( violet dot - double - dashed line ) .results of sampling over eqs .( orange circles ) , corresponding to random immunization , are also displayed .[ fig - optbp],title="fig : " ] , with uniform self - infection probability and uniform transmission probability . 
( a ) for the sir model , we plot the average density of nodes that got infected during the epidemic spreading as function of the average density of immunized nodes .bp results on infinitely large graphs are reported for and ( black full line ) and for and ( red dashed line ) , 5 ( green dot - dashed line ) , 10 ( blue double - dot - dashed line ) , and 20 ( violet dot - double - dashed line ) .results of sampling over eqs.- ( orange circles ) , corresponding to random immunization , are also displayed .( b ) for the sis model , we plot the average density of infected nodes in the stationary state as function of the average density of immunized nodes .bp results on infinitely large graphs are reported for and ( black full line ) and for and ( red dashed line ) , 5 ( green dot - dashed line ) , 10 ( blue double - dot - dashed line ) , and 20 ( violet dot - double - dashed line ) .results of sampling over eqs .( orange circles ) , corresponding to random immunization , are also displayed .[ fig - optbp],title="fig : " ] increasing for , the solutions of the bp equations show a monotonic decrease in the density of infected nodes .the reduction of the infection level is particularly visible for intermediate values of , while it becomes almost negligible at small and large density of immunized nodes .the fact that the density is not directly fixed ( as for micro - canonical systems ) , but it is implicitly varied as an effect of tuning the chemical potential and then evaluated from the outcome of the bp equations , is the cause of an undesirable issue at large values of .it happens that , for large , the free - energy of the statistical mechanics problem is not convex over the whole interval ] interval employed in order to solve the bp equations ( here ) .also the agreement with the results of simulations is very good , and it improves increasing the number of configurations of immunized nodes employed in the simulations ( here we performed an average over realizations at fixed density ) .the distribution is always very heterogeneous and the average value ( reported in fig.[fig - optbp ] ) is not at all representative of the behavior of the system . in general, is characterized by a series of isolated peaks at low values of and by a continuous distribution in the bulk .the origin of the delta peaks is strictly related with the local structure of the graph around a node after immunization .for instance , the first peak for corresponds to isolated nodes , i.e. nodes that are completely surrounded by immunized ones and thus disconnected from the rest of the graph .such nodes are infected with probability , that is exactly the position of the first delta peak .a second peak occurs at , corresponding to the probability of a node being part of an isolated infected dimer . for infected trimers ( chain of length three ), the central node corresponds to , while the external nodes have .these peaks are visible in figure [ fig - pmsir ]. then one can continue identifying other local clusters , such as star - like structures and small chains , each one corresponding to an isolated peak in the distribution .the continuous bulk of the distribution should be instead identified with a superposition of values due to large - scale clusters of infected nodes . 
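The positions of the isolated delta peaks just discussed can be checked in closed form from the cavity equations, since they correspond to small clusters completely surrounded by immunized nodes. The values of q and p below are placeholders (the actual parameters used in the figure are not reproduced in the text), so only the structure of the computation matters.

```python
q, p = 0.1, 0.5   # assumed uniform self-infection and transmission probabilities

# isolated node: infected only by self-infection
iso = q

# isolated dimer: self-infection, or transmission from the self-infected neighbour
dimer = q + (1 - q) * p * q

# isolated trimer (chain of three): the central node sees two leaves,
# each leaf sees the central node through a cavity message
central = q + (1 - q) * (1 - (1 - p * q) ** 2)
theta_centre_to_leaf = q + (1 - q) * p * q
leaf = q + (1 - q) * p * theta_centre_to_leaf

print("isolated node :", iso)
print("dimer node    :", dimer)
print("trimer centre :", central, " trimer leaf:", leaf)
```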
of the probability that a node got infected during the sir process on random regular graphs of degree for .results are computed using bp by means of ( black full line ) on infinite networks , by sampling solutions of - ( red dashed lines ) and by means of simulations of the stochastic dynamics , both on a finite network of nodes and with a sample of immunization sets .[ fig - pmsir ] ] in figure [ fig - pmsis ] we reported a similar plot for the sis model .the structure of is generally different , with few isolated peaks in which the weight of the distribution concentrates .this effect could be due to the naive mean - field approximation applied at the level of the equations used as hard - constraints , that is less accurate than that applied in the sir model .of the probability that a node is infected in the stationary state of the sis process on random regular graphs of degree for .results are computed using bp by means of ( black full line ) on infinite networks , by sampling solutions of ( red dashed lines ) and by means of simulations of the stochastic dynamics , both on a finite network of nodes and with a sample of immunization sets .[ fig - pmsis ] ] there is a remarkable difference in the shape of when a non - uniform weight is associated with immunization sets that generate different levels of infection . in the bp formalism , this is done by increasing from zero .figure [ fig - optpe - sir]a displays the distribution for the sir model at a fixed value of , and different values of . increasing distribution concentrates on a narrower interval of values of .the information in these plots is important because it can be used to compute , for a given node in a graph , the probability that such node is infected for a given immunization strategy .one can also ask how many immunization sets exist at a given density that generate an average infection level of . for a rrg with degree and ,this is shown in fig.[fig - optpe - sir]b , where we plot the entropy of immunization sets of fixed density as function of .these results are obtained from the solutions of the bp equations increasing from 1 ( for ) . in figure[ fig - optpe - sis ] we report analogue plots for the sis model with . for a random allocation of immunized nodes ( black circles in fig.[fig - optpe - sis]a ) , exhibits ( in addition to a delta in zero ) a series of narrow peaks at positive .increasing , those at larger slowly disappear leaving only a delta peak at .fig.[fig - optpe - sis]b shows the behavior of the entropy curve at for the sis model . in both models ,the entropy at does not vanish continuously when the minimum value of infection is reached , but it remains considerably large .this is in accord with the behavior of the entropy curves displayed in fig.[fig - optbp2 ] .a continuously vanishing behavior can be observed instead if we select a density value that falls in the region in which , at large , the entropy curve becomes negative ( see later discussion about ms results ) . 
of the probability that a node got infected during the sir process on random regular graphs of degree , , , and different values of and .( b ) entropy of immunization sets of density as function of ( obtained in creasing for ) .[ fig - optpe - sir],title="fig : " ] of the probability that a node got infected during the sir process on random regular graphs of degree , , , and different values of and .( b ) entropy of immunization sets of density as function of ( obtained in creasing for ) .[ fig - optpe - sir],title="fig : " ] of the probability that a node is infected in the stationary state of the sis process on random regular graphs of degree , , , and different values of and .( b ) entropy of immunization sets of density as function of ( obtained in creasing for ) .[ fig - optpe - sis],title="fig : " ] of the probability that a node is infected in the stationary state of the sis process on random regular graphs of degree , , , and different values of and .( b ) entropy of immunization sets of density as function of ( obtained in creasing for ) .[ fig - optpe - sis],title="fig : " ] for , the bp results have been obtained using a discretization of the interval ] . in this formulation ,constraints and energy terms are local , allowing the application of the cavity method and the development of efficient message - passing algorithms such as bp .our results obtained using bp equations on random regular graphs shed light on the statistical properties of immunization sets , uncovering in which regions of the parameter space , and to what extent , targeted immunization is actually more effective than random immunization .the zero - temperature limit of these equations gives the ms algorithm , that can be used to find a solution to the optimization problem .we showed , both on synthetic and real networks , that the ms algorithm outperforms several popular immunization methods based on topological metrics and greedy strategies .the solution found using ms is not guarantee to be optimal , therefore we also performed simulated annealing , that is able to reach the optimum , at least for a sufficiently slow ( maybe exponential ) annealing schedule . for networks of moderate size, we could compare the lowest - energy immunization set found by mc methods with the solution found by ms .the latter was always at least as good as the former , providing an experimental evidence of the validity of the optimization technique .moreover , unlike mc - based methods , the ms algorithm scales only linearly with the network size . as a drawback ,the algorithm scales as , where is the number of bins necessary to represent the distribution of real values as histograms .we emphasize that the discretization method used in the current implementation is a very naive and straight - forward one , and there are several ways to considerably reduce by adopting more efficient representation of the messages ( as explained in sec.[sec5 ] ) .for this reason we expect that message - passing algorithms as the one proposed here could be used to study the immunization problem even on very large networks. this will be the scope of future research .the results of the comparison between different optimization methods on a variety of networks show that , as long as the network and the parameters are sufficiently homogeneous , all methods give approximately the same results . 
in this case, heuristic strategies based on topological metrics , such as degree - centrality or eigenvector centrality could be preferred , as they are very simple and fast .when the energy function includes inhomogeneous costs , simpler heuristics turned out to be sub - optimal in our experiments .moreover , greedy - based methods , that take into account the correct energy function , are based on a progressive scheme ( as immunized nodes are added incrementally ) and often fail to reach the ground state whenever the energy landscape is rugged .this is well demonstrated by results obtained on the small , but representative , zachary s karate club network .the method presented here could be applied to a number of other optimization problems including , but not restricted to , other epidemic models in discrete or continuous time , provided that similar fixed - point equations are defined for node or edge variables .the control variables in that case could be a set of node - dependent external parameters . using message - passing techniques it could be possible to select the configuration of external parameters that corresponds to some desired outcome for the global state of the system .this general formulation opens to a wide spectra of applications in problems involving the control of network dynamics .the heuristic algorithms considered in the paper are based on an incremental procedure , by means of which the algorithm adds one by one nodes to the immunization set , each time choosing the node that minimizes some score function .hence , given a graph and one of the scoring strategies described below , the immunization algorithm is based on the following iteration : the actual definition of the score vector depends on the heuristic strategy considered . for degree centrality, the score of a node is just the number of non - immunized neighbors . for the eigenvector centrality, the score of a node is recursively a function of the scores of neighbors , where is a properly defined constant .the score vector satisfies the eigenvector equation , where is adjacency matrix , such that if there is an edge between nodes and and otherwise . under the condition that and the graph is connected , the constant corresponds to the greatest eigenvalue of the adjacency matrix ( perron - frobenius theorem )hence , the score vector can be computed by iteration from a homogeneous initial condition , using the power method , i.e. defining where is an appropriately defined normalization constant ( recomputed at each time step ) .if the iteration converges , it gives the eigenvector centrality of the nodes . in the greedy algorithm implemented in the present paper ,the score of a node is equal to the variation in energy , computed from , associated with the addition of to the immunization set . given and , we define the space of binary configurations corresponding to immunization sets can be sampled using a montecarlo algorithm , that at large inverse temperature converges towards the the minima of the energy function . in practice ,starting from a randomly selected binary configuration , the convergence to a global minimum can be achieved only using an annealing schedule that guarantees a sufficiently slow decrease in temperature . 
given a randomly chosen initial condition and an initial value of the inverse temperature, the adopted annealing schedule reaches a final value after a prescribed number of proposed single-spin flips. unfortunately, the number of steps required to reach the minimum of the energy at large inverse temperature often scales exponentially with the system size. the procedure is the following: * choose an initial configuration of immunized nodes and set the inverse temperature to its initial value; * randomly select a node to be flipped; * compute the corresponding variation of energy from the energy function; * accept the move with the metropolis probability (one if the energy decreases, and exponentially suppressed in the product of the inverse temperature and the energy increase otherwise); * update the inverse temperature according to the annealing schedule; * if the final inverse temperature has not been reached, go back to step 1. in our simulations we tested different experimental setups, using both a linear schedule and a faster exponential one.
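The two appendix procedures lend themselves to compact implementations. The first sketch below implements the incremental heuristic with the eigenvector-centrality score computed by the power method; the graph and the stopping tolerance are illustrative choices.

```python
import numpy as np
import networkx as nx

def power_method_centrality(A, n_iter=1000, tol=1e-10):
    """Leading eigenvector of the adjacency matrix (Perron-Frobenius),
    renormalized at every step as in the power method."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        y = A @ x
        nrm = np.linalg.norm(y)
        if nrm == 0.0:                    # no edges left
            return x
        y /= nrm
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

def eigencentrality_immunization(G, n_immunized):
    """Incrementally immunize the node of largest eigenvector centrality,
    recomputing the score after each removal."""
    H = G.copy()
    immunized = []
    for _ in range(n_immunized):
        remaining = list(H)
        A = nx.to_numpy_array(H, nodelist=remaining)
        scores = power_method_centrality(A)
        best = remaining[int(np.argmax(scores))]
        immunized.append(best)
        H.remove_node(best)
    return immunized

G = nx.barabasi_albert_graph(500, 3, seed=2)
print(eigencentrality_immunization(G, 10))
```

The second sketch follows the annealing steps listed above, with a Metropolis acceptance rule and an exponential schedule. The energy is a cost-plus-infection objective of the kind used in the paper, here evaluated by re-solving a compact quenched mean-field SIS fixed point at every proposed flip (which is precisely what makes Monte Carlo expensive); the model parameters, the cost weight, and the schedule constants are all assumptions made for the example.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.random_regular_graph(d=4, n=60, seed=0)   # small graph: each move is costly
nodes = list(G)
q, p, r, mu = 1e-3, 0.3, 0.6, 0.5                # assumed rates and cost weight

def infection_level(x):
    """Average stationary infection from the quenched mean-field SIS
    equations, for the immunization configuration x (dict node -> 0/1)."""
    m = {i: 0.0 if x[i] else 0.5 for i in nodes}
    for _ in range(200):
        for i in nodes:
            if x[i]:
                m[i] = 0.0
            else:
                inf = q + (1 - q) * (1 - np.prod([1 - p * m[j] for j in G[i]]))
                m[i] = inf / (r + inf)
    return float(np.mean(list(m.values())))

def energy(x):
    # vaccination cost (uniform unit costs) plus expected infection level
    return mu * sum(x.values()) / len(nodes) + infection_level(x)

x = {i: 0 for i in nodes}                        # start with no immunized node
E = energy(x)
beta, beta_max, growth = 0.5, 200.0, 1.02        # exponential annealing schedule
while beta < beta_max:
    i = nodes[rng.integers(len(nodes))]          # step 1: propose a spin flip
    x[i] ^= 1
    E_new = energy(x)                            # step 2: energy variation
    if E_new <= E or rng.random() < np.exp(-beta * (E_new - E)):
        E = E_new                                # step 3: accept the move
    else:
        x[i] ^= 1                                # ...or undo it
    beta *= growth                               # step 4: cool down
print("final energy:", round(E, 4), " immunized nodes:", sum(x.values()))
```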
the problem of targeted network immunization can be defined as the one of finding a subset of nodes in a network to immunize or vaccinate in order to minimize a tradeoff between the cost of vaccination and the final ( stationary ) expected infection under a given epidemic model . although computing the expected infection is a hard computational problem , simple and efficient mean - field approximations have been put forward in the literature in recent years . the optimization problem can be recast into a constrained one in which the constraints enforce local mean - field equations describing the average stationary state of the epidemic process . for a wide class of epidemic models , including the susceptible - infected - removed and the susceptible - infected - susceptible models , we define a message - passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes as well as to find ( nearly ) optimal immunization sets for a given choice of parameters and costs . the algorithm scales linearly with the size of the graph and it can be made efficient even on large networks . we compare its performance with topologically based heuristics , greedy methods , and simulated annealing .
recently , wireless sensor networks ( wsns ) are used for many applications , e.g. health monitoring of buildings , factory automation , and energy management systems . for example , in building energy management system ( bems ) , numerous sensors are deployed in fields such as office room to observe environmental information , e.g. temperature , brightness , human detection , and so on .the information are gathered via wireless links and employed for saving power consumption .however limited life - time of sensors has long been an issue in wsns .conventionally , sensor s energy is supplied by electrical plug , battery or environmental energy , which respectively have the following disadvantages : limitation of installation placement , requirement of battery replacement , or lack of stability . to deal with the problem, we propose to introduce wireless energy transmission to wsns , in which sensors can be released from both wires and batteries and be stably supplied with energy .wireless energy transmission schemes are categorized into three types , i.e. radio wave emission , resonant coupling and inductive coupling . in the radio wave emission method ,energy is collected at the receiver ( rx ) using rectenna ( rectifying antenna ) to receive and convert radio wave into direct current .compared with the other schemes , long range transmission can be realized by increasing transmit power or antenna gain .space solar power satellite ( ssps ) is one of the examples of this scheme . in wsns ,numerous ubiquitous sensors are distributed in indoor environments . in order to supply energy to all of them ,the radio wave emission is employed in this paper .so far , radio wave emission technology has been mainly used for radio frequency identification ( rfid ) systems .however , rfid systems have not been designed for the wide - area coverage targeted in this paper . additionally , in passive rfid systems in which sensors do not have external battery ,radio wave is emitted only when reader / writers ( r / ws ) request tag data .consequently , sensors can not transmit the data on their own initiative .furthermore , in a single transmitter ( tx ) system , the coverage of energy supply field is restricted by maximum transmit power limited by the radio regulation . to extend the coverage ,multiple transmitters can be introduced to the systems .this , however , results in the collision between multiple transmitters . to avoid the collision while complying with the regulation , a collision avoidance scheme among multiple r / ws , e.g. time division multiple access ( tdma ) ,should be employed . however , employing tdma results in decreasing the time efficiency in terms of energy charging and increasing the system complexity in proportion to the number of introduced r / ws . on the other hand , if multiple transmitters simultaneously perform energy transmission , destructive interference between multiple wave sources results in deadspots where sensors can not be activated .this scheme is called simple multi - point in this paper . to tackle these problems , we propose a system called the wireless grid to seamlessly supply wireless energy to sensors as shown in fig .[ fig : wg ] .grid nodes , located at ceilings or integrated in fluorescent lamps , continuously supply wireless energy to sensors to charge their rechargeable batteries , e.g. 
capacitor .this energy is utilized at the sensors for performing both environmental sensing and communication with the grid nodes .the wireless grid can expand the energy coverage due to our proposal of multi - point wireless energy transmission with carrier shift diversity .owing to the introduced carrier shift diversity , which was originally proposed for data communication , artificial fading can be created to cancel out deadspots by time averaging . by this way, wireless grid can seamlessly extend the coverage and supply energy to all sensors distributed in indoor environments .several companies and researchers have attempted introducing wireless energy transmission into wireless sensor networks by employing single - point rfid systems or harvesting the ambient rf power - . in these papers , proposed to transmit the special waveform to improve the coverage . two orthogonal polarization and narrowband frequency modulation to improve the uniformity of the power density in a metallic over - moded waveguide cavity . however , the single - point scheme is difficult to improve the coverage due to path - loss and multi - path .in addition , harvesting the ambient power depends on the environment . on the other hand , employed multi - transmitter or multi - antenna systems to improve the received power of certain sensors with fixed location by phase control on each antenna .different from these researches , this paper realizes seamless coverage of energy supply field to activate all sensors distributed in the indoor environments by the multi - point wireless energy transmission with carrier shift diversity .furthermore , as a proof of concept , we conduct indoor experiments , partially presented in , to verify the effectiveness of the proposed scheme . in our experiments , we compare the received power distribution as well as the coverage in a single - point scheme , a conventional multi - point scheme , and the proposed multi - point scheme in the same environment .the experimental results show that the proposed scheme can overcome power attenuation due to the path - loss as well as the effect of standing - wave created by multipath and interference between multiple wave sources . inparticular , the maximum available value of required power to maintain 100% coverage in the proposed scheme is improved by 18 db compared with that in the single - point case .for the rest of this paper , sec .[ sec:2 ] gives theoretical support for the benefits of the proposed multi - point wireless energy transmission .the theory is validated by both experiments in real indoor environment and simulations assuming free space in sec .[ sec:3 ] . finally , sec .[ sec:4 ] concludes this paper .figure [ fig:950 ] shows the spectrum mask of 950 mhz band which is available for wireless energy transmission including uhf rfid systems . according to japanese regulation ,the maximum transmit power and the maximum equivalent isotropically radiated power ( eirp ) are respectively limited to 30 dbm and 36 dbm over an energy transmission channel of 200 khz bandwidth .there are four non - lbt ( listen before talk ) channels where transmission without carrier sensing is allowed . 
in our proposed wireless grid ,these four channels are all continuously and simultaneously employed for wireless energy transmission while the other channels are used for data communication .the 950 mhz band in japan has been reallocated for lte mobile systems while the 920 mhz band is now available for rfid systems with wireless energy transmission .it is noted that these four non - lbt channels are shifted to the 920 mhz band in the same format . in the carrier shift diversity ,the center frequency of the carrier of each energy transmission point is slightly shifted with a predefined amount to create artificial fading .the time - varying fading shuffles the interference pattern between multiple wave sources with suitable selection of carrier offsets . in multi - point wireless energy transmission with carrier shift diversity ,the available frequency bandwidth of 200 khz is divided into orthogonal subcarriers and these subcarriers are respectively allocated to different wireless energy transmission points .the carrier frequency of the tx can be defined as where and respectively denote the center frequency and the channel bandwidth as shown in fig .[ fig : csd ] .[ t ] [ t ] [ h ] [ h ] in this paper , the conventional single - point , simple multi - point and our proposed multi - point are abbreviated as sp , mp and mpcsd respectively henceforth .figure [ fig : mpc ] shows one example of power distribution when employing these three schemes . in fig .[ fig : mpc ] ( a ) , the coverage of sp is limited by the maximum transmit power defined by the radio regulation .furthermore , in real environments , the effect of standing - wave created by multipath becomes more remarkable in the case when the power difference between the direct and reflected waves is small .this effect will also occur due to reflections from floor and ceiling when horizontal polarized waves are concerned . due to this effect , it is difficult to provide seamless energy transmission to sensor nodes since the sensor nodes might be located at the deadspots .for example , reported that r / ws in real rfid systems can not read ic tags even located at a distance shorter than the designed coverage of the product .to solve the coverage limitation due to path - loss in sp , additional transmission points can be introduced as shown in figs .[ fig : mpc ] ( b ) and ( c ) to enhance the area of energy supply field .however , merely increasing the number of energy transmission points does not solve the limitation of the coverage as shown in fig .[ fig : mpc ] ( b ) .the effect of interference between multiple wave sources can be avoided by applying a tdma scheme to mp .however , the time efficiency of energy supply decreases and the complexity of the system increases in proportion to the numbers of transmitters . to deal with the problem ,we propose to apply carrier shift diversity to mp . by using the carrier shift diversity ,the destructive interference can be significantly alleviated as shown in fig .[ fig : mpc ] ( c ) while energy can be simultaneously supplied by multiple transmitters . in other words , the proposed method can realize a seamless coverage extension without reducing the time efficiency of energy supply compared to that in mp . 
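The subcarrier-allocation formula for the carrier frequency of each transmitter is garbled in the extracted text, so the following Python sketch assumes one natural reading of it: the 200 kHz energy-transmission channel of center frequency f_c is split into N equally spaced offsets of width B/N, placed symmetrically around f_c, and the n-th transmission point is assigned the n-th offset. The spacing rule and the numbers are assumptions for illustration; in the experiment described later, a much smaller offset of about 50 Hz is generated with a phase shifter.

```python
def csd_carrier_frequencies(f_c, bandwidth, n_tx):
    """Assumed carrier allocation for carrier shift diversity: n_tx equally spaced
    offsets inside one energy-transmission channel, symmetric around f_c."""
    df = bandwidth / n_tx
    return [f_c + (n - (n_tx + 1) / 2.0) * df for n in range(1, n_tx + 1)]

# example: 952.4 MHz center frequency, 200 kHz channel, 2 transmission points
freqs = csd_carrier_frequencies(952.4e6, 200e3, 2)
print([f"{f/1e6:.4f} MHz" for f in freqs])
print(f"beat (artificial fading) period: {1.0/abs(freqs[1]-freqs[0])*1e6:.1f} us")

# in the experiment a much smaller offset (about 50 Hz) is produced with a phase
# shifter; the corresponding artificial fading period is 1/(50 Hz) = 20 ms
print(f"50 Hz offset -> fading period {1.0/50.0*1e3:.0f} ms")
```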
assuming a system model for sp as shown in fig .[ fig : sp_sys ] , the received voltage at the sensor node from the -th path ( ) is represented as , \end{aligned}\ ] ] where is the distance between the tx and the rx , is the dc voltage obtained from the -th path , denotes the angular frequency with carrier frequency of , and is an initial phase .the voltage obtained from all of paths becomes according to , a single - shunt rectenna converts rf signal into dc power with 100% efficiency by using a perfect matching network and an ideal diode with lossless property .therefore , the average received power can be described as \right ) , \nonumber \\ \label{equ : sp_power}\end{aligned}\ ] ] where } $ ] and respectively express the function of time averaging and the output load of sensor nodes .this equation implies that the average received power in sp is affected by the effect of standing - wave created by multipath as shown in the second term of eq .( [ equ : sp_power ] ) . in general, is a function of the length of each path and its related reflection coefficients . according to fresnel equations , reflection coefficient depends on polarization and incidence angle to the surface of obstacles .therefore , the multipath components with short path lengths or high reflection coefficient , are dominant in the received power .[ t ] assuming a system model for mp and mpcsd as in figs .[ fig : mpwo_sys ] and [ fig : mpw_sys ] respectively , the received voltage at the sensor node obtained from the -th tx ( ) is represented as to simplify this equation , we can convert eq .( [ equ : single ] ) to the following equation , \label{equ : sig } \end{aligned}\ ] ] where is the distance between the -th tx and the rx , is the dc voltage obtained from the -th tx including the multipath effect , denotes the angular frequency with carrier frequency of , and is an initial phase as a result of standing - wave created by multipath .the received voltage obtained from all the transmitters becomes the average received power in the case of mp can be described as \right ) .\label{equ : wo}\end{aligned}\ ] ] on the other hand , in the case of mpcsd , according to the effect of the carrier shift , the average received power is given as } \right ) \nonumber \\ & = & \frac{1}{r_\mathrm{out}}\sum_{n=1}^{n}v^2_{n}\left(r_{n}\right)\end{aligned}\ ] ] here , the received power is averaged over a period of , which is the largest period of the artificial fading of mpcsd . in this equation ,the artificial fading created by carrier shift diversity cancels out the second term of eq .( [ equ : wo ] ) ( the effect of interference between multiple wave sources ) by time averaging . therefore , the average received power turns into the summation of the power from all transmitters .although the received power obtained from the -th transmitter is independently affected by standing - wave caused by multipath , the probability that all of the received powers become deadspots is significantly small in comparison to the case of sp . 
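A small numerical illustration of why time averaging removes the interference term: two equal-amplitude waves arriving in antiphase cancel completely when they share a single carrier (a deadspot), whereas averaging over one beat period of a 50 Hz carrier offset restores the sum of the individual powers. The amplitudes, phases, and the deliberately low carrier frequency are assumptions chosen only to keep the numerical average cheap; the conclusion does not depend on them.

```python
import numpy as np

def avg_received_power(amps, freqs, phases, r_out=50.0, t_avg=1e-3, n_samp=200000):
    """Time average of (sum_n a_n cos(2*pi*f_n*t + phi_n))^2 / R_out."""
    t = np.linspace(0.0, t_avg, n_samp, endpoint=False)
    v = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))
    return np.mean(v ** 2) / r_out

f0, df = 10e3, 50.0        # illustrative low carrier frequency and a 50 Hz carrier offset
amps = [1.0, 1.0]          # equal amplitudes at the receiver (assumed worst-case deadspot)
phases = [0.0, np.pi]      # the two waves arrive in antiphase

p_mp    = avg_received_power(amps, [f0, f0],      phases, t_avg=1e-2)       # same carrier
p_mpcsd = avg_received_power(amps, [f0, f0 + df], phases, t_avg=1.0 / df)   # shifted carrier
p_sum   = sum(a * a for a in amps) / (2.0 * 50.0)                           # sum of single-source powers
print(f"MP: {p_mp:.4f} W, MPCSD: {p_mpcsd:.4f} W, sum of single powers: {p_sum:.4f} W")
```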
it is noted that the artificial fading period should be smaller than the minimum period of data transmission to supply the average received power to the sensor ic .therefore , should satisfy the following inequality , in practical systems , since the effect of interference between multiple wave sources is reduced in proportion to the distance between transmitters , about 20 subcarriers might be sufficient for indoor environment .in addition , since the available frequency bandwidth is 200 khz , the duty cycle of sensor nodes should be more than 100 which is sufficient to perform the transmission of sensing data .therefore , the carrier shift diversity can be employed in wsns .[ t ] [ h ]in this section , we aim to evaluate the concept of mpcsd in real environments .therefore , we conduct experiments in indoor environment as shown in fig .[ fig : ex_model ] .as mentioned , the effect of standing - wave created by multipath has an influence to the energy coverage , so that we perform the measurement in a three dimensional space in order to observe this effect .in addition , for comparison with the ideal case of no multipath environment we conduct a simulation assuming a free - space environment . the experimental system , environment and equipments are shown in figs . [ fig : ex_model ] , [ fig : ex_model_pic ] and [ fig : ins ] respectively . a rfid r / w ( fig .[ fig : ins](a ) ) equipped with horizontal patch antennas ( fig .[ fig : ins](b ) ) of 6 dbi gain is employed to perform wireless energy transmission , while an ic tag antenna ( fig .[ fig : ins](c ) ) is used to receive energy . in addition , a variable phase shifter ( fig . [ fig : ins](d ) ) is used to produce carrier offset . the center frequency is 952.4 mhz and the transmit power per each antenna is 30 dbm . both the tx antenna and are set at the same height of 1.05 m. the measurement is performed in a three dimensional space . while the coordinates of tx1 and tx2 are fixed as ( 0 m , 0 m , 0 m ) and ( 0 m , 6.7 m , 0 m ) , that of rx is moved within the horizontal plane ( m m , m 6.2 m , = 0 m ) and the vertical plane ( 0 m , 0.5 m 6.2 m , m 0.15 m ) by a positioner with a step of 3 cm to evaluate the coverage of wireless energy transmission as shown in fig .[ fig : meas ] . for sp , we consider both cases that wireless power are transmitted from either tx antenna or . for mp ,signal from the rfid r / w is divided by a power divider , and then transmitted from both the tx antenna and . in the case of mpcsd , the carrier offset is produced by a variable phase shifter between the divider and the tx antenna .the output phase can be changed by imposing different voltages controlled by a d / a board equipped on a pc . in this experiment ,continuous constant phase change is created by setting the phase to repeatedly increase from to in a step of as shown in fig .[ fig : v_t ] . by this method ,a carrier offset of 50 hz apart from the center frequency 952.4 mhz can be generated .[ ! t ] .experimental parameters . [ cols="<,^",options="header " , ] [ h ] [ h ] [ h ] [ h ] [ h ] in order to easily understand power distribution property and coverage performance of the three schemes , we conduct a simulation assuming a free space environment where the placement of txs and rx are the same as those of the experiments . figure [ fig : pd_fs ] shows the power distribution of each scheme along the straight line ( 0 m , 0.5 m 6.7 m , 0 m ) . 
in sp , when the rx antenna is far from the corresponding tx antenna , the received power attenuates in proportion to free space path - loss . in mp , the area of energy supply fieldcan be enhanced owing to the increased number of transmission points , as compared to that of sp . however , at the central area between the two tx antennas , the received power is degraded by the effect of destructive interference between multiple wave sources . on the contrary , in mpcsd ,the degraded received power at deadspots are remarkably improved as shown in fig .[ fig : pd_fs ] .t ] [ !t ] [ !t ] [ !t ] in order to evaluate the energy transmission schemes , the following metric is introduced .equation ( [ equ : coverage ] ) shows our definition of coverage for wireless energy transmission . where denotes a measurement point , is the total number of measurement points , denotes the required power to activate the sensor ic , and is the sensor activation indicator which is defined as where is the received power simulated or measured at point .therefore , the coverage expresses the ratio of area where the received power is higher than the required power in the simulated or measured environment .figure [ fig : co_fs ] shows results of the coverage in the free space simulation . to evaluate each scheme ,six special points are marked in the figure .points ( 1 ) - ( 3 ) show the maximum available values of required powers to maintain 100% coverage in sp , mp , and mpcsd respectively. points ( 4 ) - ( 6 ) show the crossing points among sp , mp , and mpcsd .the maximum available value in sp as seen from the point ( 1 ) is determined by transmit power , path - loss attenuation , and antenna gain . since mp creates deadspots due to destructive interference between multiple wave sources , the maximum available value in mp is much less than that in sp and mpcsd as seen from the points ( 1 ) - ( 3 ) . on the other hand ,the maximum available value of mpcsd is 8.4 db higher than that in sp as shown by the gap between point ( 1 ) and ( 3 ) .in addition , the points ( 4 ) and ( 5 ) indicate that mp is only effective when supplying the power to certain sensors .furthermore , all schemes converge to 0% at the same required power as shown from the point ( 6 ) .therefore , fig .[ fig : co_fs ] shows that mpcsd is the most effective scheme to improve the uniformity of the coverage .measurement results on power distributions of sp in horizontal plane and vertical plane are shown in figs .[ fig : pd_h ] ( a)(b ) and [ fig : pd_v ] ( a)(b ) respectively . when the rx antenna is far from the corresponding tx antenna , the received power attenuates in proportion to free space path - loss in both cases .in addition , when the rx antenna is close to the opposite wall of the tx antenna , the received power is fluctuated due to the standing - wave effect caused by reflection from the wall .furthermore , at the measurement point around 3.5 m in the both cases of tx antenna and tx antenna , the received power degrades due to the standing - wave caused by the reflections from the floor ( or ceiling ) in horizontal plane . 
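Before continuing with the vertical-plane results, note that the coverage metric defined above reduces to the fraction of measurement points whose received power exceeds the required activation power, so the coverage-versus-required-power curves can be reproduced with a few lines of Python. The received-power samples below are synthetic stand-ins, not measured data.

```python
import numpy as np

def coverage(received_dbm, required_dbm):
    """Fraction of measurement points whose received power meets the requirement."""
    return float(np.mean(np.asarray(received_dbm) >= required_dbm))

# synthetic received powers at M measurement points (dBm), one array per scheme
rng = np.random.default_rng(0)
p_sp    = rng.normal(-15.0, 6.0, 1000)   # single point: wide spread, deep fades
p_mpcsd = rng.normal(-12.0, 3.0, 1000)   # multi point with CSD: higher and more uniform

req_grid = np.linspace(-40.0, 0.0, 81)
for name, p in (("SP", p_sp), ("MPCSD", p_mpcsd)):
    curve = np.array([coverage(p, r) for r in req_grid])
    full = req_grid[curve >= 1.0]        # required powers that still give 100% coverage
    print(name, "max required power at 100% coverage:",
          f"{full.max():.1f} dBm" if full.size else "below the grid")
```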
in vertical plane , the degraded point changes depending on rx antenna height , because of the phase difference between the direct wave and the reflected wave from the floor ( or ceiling ) .the results show that the coverage is limited by path - loss attenuation and deadspots due to the standing - wave created by multipath .the power distributions of mp in horizontal plane and vertical plane are shown in figs .[ fig : pd_h ] ( c ) and [ fig : pd_v ] ( c ) respectively . in mp , the area of energy supply field can be enhanced owing to the increased number of transmission points , as compared to sp in fig .[ fig : pd_h ] ( a)(b ) .in addition , since another tx is additionally located at the opposite wall , the degradation due to the standing - wave caused by reflection from the wall can be reduced .however at the central area between the two tx antennas , the received power is severely degraded by destructive interference between the two wave sources . on the contrary , in the case of mpcsd ,the degraded received power at deadspots are remarkably improved as shown in figs .[ fig : pd_h ] ( d ) and [ fig : pd_v ] ( d ) . in terms of the deadspots caused by multipath ,the number of deadspots decreases compared with that of sp since deadspots of mpcsd occur only when both the received powers from the two single - points are degraded at the same time .figure [ fig : co_h ] shows the coverage of each scheme measured in horizontal plane ( - plane ) at discrete , , , m .the trend of all schemes are similar to that of free space simulation in fig .[ fig : co_fs ] .however , as seen from the points ( 1 ) and ( 3 ) in the figure , the coverage in sp and mpcsd is shrunken compared to that of free space simulation while the coverage in mp is almost the same value as seen from the point ( 2 ) .it implies that the effect of standing - wave caused by multipath and interference between multiple wave sources decreases the coverage performances .figure [ fig : co_v ] shows the coverage of each scheme measured in vertical plane ( - plane ) at discrete , , , m .the trend of all schemes are similar to the results in the case of free space simulation in fig .[ fig : co_fs ] .however , the maximum available values of required powers in sp and mpcsd vary with respect to more remarkably as compared to the horizontal case , as seen at the points ( 1 ) and ( 3 ) in fig . [fig : co_v ] .it implies that power degradation due to reflections from the floor ( or ceiling ) depends on the rx antenna height . on the other hand , the maximum available values of required power in mpis similar to that in the horizontal case since the effect of interference between multiple wave sources is dominant compared to the effect of standing - wave caused by multipath , as seen from the point ( 2 ) in fig .[ fig : co_v ] . finally , fig .[ fig : co_all ] shows the coverage of each scheme measured in the whole horizontal or vertical plane . in sp ,the maximum available value of required power in vertical plane is lower that that in horizontal plane .it means that the effect of multipath is most dominant for sp . in mp ,the values of the two cases are almost the same .it is because that the effect of interference between multiple wave sources is dominant compared to that of standing - wave created by multipath in mp . in mpcsd ,the values of required power in both horizontal and vertical case are also almost the same .it implies that the effect of standing - wave created by multipath can be reduced by mpcsd . 
in horizontal and vertical planes ,the values in mpcsd are respectively 18.2 db and 25.6 db higher than that in sp as shown in the figure , so that the gain of mpcsd in this experiment is much higher than that in the free space simulation .[ t ] [ t ] [ t ]this paper conducted indoor experiments to verify the effectiveness of the multi - point wireless energy transmission with carrier shift diversity which can improve the coverage of energy supply field .we compared the received power distribution and the coverage performance of different energy transmission schemes including conventional single - point , simple multi - point and our proposed multi - point scheme . to easily observe the effect of standing - wave caused by multipath and interference between multiple wave sources ,the measurements were performed in a three dimensional space of an empty room and also simulated in free - space conditions .the experimental results showed that standing - wave due to multipath and interference between multiple - wave sources are respectively dominant in the single - point scheme and in the simple multi - point scheme . on the other hand , in the proposed multi - point scheme, the effect of standing - wave created by multipath and interference between multiple wave sources can be mitigated . in this experimental environment , the maximum available values of required power in the proposed scheme in horizontal and vertical planes are respectively 18.2 db and 25.6 db higher than that of the single - point scheme while the gain was 8.4 db in free space simulation .it can be concluded that the proposed scheme can mitigate power attenuation due to the path - loss as well as the effect of standing - wave created by multipath and interference between multiple wave sources , so that the proposed scheme can improve the coverage of energy supply field . for future works to improve the uniformity of the coverage of energy supply field , we need to consider the design of antenna directivity , antenna polarization and density of transmitters at given conditions of target environments .k. sakaguchi , r. p. wicaksono , k. mizutani , g. k. tran , `` wireless grid to realize ubiquitous networks with wireless energy supply , '' _ ieice tech .442 , sr2009 - 113 , pp .149 - 154 , mar .2010 .s. rahimizadeh , s. korhummel , b. kaslon , zoya popovic , `` scalable adaptive wireless powering of multiple electronic devices in an over - moded cavity , '' _ ieee antenna and propagation magazine , _ vol .1 , feb . 2011 .r. j. vyas , b. b. cook , y. kawahara , m. m. tentzeris , `` e - wehp : a batteryless embedded sensor - platform wirelessly powered from ambient digital - tv signals , '' _ ieee trans .microwave theory and techn .61 , issue 6 , jun .r. shigeta , t. sasaki , d. m. quan , y. kawahara , r. j. vyas , m. m. tentzeris , t. asami , `` ambient rf energy harvesting sensor device with capacitor - leakage - aware duty cycle control , '' _ ieee sensors journal , _ aug .
This paper presents a method to seamlessly extend the coverage of the energy supply field for wireless sensor networks in order to free sensors from wires and batteries: the multi-point scheme is employed to overcome path-loss attenuation, while carrier shift diversity is introduced to mitigate the effect of interference between multiple wave sources. As we focus on the energy transmission part, sensing and communication schemes are out of the scope of this paper. To verify the effectiveness of the proposed wireless energy transmission, this paper conducts indoor experiments in which we compare the power distribution and the coverage performance of different energy transmission schemes, including the conventional single-point, the simple multi-point, and our proposed multi-point scheme. To easily observe the effect of the standing wave caused by multipath and the interference between multiple wave sources, 3D measurements are performed in an empty room. The results of our experiments, together with those of a simulation that assumes a similar antenna setting in a free-space environment, show that the coverage of single-point and multi-point wireless energy transmission without carrier shift diversity is limited by path loss, by the standing wave created by multipath, and by the interference between multiple wave sources. On the other hand, the proposed scheme can overcome power attenuation due to path loss as well as the effects of the standing wave created by multipath and the interference between multiple wave sources.

Experiments Validating the Effectiveness of Multi-Point Wireless Energy Transmission with Carrier Shift Diversity
Daiki Maehara, Gia Khanh Tran, Kei Sakaguchi, Kiyomichi Araki, Minoru Furukawa
Institute of Technology, Tokyo, Japan; Dengyo Kosaku Co., Ltd, Saitama, Japan
Email: maehara.ee.titech.ac.jp
Draft: July 2014
electrophoresis is the standard method of dna separation by length ( viovy , 2000 ) .since the mobility of dna molecules in free solution is independent of their length , the dna is commonly introduced into a gel , which serves as a random sieve .unfortunately , the efficiency of gel electrophoresis decreases with the length of the dna .moreover , bioanalytic devices are more and more miniaturized , and incorporating random gels with characteristic pore sizes in the nanometer range into future nanodevices will presumably become increasingly problematic . therefore , much recent effort has been devoted to designing and making well - defined microstructured devices for dna separation ( _ e .g. _ , han , 1999 ; 2000 ; 2002 ; bader , 1999 ; hammond , 2000 ; duong , 2003 ) . in this paper , we focus on a geometry proposed by han and coworkers ( han , 1999 ) , which is based on the idea of entropic trapping .the dna is introduced into a small microchannel with alternating deep regions and shallow constrictions .the channel thickness in the constrictions is much smaller than the radius of gyration of the dna molecules .the deep region is large enough to accommodate the equilibrium shape of the dna molecules .entering the constrictions is thus entropically penalized , and the deep regions can be interpreted as entropic traps . a schematic cartoon of the structure is shown in fig .[ fig : device ] .-direction is much larger ( ) ., scaledwidth=90.0% ] in these structures , han made the counterintuitive observation that long dna molecules travel _ faster _ than short molecules .they explained their finding by means of a simple kinetic model . in order to escape from one of the deep regions ,the dna must overcome an activation barrier .the escape process is initiated by a thermally activated stretching of a chain portion ( length ) into the shallow constriction .this costs entropic penalty proportional to , but it also decreases the electric potential energy by an amount , where is the electric field . hence there exists a critical length , below which the entropy penalty dominates and the chain is driven back into the deep region . 
at ,the energy gain dominates and the chain escapes .the free energy at represents the activation barrier for the escape process .it depends solely on the electric field , .the rate of escape attempts , , increases with the chain size , since the amount of polymer in contact with the constriction is larger for larger chains .the mean trapping time in this simple model is given by where is the temperature and the boltzmann factor .this implies that long chains travel faster .the results of han have motivated recent monte carlo simulations by tessier ( 2002 ) .these authors used the bond fluctuation model , a well - known lattice model for polymers , to study the mobility of polymers in a geometry similar to that suggested by han within local monte carlo dynamics ( single monomer moves ) .their results confirm the trapping picture of han , and even details of the kinetic model .for example , they present evidence that the penetration depth of the chain into the constriction can be used as a `` reaction coordinate '' for escaping , with a critical value .when looking more closely at the simulation data , one notices that the trapping in the system of tessier is unexpectedly strong .although the width of the constriction is more than twice as large as in the experiments ( in units of the persistence length ) , the molecules spend almost the entire time in a trapped state , and very little time traveling , even at intermediate fields .this effect might be an artifact of the lattice model .we note that the width of the constriction in the simulations is just 10 lattice constants , while every monomer occupies a cube with 8 lattice sites , and the average bond length is 2.8 lattice constants . moreover , the monte carlo dynamics is not realistic .monomers are picked and moved randomly ( with some moves being rejected ) , whereas in the real system , they are pulled into the constriction by the electric force .dynamic monte carlo is known to reproduce diffusional and relaxational dynamics in systems near local equilibrium . nevertheless , it is not clear how well it can be applied to study driven systems .the details of the dynamics should matter particularly at higher electric fields , in situations where the chains travel so fast that they no longer reach local equilibrium in the traps .therefore , simulations of off - lattice models with a more realistic dynamical evolution are clearly desirable . as a first step in this direction , chen and escobedo ( 2003 )have recently studied the free energy landscape of chains in a single , non - periodic trap with monte carlo simulations of an ( off - lattice ) bead - spring model .the initial configuration in these simulations is that of a fully relaxed chain in the absence of an electric field .with that starting point , the free energy barrier turns out to depend on the chain length for short chains , and to level off at higher chain lengths . the data for do not seem to support the relation . since the simulations still used monte carlo , the results on dynamical properties were limited . 
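Returning to the kinetic trapping model sketched at the beginning of this section: the trapping-time formula itself is lost in the extraction, but the surrounding text fixes its structure, namely an attempt rate that grows with the amount of polymer in contact with the constriction, multiplied by an Arrhenius factor whose barrier scales as 1/E. A hedged Python sketch of that model is given below; the prefactors and the assumed R_g ~ sqrt(N) scaling of the attempt rate are illustrative, not fitted values.

```python
import numpy as np

def mean_trapping_time(n_beads, e_field, c_barrier=1.0, r0=1.0, kT=1.0):
    """Kinetic entropic-trap model: tau = exp(dF_max/kT) / r(N), with an escape
    barrier dF_max ~ c/E and an attempt rate r ~ r0*sqrt(N) (contact area ~ R_g)."""
    dF_max = c_barrier / e_field
    attempt_rate = r0 * np.sqrt(n_beads)
    return np.exp(dF_max / kT) / attempt_rate

for n in (10, 100, 1000):
    print(n, round(mean_trapping_time(n, e_field=0.5), 3))
# longer chains make more escape attempts per unit time, escape sooner,
# and therefore migrate faster
```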
in the present paper , we present brownian dynamics simulations of a full ( non - equilibrium ) periodic array of entropic traps .we employ a rouse - chain model , such as has been successfully used by others to study the migration of dna in various geometries ( deutsch , 1987 ; 1988 ; matsumoto , 1994 ; noguchi , 2001 ) .our main result is the observation of a new trapping mechanism in geometries such as fig .[ fig : device ] , which is at least as important as that suggested by han , and which can also be exploited to achieve molecular separation .the insight gained from our study should be useful for the design of future , improved molecular separation devices .the paper is organized as follows : in the next section , we describe and discuss the model and give some technical details on the simulation .the results are presented in section [ sec : results ] .we summarize and conclude in section [ sec : summary ] .we model a single dna molecule as a chain with pairwise interactions this potential describes soft , purely repulsive interactions between beads of diameter .the beads are connected by springs with the spring potential the spring constant was chosen very large , , in order to prevent chain crossings .the molecule is confined in a structured channel with a geometry similar to that used by han .the thickness of the shallow constrictions and thick regions is and , respectively , the length of a deep region is 80 , and the total length of a period is . in the lateral direction, the channel is infinite .the chain beads are repelled from the walls of the channel by means of a wall potential essentially identical to ( [ eq : lennard - jones ] ) , , where is the distance of a bead to the closest wall point .the details of the wall potential do not influence the results , as long as it is repulsive and short ranged .note that the effective width of the channel , _ i. e. _ , the width of the space accessible to a bead , is reduced by with this potential . a snapshot of a chain in such a channel is shown in fig . [ fig : snapshot ] .beads in an entropic trap .the solid lines show the electric field lines ., scaledwidth=70.0% ] dna is a charged polyelectrolyte with per base pair , thus each bead carries a charge and is subject to an electric field .we assume that the charges do not interact with one another .this will be discussed further below .the distribution of the electric potential in the channel was calculated numerically by solving the laplace equation with von neumann boundary conditions at the walls , where is the surface normal ) .the dynamical evolution of the system is described by a kramer s equations ( risken , 1989 ) where is the total force acting on bead , and its velocity , and is a friction coefficient .the random noise fulfills with or . the random noise and the friction mimics the effect of the solvent surrounding the dna .the chains differ from standard rouse chains ( doi , 1986 ) by their excluded volume interactions and by the inertia of the beads . 
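The following is a minimal Python sketch of one integration step of the Kramers (underdamped Langevin) dynamics above for a bead-spring chain in a constant field. It uses a simple semi-implicit Euler update with Gaussian noise of variance 2*zeta*kT/dt per component, whereas the simulations in the paper use a Verlet scheme with uniform noise drawn from a sphere; the harmonic-spring rest length, all numerical constants, and the omission of the excluded-volume and wall forces are simplifications for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def forces(x, k_spring=100.0, r0=1.0, q=1.0, e_field=np.array([0.01, 0.0])):
    """Harmonic springs between neighbouring beads (assumed rest length r0) plus the
    electric driving force q*E on every bead; excluded-volume and wall forces of the
    full model are omitted in this sketch."""
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]
    dist = np.maximum(np.linalg.norm(bond, axis=1, keepdims=True), 1e-12)
    f_spring = k_spring * (dist - r0) * bond / dist
    f[:-1] += f_spring
    f[1:]  -= f_spring
    f += q * e_field
    return f

def langevin_step(x, v, dt=0.01, m=1.0, zeta=1.0, kT=1.0):
    """One step of m dv/dt = F(x) - zeta*v + xi(t), with Gaussian noise of variance
    2*zeta*kT/dt per component (semi-implicit Euler update)."""
    xi = rng.normal(0.0, np.sqrt(2.0 * zeta * kT / dt), size=x.shape)
    a = (forces(x) - zeta * v + xi) / m
    v_new = v + dt * a
    x_new = x + dt * v_new
    return x_new, v_new

# 10-bead chain started as a straight line along the field direction
x = np.column_stack([np.arange(10, dtype=float), np.zeros(10)])
v = np.zeros_like(x)
for _ in range(10000):
    x, v = langevin_step(x, v)
print("centre-of-mass displacement:", x.mean(axis=0) - np.array([4.5, 0.0]))
```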
with the parameters of our model ( , see the discussion further below ) , the chains exhibit rouse dynamics on length scales of the order the gyration radius of the chain , ( streek , 2002 ) .several important physical factors are neglected in the model .first , electrostatic interactions within dna chains are not taken into account .this is justified by the fact that the debye screening length in typical electrophoresis buffers is only a few , comparable to the diameter of the double helix .second , we do not consider electro - osmotic flow , or flow in general .this reflects the experimental situation reported by han .third , hydrodynamic interactions are disregarded .this is partly justified by the theoretical observation that for polyelectrolytes dragged by an electric field , the hydrodynamic interactions should be screened over distances larger than the debye length , since the counterions are dragged in the opposite direction ( viovy , 2000 ) .unfortunately , the argument is only strictly valid as long as no non - electric forces are present ( long , 1996 ) . in our case ,the walls of the channels exert non - electrical forces which stop the polymers , but do not prevent the counterions in the debye layer from moving .thus the chains experience an additional trapping force from the friction of the counterions .this effect is disregarded in our model .furthermore , diffusion is not treated correctly even in free solution .the diffusion constant of rouse chains scales as with the chain length .including hydrodynamic interactions , one expects zimm scaling , where the gyration radius scales like for self - avoiding chains .experimentally , the diffusion constant of dna in typical buffer solutions is found to scale as ( stellwagen , 2003 ) .unfortunately , a full treatment which accounts correctly both for the ( dynamically varying ) counterion distribution as well as the hydrodynamic interactions is not feasible with standard computational resources .the simplifications of our model influence the results quantitatively , but do presumably not change them by orders of magnitude . the qualitative behavior should not be affected .keeping these caveats in mind , we can now proceed to establish a quantitative connection with the experimental setup of han .the natural units in our model are related to the parameters , , ( the charge per bead ) , and ( the temperature ) .more specifically , the energy unit is , the length unit is , the time unit is , and the electric field unit is . throughout the paper , all quantities shall be given in terms of these natural units .they shall now be related to real ( si ) units .since the experiments are carried out at room temperature , the energy unit is .the length unit is obtained from matching the persistence length of the model chains , ( streek , 2002 ) with that of dna , , yielding .the persistence of the chain is also used to determine the number of base pairs ( bp ) per bead .a dna molecule contains approximately 150 bp on a stretch of length . in our model ,the average bond length between two beads is ( streek , 2002 ) , thus we have 1.9 beads per persistence length , and one bead represents roughly 80 bp . the elongation for a chain crossing with minimal energy is , which corresponds to an energy barrier for crossing .the boltzmann - factor for this energy barrier turns out to be less than . 
to check the simulation program ,the bond length distribution was compared to the boltzmann - factor and very good agreement was found .furthermore , no bond ever exceeded a length of .thus no chain crossing occured in our simulations .the time scale is calculated from the diffusion constant . for rouse chains of length , is given by experimentally , stellwagen ( 2003 ) have recently reported the relation choosing as reference a chain of length 40 kbp ( ) , we obtain .finally , the unit of the electric field can be identified from matching the mobility of free chains .the theoretical value is . in experiments ,the mobility depends strongly on the choice of the buffer .unfortunately , han do not report explicit measurements of the free - chain mobility of dna in the buffer used in their experiments . in the microchannel, they observe that the overall mobility saturates at high field strengths .the apparent maximum mobility results from an average over a slow motion in the reduced electric field of the deep regions , and a fast motion in the enhanced electric field of the constrictions ( see fig . [fig : snapshot ] ) . for large periods ,the ratio of these two field strengths is simply the inverse of the thickness ratio .one can then derive a relation between the true free chain mobility and the apparent mobility : where and are the relative lengths for the deep regions and shallow constrictions , respectively ( ) .han measure in an experimental setup with , , and ( han , 1999 ) . using eq .( [ eq : muapp ] ) , one can thus estimate .this value of seems very low compared to typical values in the literature .stellwagen have measured dna mobilities in tris - acetate buffers at various concentrations and found values between ( stellwagen , 2002 ) . in 40 mm tris - acetate buffer , they obtain ( stellwagen , 2003 ) . using the latter value as an order - of - magnitude estimate, we identify .however , the results of stellwagen can probably not be applied here , because the mobility depends strongly on the buffer .the first estimate , , leads to the identification .han ( 1999 , 2000 ) have separated dna of lengths between 5 and 160 kbp . herewe study chains of length , corresponding to 80 kbp , which is comparable .the depth of the deep channel regions in the experiments was , which also compares well with the simulation , .the depth of the constriction was in the experiments .in our case , it is , which is slightly larger , but still comparable .since our channels are wider , we expect that the trapping in the simulations will be less pronounced than in reality .the total length per trap ( period ) was in the experiments ( mostly ) and in the simulations .the average electric field strength in the experiments was varied between 20 and 80 , which corresponds to or , depending on the identification of . in the simulations , we studied field strengths between .when comparing field strengths , we must keep in mind that the electric field in the channel is not homogeneous ( see fig .[ fig : snapshot ] ) . in our geometry , the electric field in the constrictionis enhanced by a factor of 2.5 . in the experimental setup ,the enhancement factor is only , due to the fact that the length ratio between the shallow and deep regions is different ( 50:50 in the experiment , 20:80 in the simulations ) .the ratio of field strengths in the shallow and deep regions in our simulations is roughly 4 . 
in the experimental geometry ,it is for large periods and smaller otherwise .the remaining model parameter that has yet to be determined is the mass of a bead .we note that the actual value of the mass has no influence on the static properties of the chain ( _ e .g. _ , the chain flexibility ) , nor on the diffusive part of the dynamics .it does , however , determine the relative importance of vibrational modes in the chain and other inertia effects .the latter can be characterized by the electrophoretic relaxation time , _i. e. _ , the characteristic decay time of the drift velocity of free flow dna , if the electric field is suddenly turned off .typical values of are ( grossmann , 1992 ) . in our model, the electrophoretic relaxation time is , and the correct value of the mass would be .however , this is unfortunate from a computational point of view , because the mean velocity per bead , , becomes large for such small bead masses , and the time step has to be chosen short as a consequence .the simulation becomes very inefficient . on the other hand , we are not interested in inertia effects here , and we wish to study dynamical effects on much longer time scales than . therefore , we chose to give the beads an unphysically high mass , , leading to an electrophoretic relaxation time . on time scales and less , the dynamics will thus be unrealistic , but this does not affect the slow diffusive dynamics .we close this section with a few technical remarks . the dynamic equations ( [ kramer ] )were integrated using the verlet algorithm .the stochastic noise was realized by picking in every time step a vector at random .dnweg and paul ( 1991 ) have shown that the distribution of random numbers in such a procedure does not necessarily have to be gaussian . here, we used a uniform distribution in a sphere .since we consider single chains only , no periodic boundaries were necessary . with the mass , the time step could be chosen 0.01 .typical run lengths were between 4 and 20 million ( 5 - 25 seconds ) .the simulation jobs were managed by the condor software program ( condor ) , which was developed by the condor team at the computer science department of the university of wisconsin ( condor , 2003 ) .three examples of trajectories at the average field are shown in fig . [fig : trajectory ] .they reveal three qualitatively different modes of migration .the trajectory of very short chains ( corresponding to 800 bp ) is dominated by diffusion .the chains wander back and forth in the trap , until they eventually escape into the next trap .the movement of chains with intermediate chain length ( or 8 kbp ) is much more directional , but still irregular .they are often trapped at the entrance of constrictions .in contrast , long chains ( or 80 kbp ) travel smoothly .the first ( leading ) monomer is sometimes trapped , but this does not arrest the rest of the chain . whereas the time spent in one box fluctuates strongly for shorter chains , it is roughly constant for large chains . in the presence of an average electric field .the middle solid line shows the position of the center of mass .the ( dashed ) upper and lower lines show the positions of the first and the last monomer . in the case , these three lines can not be distinguished from each other .the dashed horizontal lines indicate the positions where constrictions begin . 
The behavior of intermediate chains ( ) has similarity to that observed in the simulations of Tessier (2002). However, the trapping is much less pronounced in our case, and chains frequently pass from one box to another without being trapped at all. Trapping effects comparable to those reported by Tessier were only observed at the lowest field strength, . At that field value, chains of all lengths got trapped. As we shall see below, however, chain separation turned out to be not efficient for such small fields. We have carried out simulations for seven different chain lengths 10, 20, 50, 100, 200, 500, and 1000, and for five average field values = 0.0025, 0.005, 0.01, 0.02, and 0.04. The resulting mobilities, determined as , are summarized in fig. [fig:mobility]. For all fields except the lowest, the mobility increases steadily with the chain length. At high chain lengths and high fields, it begins to saturate. The maximum value is only about half as large as the free-chain mobility, due to the fact that the chains spend a disproportionate amount of time in the deep regions, where the local electric field is lower than average.

[Fig. [fig:mobility] caption: mobility in units of the mobility of a free chain as a function of chain length for different average electric fields. The dashed lines show the prediction of eq. ([eq:model]) for the fields (from top to bottom).]

At the lowest field ( ), the mobility depends only slightly on the chain length and even decreases with chain length for small . At such small fields, backwards diffusion becomes important for short chains. We have seen in fig. [fig:trajectory] that short chains explore the whole trap. At , they sometimes even travel backwards into the trap that they just left. The conformational entropic penalty for entering the constrictions is small for short chains. In the limit , the inverse mobility is thus simply proportional to the number of times the chain visits the entrance of the constriction. Since the latter scales like (the inverse diffusion constant), the mobility is then expected to decrease with chain length. In our system, this is observed at for chain lengths smaller than . For , the mobility increases again with chain length. The resulting overall chain length dependence is small, and the chain separation is not efficient. The quality of molecular separation systems is often characterized in terms of the theoretical plate number, where is the retention time, i.e. the total time spent in the system, and the width of the peak at the baseline. Fig. [fig:plates] shows the plate number per trap for our system. In the interesting regime, we have 10 - 100 plates per trap.
at traps per meter ,this corresponds to theoretical plate numbers of plates / m , which is quite good and in agreement with the results of han ( 2000 ) . as indicated ( in units ) ., scaledwidth=70.0% ] we will now investigate the migration modes in more detail .to this end , we have calculated histograms of the retention times spent in one trap .they were defined as the difference of the times when the first monomer of a chain first enters the deep region of the trap .[ fig : histo ] shows distributions of retention times for chains of different length in the field . and electric field . the thick solid line is a fit to the initial exponential decay at chain length ., scaledwidth=70.0% ] the chains need a minimum time to travel from one trap to another .after that `` dead '' period , the histogram rises rapidly and reaches a maximum at .> from a comparison of histograms for all our simulation data ( not shown ) , we deduce that the position of the maximum of the distribution depends very little on the chain length and is strictly proportional to the inverse field .the product determines the maximum mobility in our system . beyond the maximum, the distribution decays rapidly for long chains ( ) , and much more slowly for short chains ( ) , following an exponential behavior .this is consistent with the common picture , where the migration is determined by one single escape rate .the histogram for intermediate chain length , however , reveals that the situation is more complex in reality .the initial decay of the distribution can be fitted with one exponential , however , a long - time tail emerges at times beyond .thus the distribution of retention times at has _ two _ characteristic time scales and . to some extent , this phenomenon is already apparent in the trajectory of fig .[ fig : trajectory ] .the figure suggests that there exist two qualitatively different ways how chains pass from one trap to another : either they travel relatively straight and unimpeded , or they get trapped and linger for some time at the border of the constriction .the trapping mechanism suggested by han alone can not explain these observations .here we propose an additional trapping mechanism , which also slows down short chains and which presumably accounts to a large extent for the chain length dependence of the mobility observed in our simulations .the idea is that chains get trapped at the side walls and corners of the deep boxes due to diffusion .the electric field lines lead the chains from the outlet of one constriction directly to the entrance of the next one . with a certain probability, the chain will therefore reach the next constriction without detours , and then get delayed there due to the mechanism suggested by han .this accounts for the fast time scale . on the other hand, chains may also diffuse out of the main path .they may access regions of the trap at the walls and in the corners where the electric field is very small . in that case , they are caught in a force - free region , and they can `` escape '' only by diffusion .we will now explore whether our data support this picture . in the following analysis , only data for chain lengths and fields were used . 
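A generic way to carry out such a two-time-scale analysis is sketched in Python below: fit a single exponential to the initial decay just after the maximum of the retention-time histogram to obtain the fast scale, then estimate the slow scale from the mean excess time of the tail beyond a cutoff. The synthetic data and the cutoff rule are illustrative assumptions; the estimator actually used in the paper for the slow scale (an integral over the tail of the histogram) is described in the next paragraphs and is only partially legible in the extracted text.

```python
import numpy as np

def fit_exponential_decay(t, counts):
    """Least-squares fit of log(counts) = log(A) - t/tau on strictly positive bins."""
    mask = counts > 0
    slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

rng = np.random.default_rng(1)
# synthetic retention times: dead time plus fast escapes, and a 20% slow (sidetracked) population
t_dead, tau1, tau2, n = 1.0, 0.5, 10.0, 20000
fast = t_dead + rng.exponential(tau1, int(0.8 * n))
slow = t_dead + rng.exponential(tau2, int(0.2 * n))
times = np.concatenate([fast, slow])

counts, edges = np.histogram(times, bins=100, range=(0.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])

peak = np.argmax(counts)
early = slice(peak + 1, peak + 16)               # initial decay just after the maximum
tau1_est = fit_exponential_decay(centers[early], counts[early].astype(float))

t_cut = t_dead + 5 * tau1_est                    # tail dominated by the slow process
tail = times[times > t_cut]
tau2_est = np.mean(tail - t_cut)                 # mean excess time of an exponential tail
print(f"tau1 ~ {tau1_est:.2f}, tau2 ~ {tau2_est:.2f}")
```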
in most of these systems two time scales were observed .the value of the fast time scale could be extracted by fitting an exponential to the initial decay of the histogram .the determination of the slow time scale was much more difficult , due to the poor statistics for the late time tails of the histograms .we used two approaches : first we fitted the long range tail with an exponential function to obtain a rough estimate . assuming , we calculated via which is independent of .we used to analyze the data .we checked whether the result deviated strongly from the previous estimate , and whether it depended strongly on the cutoff .if this was the case , ( because the data for scattered strongly ) , the result was discarded .the remaining values are compiled in fig .[ fig : tau ] , together with the data for .if our suspicion is correct that the fast process corresponds to the mechanism of han , then the rate should be proportional to the amount of polymer in contact with the channel .since the channel entrance is essentially one dimensional , the contact area should be proportional to the gyration radius of the chain .[ fig : tau ] shows that the fast time scale indeed scales like . for different electric fields .the filled symbols correspond to the fast time scale , the open symbols to the slow time scale .the thick lines show for comparison power laws as indicated ., scaledwidth=70.0% ] the relation between and the electric field strength is more complicated .according to han , the chains must overcome a free energy barrier of height proportional to in order to escape .on the other hand , the prefactor in eq .( [ eq : tau ] ) may also depend on .the resulting -dependence can be rather complex . in the field range of our simulation, the resulting relation can be approximated by the empirical law .like the fast time scale , the slow time scale also decreases with the chain length , but the dependence here is very weak .unfortunately , the quality of the data was not sufficient for a quantitative analysis .if our interpretation is correct , then characterizes an escape from a low - force region .we note that extended chains in a corner experience a net electric force towards the wall , even though the field lines of course never enter the wall ( see fig .[ fig : trap ] ) . in order to leave the corner ,the chain must either move against the force , or change its shape . in both cases, it has to overcome a free energy barrier before it can get rescued from the field lines .a simple ansatz would predict the escape probability to be proportional to the diffusion constant , and to the area covered by the chain , , or the total chain length .this would yield a very weak net chain length dependence or , which is consistent with the observed behavior ( fig .[ fig : tau ] ) .thus the slow time scale itself does not contribute much to the chain length dependence of the mobility .the main effect comes from the fact that the _ relative number _ of chains caught in the field - free region depends on the chain length .the travel time from one channel to the next for undeflected chains is slightly smaller than .we assume that chains have to diffuse at least over a distance into the field - free region ( direction , see fig .[ fig : device ] ) in order to get caught .the distribution along the -direction after a time will be approximately gaussian : .the probability of being caught is thus where is the complementary error function .that a chain is still in the trap at the time , vs. 
in units of .the solid line represents a fit to eq .( [ eq : erf ] ) with . ,scaledwidth=70.0% ] we can test this prediction under the assumption that the two time scales and are sufficiently far apart that they can be separated . in that case , one can choose a time such that almost all undeflected chains have left the trap , while almost all deflected chains are still in the field - free region . can then be approximated by the relative number of chains left in the field - free region at the time .[ fig : pcum ] shows the result of such an estimate .the data collapse reasonably well for different and can be fitted with eq .( [ eq : erf ] ) , with the fit parameter . inserting and ,one obtains .thus chains get caught if they sidetrack by more than from their main path , which is determined by the field lines .these considerations establish the existence of a new mechanism which produces chain length dependent mobility . to assess the relative importance of the new mechanism , we compare the real mobility data , fig .( [ fig : mobility ] ) , with a very simple model .chains either travel straight across the trap , or get sidetracked into the field - free region .traveling across the trap takes at least the time .the chains caught in the field - free region spend an additional time in the trap .we make the simplification that is independent of the chain length and electric field and given by the number , which is roughly the value at chain length and field strength .the relative number of chains caught in the deep region is calculated according to eq .( [ eq : erf ] ) , with ( taken from fig .[ fig : pcum ] ) .the resulting mobility is ^{-1}.\ ] ] this prediction is compared with the actual data in fig . [fig : mobility ] ( dashed lines ) . despite the simplicity of eq .( [ eq : model ] ) , the agreement is remarkably good .to summarize , we have presented the first off - lattice brownian dynamics simulation of dna migration in an entropic trap array .we reproduce the experimentally observed phenomenon that the mobility increases with the chain length .this result can be traced back to two distinct mechanisms .the first mechanism has already been discussed by han : chains get delayed at the narrow channels connecting the traps .they escape through the channels with a probability which is proportional to the radius of gyration of the chain and thus scales as .however , we found that this effect accounts only in part for the total chain length dependence . in the second mechanism ,the chains are trapped with a certain probability at the side and in the corner of the box .the characteristic time for escaping such a configuration is very long .the trapping probability increases with the diffusion constant , which is in turn inversely proportional to the chain length . as a result ,the mobility increases with the chain length . to our best knowledge, this mechanism has not yet been described in the literature .it becomes relevant when the period of the structure is small .indeed , han have studied structures with periods ranging from to ( han , 1999 ) , but they reported separation by length only for their smallest structure with , in a system with dimensions comparable to those studied here . 
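The two-state mobility model can be evaluated with a few lines of code once the trapping probability is available. All prefactors below (crossing time, extra delay, sidetracking distance) are illustrative stand-ins for the elided numerical values in the text, so the output should be read only qualitatively: the mobility rises with chain length because shorter chains are caught in the field-free corners more often.

```python
import numpy as np
from scipy.special import erfc

def mobility(N, E, period=90.0, d=3.0, t_extra=500.0, D1=1.0, v1=1.0):
    """Toy evaluation of the two-state mobility model (all prefactors assumed).

    t0 = period/(v1*E): undeflected crossing time of one trap unit;
    p : probability of sidetracking into the field-free corner during t0;
    t_extra : additional delay if caught (taken constant, as in the text)."""
    t0 = period / (v1 * E)
    D = D1 / N                              # free-draining diffusion constant
    p = 0.5 * erfc(d / np.sqrt(4.0 * D * t0))
    v_mean = period / (t0 + p * t_extra)
    return v_mean / E                       # mobility = mean velocity / field

for N in (10, 50, 100, 300):
    print(N, [round(mobility(N, E), 3) for E in (0.5, 1.0, 2.0)])
```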
we have observed a number of other phenomena , which we shall not describe in detail here .in contrast to chen ( chen , 2003 ) , we have considered truly non - equilibrium systems .subsequent escapes can not necessarily be considered as independent events .the longest chains do not recover the equilibrium coil structure in the middle of the trap , but they remain stretched in the direction .moreover , our data seem to provide evidence that chains even retain some memory of the previous escape process . at high fields ,successive escape times seem to be correlated .unfortunately , the statistical quality of the data is not good enough to allow for a more thorough analysis .the situation becomes even more complicated when the shallow channel is made wider .whereas in the cases presented here , long chains migrated faster than short chains , we have observed the inverse effect in microfluidic system with wider channels ( duong , 2003 ) .the effect as well as other , even more unexpected phenomena , can be reproduced in simulations .these phenomena will be presented in a forthcoming publication ( streek , 2004 ) .we thank ralf eichhorn for critically reading the manuscript .this work was funded by the german science foundation ( sfb 613 , teilprojekt d2 ) .bader , j. s. , hammond , r. w. , henck , s. a. , deem , m. w. , mcdermott , g. a. , bustillo , j. m. , simpson , j. w. , mulhern , g. t. , rothberg , j. m. , 1999 . dna transport by a micromachined brownian ratchet device .pnas 96 , 13165 - 13169 .duong , t. t. , kim g. , ros , r. , streek , m. , schmid , f. , brugger , j. , anselmetti , d. , ros , a. , 2003 .size - dependent free solution dna electrophoresis in structured microfluidic systems .microelectronic engineering 67 - 68 , 905 - 912 .hammond , r. w. , bader , j. s. , henck , s. a. , deem , m. w. , mcdermott , g. a. , bustillo , j. m. , rothberg , j. m. , 2000 .differential transport of dna by a rectified brownian motion device .electrophoresis 21 , 74 - 80 .han , j. , turner , s. w. , craighead , h. g .. 1999 .entropic trapping and escape of long dna molecules at submicron size constriction .83 , 1688 - 1691 ; erratum . 2001 .86 , 1394 .
using brownian dynamics simulations , we study the migration of long charged chains in an electrophoretic microchannel device consisting of an array of microscopic entropic traps with alternating deep regions and narrow constrictions . such a device has been designed and fabricated recently by han for the separation of dna molecules ( science , 2000 ) . our simulation reproduces the experimental observation that the mobility increases with the length of the dna . a detailed data analysis allows to identify the reasons for this behavior . two distinct mechanisms contribute to slowing down shorter chains . one has been described earlier by han : the chains are delayed at the entrance of the constriction and escape with a rate that increases with chain length . the other , actually dominating mechanism is here reported for the first time : some chains diffuse out of their main path into the corners of the box , where they remain trapped for a long time . the probability that this happens increases with the diffusion constant , _ i. e. _ , the inverse chain length . gel electrophoresis , microfluidic system , dna separation , entropic trap , computer simulation
neurons in a brain communicate information , emitting spikes which propagate through axons and dendrites to neurons at the next stage .it has been a long - standing controversy whether information of neurons is encoded in the firing rates ( _ rate code _ ) or in the more precise firing times ( _ temporal code _ ) .some experimental results having been reported seem to support the former code while some the latter .in particular , a recent success in brain - machine interface ( bmi ) suggests that the population rate code is employed in sensory and motor neurons while it is still not clear which code is adopted in higher - level cortical neurons .experimental observations have shown that in many areas of the brain , neurons are organized into groups of cells such as columns in the visual cortex . a small patch in cortexcontains thousands of similar neurons , which receive inputs from the same patch and other patches .there are many theoretical studies on the property of neuronal ensembles consisting of equivalent neurons , with the use of spiking neuron models or rate - code models ( for a review on neuronal models , see ; related references therein ) . in the spiking neuron model , the dynamics of the membrane potential of a neuron in the ensembleis described by the hodgkin - huxley ( hh)-type nonlinear differential equations ( des ) which express the conductance - based mechanism for firings . reduced , simplified models such as the integrate - and - fire ( if ) and fitzhugh - nagumo ( fn ) models have been also employed .in contrast , in the rate - code model , neurons are regarded as transducers between input and output signals , both of which are expressed in terms of spiking rates .computational neuroscientists have tried to understand the property of ensemble neurons by using the two approaches : direct simulations ( dss ) and analytical approaches .ds calculations have been performed for large - scale networks mostly described by the simplest if model .since the computational time of ds grows as with , the size of the ensemble , a large - scale ds with more realistic models becomes difficult .although ds calculations provide us with useful insight to the firing activity of the ensemble , it is desirable to have results obtained by using analytical approaches .analytical or semi - analytical calculation methods for neuronal ensembles have been proposed by using the mean - field ( mf ) method , the population - density approaches - , the moment method and the augmented moment method ( amm ) ( details of the amm will be discussed shortly ) .it is interesting to analytically obtain the information of the firing rate or the interspike interval ( isi ) , starting from the spiking neuron model .it has been shown that the dynamics of the spiking neuron ensemble may be described by des of a macroscopic variable for the population density or spike activity , which determines the firing rate of ensemble neurons - . 
by using the - relation betweenthe applied dc current and the frequency of autonomous firings , the rate - code model for conduction - based neuron models is derived .when we apply the fokker - planck equation ( fpe ) method to the neuron ensemble described by the if model , the averaged firing rate is expressed by which denotes the distribution probability of the averaged membrane potential with the threshold for the firing .it is well known that neurons in brains are subjected to various kinds of noise , though their precise origins are not well understood .the response of neurons to stimuli is expected to be modified by noise in various ways . indeed , although firings of a single _ in vitro _neuron are reported to be precise and reliable , those of _ in vivo _ neurons are quite unreliable due to noisy environment .the strong criticism against the temporal code is that it is not robust against noise , while the rate code is robust .it is commonly assumed that there are two types of noise : additive and multiplicative noise .the magnitude of the former is independent of the state of variable while that of the latter depends on its state .interesting phenomena caused by the two noise have been investigated .it has been found that the property of multiplicative noise is different from that of additive noise in some respects .( 1 ) multiplicative noise induces a phase transition , creating an ordered state , while additive noise works to destroy the ordering .( 2 ) although the probability distribution in stochastic systems subjected to additive white noise follows a gaussian , multiplicative white noise generally yields a non - gaussian distribution - .( 3 ) the scaling relation of the effective strength for additive noise given by is not applicable to that for multiplicative noise : , where and denote effective strengths of multiplicative and additive noise , respectively , in the -unit system . a naive approximation of the scaling relation for multiplicative noise : as adopted in ref . , yields the result which does not agree with that of ds . 
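The qualitative difference between additive and multiplicative noise summarised in points (1)-(3) is easy to reproduce with a one-variable Langevin equation. The sketch below (parameter values are illustrative, not taken from the paper) integrates dx/dt = -lam*x + h + alpha*x*eta + beta*xi with a Heun scheme, which converges to the Stratonovich interpretation, and prints the excess kurtosis of the stationary signal: it is essentially zero for purely additive noise and clearly non-zero once multiplicative noise is switched on, i.e. the stationary distribution becomes non-Gaussian.

```python
import numpy as np

def simulate(alpha, beta, lam=1.0, h=0.1, dt=1e-3, n_steps=300_000, seed=0):
    """Heun (Stratonovich) integration of
        dx/dt = -lam*x + h + alpha*x*eta(t) + beta*xi(t),
    with independent zero-mean white noises eta, xi; alpha and beta are the
    multiplicative and additive noise strengths."""
    rng = np.random.default_rng(seed)
    drift = lambda y: -lam * y + h
    x = h / lam
    out = np.empty(n_steps)
    for i in range(n_steps):
        dW1, dW2 = rng.standard_normal(2) * np.sqrt(dt)
        xp = x + drift(x) * dt + alpha * x * dW1 + beta * dW2        # predictor
        x = x + 0.5 * (drift(x) + drift(xp)) * dt \
              + 0.5 * alpha * (x + xp) * dW1 + beta * dW2            # corrector
        out[i] = x
    return out[50_000:]                                              # drop transient

for a, b in ((0.0, 0.3), (0.5, 0.1)):
    x = simulate(a, b)
    m, s = x.mean(), x.std()
    kurt = ((x - m) ** 4).mean() / s**4 - 3.0      # zero for a Gaussian
    print(f"alpha={a}, beta={b}:  mean={m:.3f}  excess kurtosis={kurt:.2f}")
```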
in this paper , we will study the property of neuronal ensembles based on the rate - code hypothesis .rate models having been proposed so far , are mainly given by where ( ) denotes a firing rate of a neuron ( to ) , the relaxation rate , the coupling strength , the gain function , an external input , and expresses the magnitude of additive white noise of with the correlation : .the rate model as given by eq .( 1 ) has been adopted in many models based on neuronal population dynamics .the typical rate model is the wilson - cowan model , with which the stability of an ensemble consisting of excitatory and inhibitory neurons is investigated .the rate model given by eq .( 1 ) with is the hopfield model , which has been extensively adopted for studies on the memory in the brain incorporating the plasticity of synapses into .ds calculations have been performed , for example , for a study of the population coding for .analytical studies of eq .( 1 ) are conventionally made for the case of , adopting the fpe method with mf and diffusion approximations .the stationary distribution obtained by the fpe for eq .( 1 ) generally follows the gaussian distribution .isi data obtained from experiments have been fitted by a superposition of some known probability densities such as the gamma , inverse - gaussian and log - normal distributions - .the gamma distribution with parameters and is given by which is derived from a simple stochastic if model with additive noise for poisson inputs , being the gamma function . for in eq .( 2 ) , we get the exponential distribution describing a standard poisson process .the inverse gaussian distribution with parameters and given by ,\ ] ] is obtained from a stochastic if model in which the membrane potential is represented as a random walk with drift . the log - normal distribution with parameters and given by ,\ ] ]is adopted when the log of isi is assumed to follow a gaussian form .fittings of experimental isi data to a superposition of these probability densities have been extensively discussed in the literature - .the purpose of the present paper is to propose and study the generalized , phenomenological rate model [ eqs .( 5 ) and ( 6 ) ] .we will discuss ensembles with _finite _ populations , contrary to most existing analytical theories except some ones ( _ e.g. _ ref . ) , which discuss ensembles with _ infinite _ .the stationary distribution of our rate model will be discussed by using the fpe method .it is shown that owing to the introduced multiplicative noise , our rate model yields not only the gaussian distribution but also non - gaussian distributions such as gamma , inverse - gaussian - like and log - normal - like distributions .the dynamical properties of our rate model will be studied by using the augmented moment method ( amm ) which was previously proposed by the present author .based on a macroscopic point of view , hasegawa has proposed the amm , which emphasizes not the property of individual neurons but rather that of ensemble neurons . 
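The three candidate ISI densities of Eqs. (2)-(4) are all available in scipy.stats; the short sketch below parametrizes each of them to a common mean and coefficient of variation so that their shapes can be compared directly. The chosen mean and CV are illustrative values, not fitted experimental data.

```python
import numpy as np
from scipy import stats

mean_isi = 1.0   # arbitrary units
cv = 0.8         # target coefficient of variation

gam = stats.gamma(a=1.0 / cv**2, scale=mean_isi * cv**2)        # mean=1, CV=cv
lam = mean_isi / cv**2                                          # IG shape parameter
ig  = stats.invgauss(mu=mean_isi / lam, scale=lam)              # mean=1, CV=cv
sig = np.sqrt(np.log(1.0 + cv**2))
ln  = stats.lognorm(s=sig, scale=mean_isi * np.exp(-sig**2 / 2))  # mean=1, CV=cv

for name, d in [("gamma", gam), ("inverse-Gaussian", ig), ("log-normal", ln)]:
    m, v = d.stats(moments="mv")
    print(f"{name:17s} mean={float(m):.3f}  CV={np.sqrt(float(v))/float(m):.3f}  "
          f"p(T=0.2)={d.pdf(0.2):.3f}  p(T=3)={d.pdf(3.0):.4f}")
```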
in the amm, the state of finite -unit stochastic ensembles is described by a fairly small number of variables : averages and fluctuations of local and global variables .for example , the number of deterministic equation in the amm becomes _ three _ for a -unit langevin model .the amm has been successfully applied to a study on the dynamics of the langevin model and stochastic spiking neuron models such as fn and hh models , with global , local or small - world couplings ( with and without transmission delays ) - .the amm in was originally developed by expanding variables around their stable mean values in order to obtain the second - order moments both for local and global variables in stochastic systems . in recent papers ,we have reformulated the amm with the use of the fpe to discuss stochastic systems subjected to multiplicative noise : the fpe is adopted to avoid the difficulty due to the ito versus stratonovich calculus inherent to multiplicative noise . in the present paper ,a study on the langevin model with multiplicative noise made in , has been applied to an investigation on the firing properties of neuronal ensembles .our method aims at the same purpose to effectively study the property of neuronal ensembles as the approaches developed in refs .-- .the paper is organized as follows . in sec .2 , we discuss the generalized rate model for an ensemble containing neurons , investigating its stationary and dynamical properties . some discussions are presented in sec . 3 , where variabilities of the rate and isi are calculated . the final sec .4 is devoted to our conclusion .for a study of the property of a neuron ensemble containing finite neurons , we have assumed that the dynamics of the firing rate ( ) of a neuron ( to ) is given by with here , and are arbitrary functions of , denotes the coordination number , an input from external sources and the coupling strength : and express the strengths of additive and multiplicative noise , respectively , given by and expressing zero - mean gaussian white noise with correlations given by the rate model in eq .( 1 ) adopts and ( no multiplicative noise ) .the gain function expresses the response of the firing rate ( ) to a synaptic input field ( ) .it has been theoretically shown in that when spike inputs with the mean isi ( ) are applied to an hh neuron , the mean isi of output signals ( ) is for ms and ms for ms .this is consistent with the recent calculation for hh neuron multilayers , which shows a nearly linear relationship between the input ( ) and output rates ( ) for hz ( fig . 3 of ref .it is interesting that the - relation is continuous despite the fact that the - relation of the hh neuron shows a discontinuous , type - ii behavior according to ref . . in the literature ,two types of expressions for have been adopted so far . in the first category ,sigmoid functions such as ( _ e.g. _ ) and ( _ e.g. _ ) have been adopted . in the second category ,gain functions such as ( _ e.g. _ ) have been employed , modeling the - function for the frequency of autonomous oscillation against the applied dc current , expressing the critical value and the heaviside function : for and 0 otherwise .the nonlinear , saturating behavior in arises from the property of the refractory period ( ) because spike outputs are prevented for after firing at . in this paper , we have adopted a simple expression given by although our results to be presented in the following sections , are expected to be valid for any choice of . 
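As a concrete reference for the direct simulations discussed later, the following sketch integrates one assumed realization of Eqs. (5) and (6), with the linear choices F(r) = -lam*r, G(r) = r and H(x) = x quoted in the text, all-to-all coupling with coordination number Z = N - 1, and a Heun scheme (the appendix also uses Heun for the DS). The precise placement of the coupling inside H and all parameter values are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def simulate_ensemble(N=100, lam=1.0, w=0.5, I_ext=0.1, alpha=0.5, beta=0.1,
                      dt=1e-3, n_steps=200_000, seed=0):
    """Heun (Stratonovich) integration of an assumed form of Eqs. (5)-(6):
        dr_i/dt = -lam*r_i + (w/Z) sum_{j!=i} r_j + I_ext
                  + alpha*r_i*eta_i(t) + beta*xi_i(t),   Z = N - 1."""
    rng = np.random.default_rng(seed)
    r = np.full(N, I_ext / lam)
    Z = N - 1

    def drift(r):
        s = (w / Z) * (r.sum() - r)        # input from the other neurons
        return -lam * r + s + I_ext

    for _ in range(n_steps):
        dW1 = rng.standard_normal(N) * np.sqrt(dt)
        dW2 = rng.standard_normal(N) * np.sqrt(dt)
        rp = r + drift(r) * dt + alpha * r * dW1 + beta * dW2
        r = r + 0.5 * (drift(r) + drift(rp)) * dt \
              + 0.5 * alpha * (r + rp) * dW1 + beta * dW2
    return r

r = simulate_ensemble()
print("stationary <r> =", r.mean(), " local variance =", r.var())
```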
by employing the fpe, we may discuss the stationary distribution for and , which is given by ,\end{aligned}\ ] ] with , \\y(r ) & = & 2 \int \:dr \ : \left[ \frac{h(i)}{\alpha^2 g(r)^2+\beta^2 } \right],\end{aligned}\ ] ] where and 1 for ito and stratonovich representations , respectively .hereafter we mainly adopt the stratonovich representation .* case i * and for the linear langevin model , we get ^{-(\lambda/\alpha^2 + 1/2 ) } e^{y(r ) } , \end{aligned}\ ] ] with where . in the case of , we get the -gaussian given by ^{\frac{1}{1-q}},\end{aligned}\ ] ] with we examine some limiting cases of eq .( 14 ) as follows .\(a ) for and ( _ i.e. _ additive noise only ) , eq .( 14 ) yields \(b ) for and ( _ i.e. _ multiplicative noise only ) , eq .( 14 ) leads to distributions calculated with the use of eqs .( 14)-(20 ) are plotted in figs .1(a)-1(c ) .the distribution for ( without multiplicative noise ) in fig .1(a ) shows the gaussian distribution which is shifted by an applied input . when multiplicative noise is added ( ) , the form of changed to the -gaussian given by eq .1(b ) shows that when the magnitude of additive noise is increased , the width of is increased .1(c ) shows that when the magnitude of external input is increased , is much shifted and widely spread . note that for and ( additive noise only ) , is simply shifted without a change in its shape when increasing [ eq . ( 19 ) ] .* case ii * and ( ) the special case of and has been discussed in the preceding case i [ eqs .( 14)-(20 ) ] .for arbitrary ( ) and ( ) , the probability distribution given by eqs .( 11)-(13 ) becomes ^{-1/2}\:e^{x(r)+y(r ) } , \end{aligned}\ ] ] with where is the hypergeometric function .some limiting cases of eqs .( 21)-(23 ) are shown in the following .\(a ) the case of was previously studied in .\(b ) for and ( _ i.e. _ additive noise only ) , we get . \ ] ] \(c ) for and ( _ i.e. _ multiplicative noise only ) , we get , \nonumber \\ & & \hspace{3cm}\mbox{for } \\& \propto & r^{-(2\lambda/\alpha^2+b ) } \exp\left [ -\left ( \frac{2h}{\alpha^2(2b-1 ) } \right ) r^{-2b+1 } \right ] , \hspace{0.2cm}\mbox{for } \\ & \propto & r^{(2h/\alpha^2 - 1/2 ) } \exp\left [ -\left ( \frac{2\lambda}{\alpha^2 a } \right ) r^a \right ] , \hspace{1.5cm}\mbox{for } \\ & \propto & r^{-[2(\lambda - h)/\alpha^2 + 1/2 ] } , \hspace{2cm}\mbox{for }\end{aligned}\ ] ] \(d ) in the case of and , we get ,\end{aligned}\ ] ] which reduces , in the limit of , to , \hspace{1cm}\mbox{for }\end{aligned}\ ] ] * case iii * and we get .\hspace{1cm}\mbox{for }\end{aligned}\ ] ] figure 2(a ) shows distributions for case ii and various with fixed values of , , , and ( multiplicative noise only ) . with more decreasing ,a peak of at becomes sharper .2(c ) shows distributions for case ii and various with fixed values of , , , and ( multiplicative noise only ) .we note that a change in the value yields considerable changes in shapes of .2(b ) and 2(d ) will be discussed shortly . when the temporal isi is simply defined by , its distribution is given by we get various distributions of depending on functional forms of and g(x ) .for , and , eq .( 26 ) yields , \end{aligned}\ ] ] which expresses the gamma distribution [ eq .( 2 ) ] . for , and , eq . 
( 25 ) leads to , \end{aligned}\ ] ] which is similar to the inverse gaussian distribution [ eq .( 3 ) ] .for , and ,( 31 ) yields , \end{aligned}\ ] ] which is similar to the log - normal distribution [ eq .( 4 ) ] .2(b ) and 2(d ) show obtained from shown in figs .2(a ) and 2(c ) , respectively , by a change of variable with eq .2(b ) shows that with more increasing , the peak of becomes sharper and moves left .we note in fig .2(d ) that the form of significantly varied by changing in .when we consider the global variable defined by the distribution for is given by analytic expressions of are obtainable only for limited cases .\(a ) for and , is given by , \end{aligned}\ ] ] where .\(b ) for , we get with ^n,\ ] ] where is the characteristic function for given by with expressing the modified bessel function .some numerical examples of are plotted in figs . 3 , 4 and 5 .figures 3(a ) and 3(b ) show for and , respectively , when is changed . for , is the gaussian distribution whose width is narrowed by a factor of with increasing .in contrast , for is non - gaussian , whose shape seems to approach a gaussian for increasing .these are consistent with the central - limit theorem .effects of an external input on and are examined in figs .4(a ) and 4(b ) . figure 4(a ) shows that in the case of ( additive noise only ) , and are simply shifted by a change in .this is not the case for , for which and are shifted and widened with increasing , as shown in fig .4(b ) . figures 5(a ) and 5(b ) show effects of the coupling on and . for , and changed only slightly with increasing . on the contrary , for , an introduction of the couplingsignificantly modifies and as shown in fig .5(b ) .next we will discuss the dynamical properties of the rate model by using the amm . by employing the fpe, we obtain equations of motion for moments : , , and where .then we get equations of motion for the three quantities of , and defined by where expresses the mean , the averaged fluctuations in local variables ( ) and fluctuations in the global variable ( ) .we get ( for details see ) , \\\frac{d \gamma}{dt } & = & 2f_{1 } \gamma + 2h_{1 } \left ( \frac{w n}{z}\right ) \left(\rho-\frac{\gamma}{n } \right ) \nonumber \\ & + & ( \phi+1 ) ( g_{1}^2 + 2 g_{0}g_{2})\alpha^2\gamma + \alpha^2 g_{0}^2+\beta^2 , \\ \frac{d \rho}{dt } & = & 2 f_{1 } \rho + 2 h_{1 } w \rho + ( \phi+1 ) ( g_{1}^2 + 2 g_{0}g_{2})\:\alpha^2 \:\rho + \frac{1}{n}(\alpha^2 g_{0}^2 + \beta^2),\end{aligned}\ ] ] where , , and .original -dimensional stochastic des given by eqs .( 5 ) and ( 6 ) are transformed to the three - dimensional deterministic des given by eqs .( 48)-(50 ) . when we adopt eqs .( 48)-(50 ) are expressed in the stratonovich representation ( ) by where , and with . before discussing the dynamical properties, we study the stationary properties of eqs .( 53)-(55 ) .we get the stationary solution given by , \\\rho & = & \frac{(\alpha^2 \mu^2+\beta^2)}{2 n ( \lambda-\alpha^2-w h_1)},\end{aligned}\ ] ] where eq . ( 56 ) expresses the fifth - order algebraic equation of .the stability of eqs .( 53)-(55 ) around the stationary solution may be shown by calculating eigenvalues of their jacobian matrix , although actual calculations are tedious .figure 6 shows the dependences of and in the stationary state for four sets of parameters : ( solid curves ) , ( 0.5 , 0.1 , 0.0 ) ( dashed curves ) , ( 0.0 , 0.1 , 0.5 ) ( chain curves ) and ( 0.5 , 0.1 , 0.5 ) ( double - chain curves ) , with , and . 
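The three AMM variables are plain ensemble statistics and can therefore be computed directly from direct-simulation output. The helper below implements the definitions quoted above (mean mu, averaged local fluctuations gamma, and fluctuations rho of the global rate); the synthetic array standing in for simulation data is only there to make the snippet runnable.

```python
import numpy as np

def amm_observables(R):
    """mu, gamma, rho from an array R of firing rates, shape (trials, times, N).

    mu(t)    : mean of r_i over units and trials
    gamma(t) : averaged fluctuations of the local variables, <(r_i - mu)^2>
    rho(t)   : fluctuations of the global rate R_glob = (1/N) sum_i r_i."""
    mu = R.mean(axis=(0, 2))
    gamma = ((R - mu[None, :, None]) ** 2).mean(axis=(0, 2))
    R_glob = R.mean(axis=2)
    rho = ((R_glob - mu[None, :]) ** 2).mean(axis=0)
    return mu, gamma, rho

# usage with synthetic data: a weak shared fluctuation plus independent noise
rng = np.random.default_rng(0)
trials, times, N = 200, 50, 100
common = 0.05 * rng.standard_normal((trials, times, 1))
R = 1.0 + common + 0.2 * rng.standard_normal((trials, times, N))
mu, gamma, rho = amm_observables(R)
print(f"mu={mu[0]:.3f}  gamma={gamma[0]:.4f}  rho={rho[0]:.4f}  gamma/N={gamma[0]/N:.5f}")
```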
for all the cases , is proportional to , which is easily seen in eq .in contrast , shows a weak dependence for a small ( ) .it is noted that and approximately express the widths of and , respectively .the -dependence of in fig .6 is consistent with the result shown in figs .3(a ) and 3(b ) , and with the central - limit theorem .we have studied the dynamical properties of the rate model , by applying a pulse input of given by with , , and which expresses the background input .7(a ) , 7(b ) and 7(c ) show the time dependences of , and for and when the input pulse given by eq .( 59 ) is applied : solid and dashed curves show the results of the amm and ds averaged over 1000 trials , respectively , with , , and .7(b ) and 7(c ) show that an applied input pulse induces changes in and .this may be understood from terms in eqs .( 54 ) and ( 55 ) .the results of the amm shown by solid curves in figs .7(a)-(c ) are in good agreement with ds results shown by dashed curves .figure 7(d ) will be discussed shortly .it is possible to discuss the synchrony in a neuronal ensemble with the use of and defined by eqs . (46 ) and ( 47 ) . in orderto quantitatively discuss the synchronization , we first consider the quantity given by ^ 2 > = 2 [ \gamma(t)-\rho(t)].\ ] ] when all neurons are firing with the same rate ( the completely synchronous state ) , we get for all , and then in eq . ( 60 ) . on the contrary , we get in the asynchronous state where . we may define the synchronization ratio given by which is 0 and 1 for completely asynchronous ( ) and synchronous states ( ) , respectively .figure 7(d ) shows the synchronization ratio for and plotted in figs .7(b ) and 7(c ) , respectively , with , , and .the synchronization ratio at and is 0.15 , but it is decreased to 0.03 at by an applied pulse .this is because by an applied pulse , is more increased than and the ratio of is reduced .the synchronization ratio vanishes for , and it is increased with increasing the coupling strength . next we show some results for different indices of and in and .8(a ) shows the time dependence of for ( solid curve ) and ( dashed curve ) with , , and .the saturated magnitude of for is larger than that for .solid and dashed curves in fig .8(b ) show for and ( 1,0.5 ) , respectively , with , , and .both results show similar responses to an applied pulse although for a background input of for is a little larger than that for .we have applied also a sinusoidal input given by + i^{(b)},\ ] ] for and with , , , and and 20 .time dependences of for and are plotted in figs .9(a ) and 9(b ) , respectively , with , , and .amm results of shown by solid curves in figs .9(a ) and ( b ) are indistinguishable from ds results ( with 100 trials ) shown by dashed curves , chain curves denoting sinusoidal input .as the period of becomes shorter , the magnitude of becomes smaller .the delay time of against an input is about ( ) for both and .we may calculate variabilities of and , by using their distributions of and , which have been obtained in sec .2 . for example , in the case of and , the distribution of for , and given by eq . 
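The explicit normalization of the synchronization ratio is lost in the extracted text; the expression below is one choice consistent with the two quoted limits, S = 0 for the asynchronous state (rho = gamma/N) and S = 1 for complete synchrony (rho = gamma), and should be read as an assumption rather than the paper's exact formula.

```python
import numpy as np

def sync_ratio(gamma, rho, N):
    """Synchronization ratio built from the AMM fluctuations.
    Normalised so that S = 0 when rho = gamma/N (asynchronous) and
    S = 1 when rho = gamma (completely synchronous)."""
    return (rho / gamma - 1.0 / N) / (1.0 - 1.0 / N)

N = 100
print(sync_ratio(gamma=0.04, rho=0.04 / N, N=N))   # -> 0.0
print(sync_ratio(gamma=0.04, rho=0.04, N=N))       # -> 1.0
```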
( 26 ) leads to the relevant gamma distribution for isi , , given by eq .( 33 ) yields equations ( 65 ) and ( 68 ) show that both and are increased with increasing the magnitude ( ) of multiplicative noise .it has been reported that spike train variability seems to correlated with location in the processing hierarchy .a large value of is observed in hippocampus ( ) whereas is small in cortical neurons ( ) and motor neurons ( ) . in order to explain the observed large ,several hypotheses have been proposed : ( 1 ) a balance between excitatory and inhibitory inputs , ( 2 ) correlated fluctuations in recurrent networks , ( 3 ) the active dendrite conductance , and ( 4 ) a slowly decreasing tail of input isi of ( ) at large .our calculation shows that multiplicative noise may be an alternative origin ( or one of origins ) of the observed large variability .we note that the variability of is given by in the amm ( _ e.g. _ eq .( 65 ) agrees with for and given by eqs .( 56 ) and ( 57 ) , respectively , with ) . it would be interesting to make a more detailed study of the variability for general and as discussed in sec .we have proposed the generalized rate - code model given by eqs .( 5 ) and ( 6 ) , in which the relaxation process is given by a single . instead, when the relaxation process consists of two terms : with , the distribution becomes ^{c_1}\:[p_2(r)]^{c_2},\ ] ] where ( ) denotes the distribution only with or .in contrast , when multiplicative noise arises from two independent origins : the distribution for becomes }.\ ] ] similarly , when additive noise arises from two independent origins : the distribution for becomes equations ( 70 ) , ( 72 ) and ( 74 ) are quite different from the form given by which has been conventionally adopted for a fitting of the theoretical distribution to that obtained by experiments - .we have proposed the generalized rate - code model [ eqs . ( 5 ) and ( 6 ) ] , whose properties have been discussed by using the fpe and the amm .the proposed rate model is a phenomenological one and has no biological basis . as discussed in sec .1 , the conventional rate model given by eq . ( 1 ) may be obtainable from a spiking neuron model when we adopt appropriate approximations to des derived by various approaches such as the population - density method - and others -. it would be interesting to derive our rate model given by eqs .( 5 ) and ( 6 ) , starting from a spiking neuron model .the proposed generalized rate model is useful in discussing stationary and dynamical properties of neuronal ensembles .indeed , our rate model has an interesting property , yielding various types of stationary non - gaussian distributions such as gamma , inverse - gaussian and log - normal distributions , which have been experimentally observed - .it is well known that the langevin - type model given by eq .( 1 ) can not properly describe fast neuronal dynamics at the characteristic times shorter than ( ) .this is , however , not a fatal defect because we may evaded it , by adopting an appropriate value of for a given neuronal ensemble with .actually , the dynamical properties of an ensemble consisting of excitatory and inhibitory neurons has been successfully discussed with the use of the langevin - type wilson - cowan model ( for recent papers using the wilson - cowan model , see : related references therein ) .one of the disadvantages of the amm is that its applicability is limited to the case of weak noise because it neglects contributions from higher moments . 
on the contrary ,the amm has following advantages : \(i ) the dynamical properties of an -unit neuronal ensemble may be easily studied by solving three - dimensional ordinary des [ eqs .( 48)-(50 ) ] , in which three quantities of , and have clear physical meanings ,\(ii ) analytic expressions for des provide us with physical insight without numerical calculations ( _ e.g. _ the dependence of follows the central - limit theorem [ eq . (58 ) ] , and \(iii ) the synchronization of the ensemble may be discussed [ eq . ( 61 ) ] . as for the item ( i ) , note that we have to solve the -dimensional stochastic langevin equations in ds , and the -dimensional partial des in the fpe .then the amm calculation is very much faster than ds : for example , for the calculation shown in fig .9(a ) , the ratio of the computation time of the amm to that of ds becomes / .we hope that the proposed rate model may be adopted for a wide class of study on neuronal ensembles described by the wilson - cowan - type model .this work is partly supported by a grant - in - aid for scientific research from the japanese ministry of education , culture , sports , science and technology .amm calculations have been performed by using the second - order runge - kutta method with a time step of 0.01 .direct simulations for eqs .( 5 ) and ( 6 ) have been performed by using the heun method with a time step of 0.0001 : ds results are averages of 100 trials otherwise noticed .the ratio of the computation time of the amm to that of ds for -unit langevin model is given by where denotes the trial number of ds , and and are time steps of amm and ds calculations , respectively .for example , this ratio becomes for the calculation shown in fig .9(a ) , for which , , and : dss for multiplicative noise require finer time steps than those for additive noise .
we have proposed a generalized langevin - type rate - code model subjected to multiplicative noise , in order to study stationary and dynamical properties of an ensemble containing _ finite _ neurons . calculations using the fokker - planck equation ( fpe ) have shown that owing to the multiplicative noise , our rate model yields various kinds of stationary non - gaussian distributions such as gamma , inverse - gaussian - like and log - normal - like distributions , which have been experimentally observed . dynamical properties of the rate model have been studied with the use of the augmented moment method ( amm ) , which was previously proposed by the author with a macroscopic point of view for finite - unit stochastic systems . in the amm , original -dimensional stochastic differential equations ( des ) are transformed into three - dimensional deterministic des for means and fluctuations of local and global variables . dynamical responses of the neuron ensemble to pulse and sinusoidal inputs calculated by the amm are in good agreement with those obtained by direct simulation . the synchronization in the neuronal ensemble is discussed . variabilities of the firing rate and of the interspike interval ( isi ) are shown to increase with increasing the magnitude of multiplicative noise , which may be a conceivable origin of the observed large variability in cortical neurons . * generalized rate - code model for neuron ensembles + with finite populations * hideo hasegawa _ department of physics , tokyo gakugei university + koganei , tokyo 184 - 8501 , japan _ ( ) _ pacs no . _ 84.35.+i , 87.10.+e , 05.40.-a
we are continually exposed to viruses . despite these constant biological assaults ,the immune system successfully fends off most viruses .considerable effort has been devoted to modeling the factors that influence whether a person exposed to a particular virus will eventually become ill .typical theoretical models of viral infections account for the evolution of the number of infected cells , healthy cells , and viruses as a function of the rates of microscopic infection and transmission rates .such models have provided many useful insights about the dynamics of viral diseases . in this work, we study a toy model the bursty random walk ( fig .[ model ] ) that captures one of the elements of viral infection dynamics .the position of the walk in one dimension represents the number of active viruses in an organism .since the immune system constantly kills viruses , they are removed from the body at some specified rate , corresponding to steps to the left in the bursty random walk . however , with a small probability , a virus enters and successfully hijacks a cell , the outcome of which is a burst of a large number of new viruses into the host organism , corresponding to a long step to the right in the model .when the number of virus particles reaches zero , the organism may be viewed as being free of the disease .conversely , when the number of viruses reaches a threshold value , the organism can be viewed as either being ill or dead . with this simplistic perspective , being cured or becoming ill is recast as a first - passage problem for the bursty random walk in an interval of length .when the burst length is small , the walk has a diffusive continuum limit whose first - passage properties are well known .if the burst length is of the order of the system length , this burstiness effects strongly affect the first - passage characteristics .this large - burst limit should be applicable to infectious processes where the threshold number of viruses for being ill is not large and the number of new viruses created in a burst event is a finite fraction of this threshold . related discreteness effects were found in the first - passage characteristics of a random walk that hops uniformly within a range ] , with . .] in the next section , we define the model and the basic first - passage quantities that we will investigate . in secs .[ sec : e ] and [ sec : t ] , we determine the exit probabilities and the average exit times to either end of the interval as a function of the burst length . when , very different first - passage properties arise compared to those for pure diffusion in the interval . perhaps the most striking is the conditional exit time to reach , corresponding to a state of infection , which has a non - monotonic dependence on the starting position .we compute these first - passage properties from the backward kolmogorov equations for the exit probabilities and exit times . 
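The first-passage quantities of the bursty walk are easy to estimate by direct Monte Carlo, which is a useful check on the analytical results that follow. The sketch below absorbs the walk at x <= 0 and at x >= N, uses p = b/(b+1) so that the mean displacement per step vanishes (as stated in the text), and reports the right-exit probability together with the conditional exit times; the values of N, b and the starting points are illustrative.

```python
import numpy as np

def first_passage(x0, N, b, n_walks=10_000, seed=0):
    """Monte Carlo first-passage statistics for the bursty walk on [0, N].
    Single steps of -1 occur with probability p = b/(b+1); bursts of +b occur
    with probability q = 1 - p, so the mean displacement per step is zero."""
    p = b / (b + 1.0)
    rng = np.random.default_rng(seed)
    right, t_right, t_left = 0, [], []
    for _ in range(n_walks):
        x, t = x0, 0
        while 0 < x < N:
            x += b if rng.random() > p else -1
            t += 1
        if x >= N:
            right += 1
            t_right.append(t)
        else:
            t_left.append(t)
    E = right / n_walks
    tr = np.mean(t_right) if t_right else np.nan
    tl = np.mean(t_left) if t_left else np.nan
    return E, tr, tl

N, b = 20, 16
for x0 in (2, 6, 10, 14, 18):
    E, tr, tl = first_passage(x0, N, b)
    print(f"x0={x0:2d}  E={E:.3f}  <t|right>={tr:7.1f}  <t|left>={tl:7.1f}")
```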
in the concluding section ,we briefly discuss the corresponding first - passage properties for the bursty birth / death model .this process accounts for the feature that bursts should occur at a rate that is proportional to the number of live viruses .it is naturally to model this situation by defining the rate at which steps occur to be proportional to the current position of a bursty walk on the interval .in the bursty random walk , unit - length steps to the left occur with probability , while long steps ( bursts ) of length occur with probability ( fig .[ model ] ) .we choose and so that the average position of the walk does not change at each step ; however , most of our results are derived for general and . the motivation for considering these hopping probabilitiesis based on the experimental observation that viral counts in an organism often remain nearly constant for time periods much longer than the lifetime of individual viruses .such a near constancy could only arise if an organism produces new viruses ( by bursts ) and clears viruses at similar overall rates . with the constraint that the number of virus particles remains fixed , on average , the respective probabilities of making a single step to the right and to the leftare the bursty random walk is confined to the finite interval ] , the boundary conditions are : applying these boundary conditions to the general solution , we obtain for the restricted exit probabilities to and to , respectively .parenthetically , once we know one of or , the other is determined by the martingale property that the mean position of the bursty walk always remains fixed .that is , after all the probability has reached an absorbing boundary , the two restricted exit probabilities are related by +l\times \mathcal{r}_0(x)+ ( l+1)\times \mathcal{r}_1(x)=x ] , ] , etc ., instead of directly solving for the roots of a characteristic polynomial of order .as we shall see , this partitioning significantly reduces the order of the recursions for the exit probabilities . in the extreme situation where the burst length , a single burst results in exit at or beyond the right end of the interval .thus the total exit probability satisfies the recursion ; that is , either the walk steps to the left and then exits from , or the walk steps to the right and exits immediately .the solution to this recursion is a constant plus an exponential function .the boundary condition immediately gives because of the overwhelming probability of stepping to the left , the exit probability to the right boundary is not close to one for from below . as an example , for , we have ( fig .[ b23](c ) ) ., ( b ) , and ( c ) , , and .simulations are on a system of length .,title="fig:",scaledwidth=35.0% ] , ( b ) , and ( c ) , , and .simulations are on a system of length .,title="fig:",scaledwidth=35.0% ] , ( b ) , and ( c ) , , and .simulations are on a system of length .,title="fig:",scaledwidth=35.0% ] ] ( region ii ) . to exit from regioni requires at least two bursts . ] for the case , we partition ] ( defined as region i ) and ] into the three subintervals ] , and ] so that there are two subintervals to consider : ] . 
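Instead of solving the interval-by-interval recursions in closed form, the exit probability and the conditional exit time to the right edge can be obtained numerically by writing the backward (first-step) equations as one small linear system; this gives an independent check on the analytical expressions. The identity used for the conditional time, u(x) = E(x) + p u(x-1) + q u(x+b) with u(x) = E[t; exit right], is a standard first-step argument and is not copied verbatim from the paper.

```python
import numpy as np

def exit_stats(N, b):
    """Exact right-exit probability E(x) and conditional exit time to the
    right, from the backward Kolmogorov equations of the bursty walk:
        E(x) = p E(x-1) + q E(x+b),   E(0) = 0, E(y >= N) = 1,
        u(x) = E(x) + p u(x-1) + q u(x+b),  u = 0 at both boundaries,
    with u(x) = E[t ; exit right] so that <t | right> = u / E."""
    p = b / (b + 1.0)
    q = 1.0 - p
    A = np.eye(N - 1)
    cE = np.zeros(N - 1)
    for i, x in enumerate(range(1, N)):      # interior states x = 1 .. N-1
        if x - 1 >= 1:
            A[i, x - 2] -= p                 # coupling to E(x-1)
        if x + b <= N - 1:
            A[i, x + b - 1] -= q             # coupling to E(x+b)
        else:
            cE[i] += q                       # E(x+b) = 1 beyond the right edge
    E = np.linalg.solve(A, cE)
    u = np.linalg.solve(A, E)                # same matrix, source term E(x)
    return E, u / E

E, t_right = exit_stats(N=20, b=16)
for x in (2, 6, 10, 14, 18):
    print(f"x={x:2d}  E={E[x-1]:.3f}  <t|right>={t_right[x-1]:6.2f}")
```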
for concretenesswe determine the exit probability to the specific site ; similar behavior arises for other exit points in ] ( region i ) and ] , the interval naturally divides into the three subintervals ] , and $ ] .the recursion relations satisfied by the total exit probability to the right edge of the interval are : these exit probabilities must also satisfy the joining and boundary conditions we generalize the approach used to solve the two - interval case ( cf.eq . ) by first solving for in the form , substituting this result into the recursion for to obtain its general form , and finally substituting the result for into the recursion for .all the unknown constants may then be fixed by the boundary and joining conditions , and the final result is : + 2\big\}}{pq^b\big[(b - y)(b - y+1 ) pq^b-2y\big]+2}~,\\ \mathcal{e}^{\rm ii}(x)&=1-\frac{2q^x \big[pq^b ( x - y)+1\big]}{pq^b \big[(b - y ) ( b - y+1 ) pq^b-2y\big]+2}~,\\ \mathcal{e}^{\rm iii}(x)&=1-\frac{2q^x}{pq^b\big[(b - y)(b - y+1)pq^b-2y\big]+2}~ , % \mathcal{e}^{\rm iii}(x)&=1-\frac{2q^x}{p^2q^{2b}(b - y)(b - y+1)-2ypq^b+2}~ , \end{split}\end{aligned}\ ] ] where .this procedure can be continued to as many subintervals as desired both for the total and for the restricted exit probabilities .
we investigate the first - passage properties of bursty random walks on a finite one - dimensional interval , in which unit - length steps to the left occur with probability close to one , while long steps ( `` bursts '' ) to the right occur with small probability . this stochastic process provides a crude description of the early stages of virus spread in an organism after exposure . the interesting regime arises when the burst length is comparable to the interval length ; there the conditional exit time to reach the far end of the interval , corresponding to an infected state , has a non - monotonic dependence on the initial position . both the exit probability and the infection time exhibit complex dependences on the initial condition due to the interplay between the burst length and the interval length .
let be a compact -manifold with local coordinates and let be a smooth vector - field on , whose local components are given by . this paper is concerned with the following pair of partial differential equations ( pdes ) : for a time dependent function and a time dependent density on . in the above pdes we are following the einstein summation convention , and summing over the index `` . ''equation , which is sometimes called the `` transport equation , '' describes how a scalar quantity is transported by the flow of .equation , which is sometimes called the `` continuum equation '' or `` liouville s equation '' describes how a density ( e.g. a probability distribution ) is transported by the flow of .such pdes arises in a variety of contexts , ranging from mechanics to control theory , and can be seen as zero - noise limits of the forward and backward kolmogorov equations . the solution to takes the form where is the flow map of at time ( * ? ? ?* chapter 18 ) . from thisobserve that exhibits a variety of conservation laws .for example , if and are solutions to , then so is their product , , and their sum , .similarly , the solution to takes the form .one can deduce that the -norm of is conserved in time ( * ? ? ?* theorem 16.42 ) . finally , is the adjoint evolution equation to in the sense that the integral is constant in time .one finds that the final integral vanishes upon substitution of and and applying integration by parts . ] .this motivates the following definition of qualitative accuracy : [ def : quality ] a numerical method for and is _ qualitatively accurate _ if it conserves discrete analogs of scalar multiplication / addition , the -norm and the total mass for densities and the sup - norm for functions .both and can be numerically solved by a variety of schemes . for a continuous initial condition , , for example , the method of characteristics describes a solution to as a time - dependent function where is the solution to .this suggests using a particle method to solve for at a discrete set of points .in fact , a particle method would inherit many discrete analogs of the conservation laws of , and would as a result be _qualitatively accurate_. for example , given the input , the output of a particle method is identical to the ( componentwise ) product of the outputs obtained from inputing and separately . however , particle methods converge much slower than their spectral counterparts when the function is highly differentiable . in the casewhere is the unit circle , , a spectral method can be obtained by converting to the fourier domain where it takes the form of an ordinary differential equation ( ode ) : where and denote the fourier transforms of and . in particular , this transformation converts into an ode on the space of fourier coefficients .a standard spectral galerkin discretization is obtained by series truncation .such a numerical method is good for -data , in the sense that the convergence rate , over a fixed finite time , is faster than where is the order of truncation . 
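For the circle, the standard spectral Galerkin scheme described above amounts to truncating a convolution operator in Fourier space. The sketch below builds that truncated matrix for an illustrative velocity field and initial condition; it is the baseline scheme (spectrally accurate, but not qualitatively accurate in the sense of Definition 1) against which the half-density scheme is compared later.

```python
import numpy as np
from scipy.linalg import expm

def transport_galerkin(K, X_hat, f0_hat, T):
    """Fourier-Galerkin truncation of  df/dt + X(x) df/dx = 0  on the circle,
    keeping modes |k| <= K.  X_hat, f0_hat: dicts {mode m: Fourier coefficient}
    of the velocity field and of the initial condition."""
    modes = np.arange(-K, K + 1)
    A = np.zeros((modes.size, modes.size), dtype=complex)
    for i, k in enumerate(modes):
        for j, l in enumerate(modes):
            m = k - l
            if m in X_hat:
                A[i, j] = -1j * l * X_hat[m]     # Fourier symbol of -X d/dx
    f0 = np.array([f0_hat.get(int(k), 0.0) for k in modes], dtype=complex)
    return modes, expm(T * A) @ f0

# advect f0(x) = cos(x) by X(x) = 1 + 0.5*cos(x) for time T = 1
X_hat = {0: 1.0, 1: 0.25, -1: 0.25}
f0_hat = {1: 0.5, -1: 0.5}
modes, fT = transport_galerkin(K=16, X_hat=X_hat, f0_hat=f0_hat, T=1.0)
x = np.linspace(0, 2 * np.pi, 200)
fT_x = np.real(sum(c * np.exp(1j * k * x) for k, c in zip(modes, fT)))
print("max |f(T)| on the grid:", fT_x.max())   # exact solution keeps sup-norm 1
```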
in particular , spectral schemes converge faster than particle methods when the initial conditions have some degree of regularity .unfortunately the spectral algorithm given above is not _ qualitatively accurate _ , as is demonstrated by several examples in section [ sec : numerics ] .the goal of this paper is _ to find a numerical algorithm for and which is simultaneously stable , spectrally convergent , and qualitatively accurate ._ within mechanics , spectral methods for the continuum and transport equation are a common - place where they are viewed as special cases of first order hyperbolic pdes .various galerkin discretizations of the koopman operator '' which yields the solution to .we refer the reader to for a survey of recent applications . ]have been successfully used for generic dynamic systems , most notably fluid systems where such discretizations serve as a generalization of dynamic mode decomposition .dually , ulam - type discretizations of the frobenius - perron operator have been used to find invariant manifolds of systems with uniform gaussian noise . in continuous time , petrov - galerkin discretization of the infinitesimal generator of the frobenius perron operator converge in the presence of noise and preserve positivity in a haar basis . in this article, we consider a unitary representation of the diffeomorphisms of known to representation theorists and quantum probability theorists . to be more specific , we consider the action of diffeomorphisms on the hilbert space of half - densities .half densities can be abstractly summarized as an object whose `` square '' is a density or , alternatively , can be understood as a mathematician s nomenclature for a physicist s `` wave functions .'' one of the benefits of working with half - densities , over probability densities , is that the space of half - densities is a hilbert space , while the space of probability densities is a convex cone .this tactic of inventing the square - root of an abstract object in order to simplify a problem has been used throughout mathematics .the most familar example would be the invention of the complex numbers to find the roots of polynomials .a more modern example within applied mathematics can be found in where the ( conic ) space of positive semi - definite tensor fields which occur in non - newtonian fluids is transformed into the ( vector ) space of symmetric tensors .similarly , an alternative notion of half - densities is invoked in to transform the mean - curvature flow pde into a better behaved one .in this paper we develop numerical schemes for and .first , we derive an auxiliary pde , , on the space of half - densities in section [ sec : half densities ] .we relate solutions of to solutions of and in theorem [ thm : quantize ] .second , we pose an auxiliary spectral scheme for in section [ sec : discretization ] .our auxiliary scheme induces numerical schemes for and via theorem [ thm : quantize ] .third , we derive a spectral convergence rate for our auxiliary scheme in section [ sec : analysis ]. the spectral convergence rate for our auxiliary scheme induce spectral convergence rates for numerical schemes for and . 
finally , we prove our schemes are qualitatively accurate , as in definition [ def : quality ] , in section [ sec : qualitative ] .we end the paper by demonstrating these findings in numerical experiments in section [ sec : numerics ] .we observe our algorithm for to be superior to a standard spectral galerkin discretization , both in terms of numerical accuracy and qualitative accuracy . throughout the paper a smooth compact -manifold without boundary .the space of continuous complex valued functions is denoted and has a topology induced by the sup - norm , ( see ) . given a riemannian metric , , the resulting sobolev spaces on are denoted ( see ) .the tangent bundle to is denoted by , and the iterated whitney sum is denoted by ( see ) .a ( complex ) density is viewed as a continuous map which satisfies certain geometric properties which permit a notion of integration .we denote the space of densities by and the integral of is denoted by ( * ? ? ?* chapter 16 ) . by completion of with respect to the norm we obtain a banach space , .we should note that is homeomorphic to the space of distributions up to choosing a partition of unity of .given a function , we denote the multiplication of by , and we denote the dual - pairing by .we let denote the closed subspace of whose elements exhibit weak derivative . given a separable hilbert space we denote the banach - algebra of bounded operators by and topological group of unitary operators by .the adjoint of an operator is denoted by .the trace of a trace class operator , , is denoted by .the commutator bracket for operators on is denoted by : = a \cdot b - b \cdot a ] .that is , as a topological group. moreover , a linear evolution equation on given by = 0 ] for some vector - field .the dual operator is then necessarily of the form = { \ensuremath{\frac { \partial } { \partial x^{i } } } } ( \rho x^{i}) ] and is given in local coordinates by : = \frac{1}{2 } x^{i } { \ensuremath{\frac { \partial \psi}{\partial x^{i } } } } + \frac{1}{2 } { \ensuremath{\frac { \partial } { \partial x^{i } } } } \left ( \psi x^{i } \right ) .\label{eq : representation}\end{aligned}\ ] ] the advection equation can then be written as : = 0 .\label{eq : half density pde}\end{aligned}\ ] ] despite the lie derivative being unbounded , a unique solution is defined for all time : [ prop : stone ] the unique solution to is of the form where is the one - parameter semigroup generated by the operator .explicitly , is the operator `` '' in the sense that the solution to is where is the time flow map of at time . by inspection we can observe that by proposition [ prop : half densities ] , is a density , and so we can integrate it .the integral of a density is invariant under transformations ( * ? ? ?* proposition 16.42 ) and we find \psi_{2 } ) + \bar{\psi}_{1 } \pounds_{x}[\psi_{2 } ] \right ) \\ = \langle \pounds_{x}[\psi_{1 } ] \mid \psi_{2 } \rangle + \langle \psi_{1 } \mid \pounds_{x}[\psi_{2 } ] \rangle .\end{aligned}\ ] ] therefore , the operator , is anti - hermitian .we can see that is densely defined , as it is well defined on , which is dense in by construction .stone s theorem implies that there is a one - to - one correspondence between densely defined anti - hermitian operators on and one - parameter groups consisting of unitary operators on .observe that solve directly , by taking its time - derivative .thus is the unique one - parameter subgroup we are looking for . 
to understand the relationship to classical lebesgue spaces , recall that for any manifold ( possibly non - orientable ) one can assert the existence of a smooth non - negative reference density ( * ?* chapter 16 ) .upon choosing such a , the -norm of a continuous complex function with respect to is and is the completion of the space of continuous functions with respect to this norm .the relationship between and is that they are equivalent as topological vector - spaces : [ prop : non canonical ] choose a non - vanishing positive density .let denote the square root of is the standard square - root function .then is the half - density which we are considering . ] . for any there exists a unique such that .this yields an isometry between and .it suffices to prove that is isomorphic to the space of square integrable ( w.r.t . ) continuous functions , because the later space is dense in .let .then is a continuous density and there exists a unique function such that . by taking the square root of both sides we can obtain a unique function such that .the function is unique with respect to and the map sends to by construction .thus the map is continuous .the inverse of the map is given by .if the spaces are nearly identical the reader may wonder why matters .in fact , the pair are not identical in all aspects .as described earlier , under change of coordinates or advection , the elements of each space transform differently .more importantly , is _ not _ canonically contained within the space of square integrable functions , and functions and densities are _ not _ contained in .such an embedding may only be obtained by choosing a non - canonical `` reference density '' , as in proposition [ prop : non canonical ] .this has numerous consequences in terms of what we can and can not do .for example , an operator with domain on can not generally be applied to objects in in the same way .these limitations can be helpful , since they permit vector fields to act differently on objects in than on objects in .these prohibitions serve as safety mechanisms , analogous to the use of overloaded functions in object oriented programs , which due to their argument type distinctions , effectively banish certain bugs from arising . while the canonicalism " of is useful for this discussion , the _ canonical _sobolev spaces are not .since the algorithms proposed in this paper are proven to converge in a sobolev space , we must still choose a norm and we rely upon traditional metric dependent definitions . to begin , equip with a riemannian metric .the metric , , induces a positive density , known as the _ metric density _ and an inner - product on given by : the metric also induces an elliptic operator , known as the laplace - beltrami operator , which is negative - semidefinite ( i.e. for all ) .if is compact , then is a separable hilbert space and the helmholtz operator , , is a positive definite operator with a discrete spectrum . for any we may define the _ sobolev norm _ : where and are related by .then we define as the completion of with respect to the norm .such a definition is isomorphic , in the category of topological vector - spaces , to the one provided in . in order to prove this claim ,observe that it holds for bounded sets in , and then apply a partition of unity argument to obtain the desired equivalence on manifolds .in particular , note that . it is notable that as a topological vector - space is actually not metric dependent ( * ? ? ?* proposition 2.2 ) . 
however , the norm is metric dependent .[ prop : compact_embedding ] let be a compact riemmanian manifold .if then is compactly embedded within .let be the hilbert basis for which diagonalizes in the sense that for a sequence .the operator is given by and so is a hilbert basis for .let us call .the embedding of into is then given in terms of the respective basis elements by .as and , we see that this embedding is a compact operator ( * ? ? ?* proposition 4.6 ) .in physics , `` quantization '' refers to the process of substituting certain physically relevant functions with operators on a hilbert space , while attempting to preserve the symmetries and conservation laws of the classical theory . in this section ,we quantize and by replacing functions and densities with bounded and trace - class operators on .this is useful in section [ sec : discretization ] when we discretize .to begin , let us quantize the space of continuous real - valued functions . for each , there is a unique bounded hermitian operator , given by scalar multiplication .that is to say for any . by inspectionone can observe that the map `` '' is injective and preserves the algebra of because and .similarly , ( and in the opposite direction ) for any trace class operator there is a unique distribution such that : for any .more generally , for any in the dual - space , there is a such that .the map `` '' is merely the adjoint of the injection `` '' .therefore `` '' is surjective .we can now convert the evolution pdes and into odes of operators on .[ thm : quantize ] let be a time - dependent vector - field. then satisfies if and only if satisfies = 0 .\label{eq : quantum observable ode } \end{aligned}\ ] ] if is trace - class and satisfies = 0 , \label{eq : quantum density ode } \end{aligned}\ ] ] then satisfies . finally , if satisfies , then satisfies and satisfies .let satisfy .for an arbitrary we observe that \cdot \psi ] so : ^{\dagger } \cdot a + h_{f}^{\dagger } \cdot \frac{d a}{dt } \right ) = \operatorname{tr } ( - \pounds_{x}^{\dagger } h_{f}^{\dagger } \cdot a + h_{f}^{\dagger } \pounds_{x}^{\dagger } \cdot a + h_{f}^{\dagger } \cdot \frac{da}{dt } ) .\end{aligned}\ ] ] upon noting that and that : + \frac{d \hat{\rho}}{dt } ) \right ) . \end{aligned}\ ] ] as was chosen arbitrarily , the desired result follows .again , this line of reasoning is reversible .lastly , if satisfies and then we see the benefit of using and to represent the pdes of concern is that and may be discretized using a standard least squares projections on without sacrificing qualitative accuracy .this section presents the numerical algorithms for solving and .the basic ingredient for all the algorithms in this section are a hilbert basis and an ode solver .denote a hilbert basis by for .for example , for a riemannian metric , , if denote eigen - functions of the laplace operator , then forms a smooth hilbert basis for where denotes the riemannian density .we call the fourier basis . to ensure convergence , we assume : [ ass : basis ] our basis is such that there exists a metric for which the unitary transformation which sends the basis to the fourier basis is bounded with respect to the -norm for some .in this section we provide a semi - discretization of and . just as a note to the reader ,a `` semi - discretization '' of the pde for some partial differential operator , , is just a discretization of which converts the pde into an ode . in particular , we assume access to solvers of finite dimensional odes , denoted `` . 
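On the circle with a truncated Fourier basis, the quantization map from a function f to the multiplication operator H_f is simply a Toeplitz matrix of Fourier coefficients, and the inverse map reads the coefficients back off any diagonal. The sketch below makes this concrete; the restriction to the circle and the particular test function are assumptions made for illustration, since the text works on a general compact manifold.

```python
import numpy as np

def multiplication_operator(K, f_hat):
    """Matrix of H_f on the span of e^{ikx}, |k| <= K, for a function f on the
    circle with Fourier coefficients f_hat.  Since (H_f psi)(x) = f(x) psi(x),
    the matrix is Toeplitz: (H_f)_{kl} = f_hat[k - l]."""
    modes = np.arange(-K, K + 1)
    H = np.zeros((modes.size, modes.size), dtype=complex)
    for i, k in enumerate(modes):
        for j, l in enumerate(modes):
            if (k - l) in f_hat:
                H[i, j] = f_hat[k - l]
    return H

# f(x) = 1 + cos(x): a real f gives a Hermitian H_f, as stated in the text
H = multiplication_operator(K=8, f_hat={0: 1.0, 1: 0.5, -1: 0.5})
print("Hermitian:", np.allclose(H, H.conj().T))
# the inverse map recovers f_hat[m] from any entry on the m-th diagonal
print("recovered f_hat[1]:", H[1, 0])
```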
'' in practice any ode solver such as euler s method , runge - kutta , or even well tested software such as could be used to compute such solutions .most notably , the method of is specialized to isospectral flows such as and by using discrete - time isospectral flows .more explicitly , let denote the numerically computed solution to the ode `` '' at time , with initial condition .before constructing an algorithm to spectrally discretize and in a qualitatively accurate manner , we first solve using a standard spectral discretization in algorithm [ alg : half density ] .initialize .initialize initialize the function given by . to summarize , algorithm [ alg : half density ] produces a half - density by projecting to .this projection is done by constructing the operator . in section [ sec : analysis ] we prove that converges to the solution of as .we see that evolves by unitary transformations , just as the exact solution to does .this correspondence is key in providing the qualitative accuracy of algorithms that follow , so we formally state it here . [ prop : unitary ] the output of algorithm [ alg : half density ] is given by when is the input to algorithm [ alg : half density ] where and is the unitary operator as in proposition [ prop : stone ] generated by .the operator in algorithm [ alg : half density ] is anti - hermitian on .it therefore generates a unitary action on when inserted into . before continuing ,we briefly state a sparsity result that aides in selecting a basis .we say an operator is _ sparse banded diagonal _ with respect to a hilbert basis if there exists an integer such that is a finite sum elements of the form for fewer than offsets for .[ thm : sparsity ] let be a dense coordinate chart for on some dense open set , then for functions where ( see proposition [ prop : non canonical ] ) .if and are sparse banded diagonal , and if the vector - field is given in local coordinates by with fewer than of s being non - zero for each , then the matrix in algorithm [ alg : half density ] is sparse banded diagonal and the sparsity of is .the result follows directly from counting .theorem [ thm : sparsity ] suggests selecting a basis where is small , or at least finite .for example , if were a torus , and the vector - field was made up of a finite number of sinusoids , then a fourier basis would yield a equal to the maximum number of terms along all dimensions . by theorem [ thm : quantize ] ,the square of the result of algorithm [ alg : half density ] is a numerical solution to .we can use this to produce a numerical scheme to by finding the square root of a density . given a ,let denote the positive part and denote the negative part so that , then is a square root of since .this yields algorithm [ alg : density ] to spectrally discretize in a qualitatively accurate manner for densities which admit a square root .initialize set alternatively , we could have considered the trace - class operator as an output .this would be an numerical solution to , and would be related to our original output in that .finally , we present an algorithm to solve ( in lieu of solving ) . this algorithm is presented for theoretical interest at the moment .initialize .initialize the linear map given by ] '' where is anti - hermitian and that the satisfies the isospectral flow .in this sections we derive convergence rates . 
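before turning to the convergence analysis , the following minimal sketch illustrates the semi - discretization idea of algorithm [ alg : half density ] in the simplest assumed setting : the circle with the standard fourier basis and a smooth , band - limited vector field . the vector field , the truncation order , and the use of a dense matrix exponential in place of a general ode solver are illustrative choices , not prescriptions from the text .

```python
import numpy as np
from scipy.linalg import expm

# Sketch (assumed setting) of the spectral semi-discretization of half-density
# advection on the circle, written in coordinates as
#     d psi / dt = -( X dpsi/dx + 0.5 dX/dx psi ),
# i.e. the generator is the (skew-adjoint) Lie derivative acting on half-densities.
# Project the generator onto a truncated Fourier basis and evolve the coefficients.

K, N = 32, 256                                   # truncation order, quadrature points
x = 2 * np.pi * np.arange(N) / N
dxq = 2 * np.pi / N
ks = np.arange(-K, K + 1)
E = np.exp(1j * np.outer(ks, x)) / np.sqrt(2 * np.pi)   # rows: basis functions on the grid

X = 1.0 + 0.5 * np.sin(x)                        # smooth vector field (illustrative)
dX = 0.5 * np.cos(x)

# Galerkin matrix A_{jk} = <e_j, -(X e_k' + 0.5 X' e_k)> by trapezoid quadrature,
# which is exact here because the integrands are trigonometric polynomials.
Lbasis = -(X * (1j * ks[:, None]) * E + 0.5 * dX * E)
A = dxq * (E.conj() @ Lbasis.T)
print("anti-Hermitian defect:", np.abs(A + A.conj().T).max())   # ~ machine precision

# Coefficients of an initial half-density, then evolution by the unitary group expm(t*A).
psi0 = dxq * (E.conj() @ np.exp(-2.0 * (1.0 - np.cos(x))))
for t in (0.0, 0.5, 1.0):
    c = expm(t * A) @ psi0
    print(t, np.linalg.norm(c))                  # the L2 norm is conserved
```

the vanishing anti - hermitian defect is the discrete counterpart of proposition [ prop : unitary ] : the coefficient vector evolves under a unitary group , so the printed l2 norm does not change with t .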
we find that the error bound for algorithm [ alg : half density ] induces error bounds for the algorithms [ alg : density ] and [ alg : function ] .therefore , we first derive a useful error bound for algorithm [ alg : half density ] . our proof is a generalization of the convergence proof in , where is studied ( modulo a factor of two time rescaling ) on the torus .we begin by proving an approximation bound . in all that follows ,let denote the orthogonal projection .[ prop : approximation ] if and , then for some constant and .we can assume that is a fourier basis .the results are unchanged upon applying assumption [ ass : basis ] and converting to the fourier basis .any can expanded as where .as it follows that a corollary of weyl s asymptotic formula is that is for large .after substitution of this asymptotic result into for large , we see that is asymptotically dominated by for some constant . for sufficiently large find and by another application of the weyl formula where the last inequality is derived by bounding the infinite sum with an integral .with this error bound for the approximation error we can derive an error bound for algorithm [ alg : half density ] : [ thm : half density convergence ] let for .let and ] be fixed .let be the solution of at time .finally , let be the output of algorithm [ alg : density ] with respect to the input for some . then : where is constant with respect to , and is the same constant as in theorem [ thm : half density convergence ] .without loss of generality , assume that is non - negative ( otherwise split it into its non - negative and non - positive components ) .let be such that , as described in algorithm [ alg : density ] .it follows that and we compute if we let then we can re - write the above as above we have applied holder s inequality to , which still holds upon using the isometry in proposition [ prop : non canonical ] .theorem [ thm : half density convergence ] provides a bound for .substitution of this bound into the above inequality yields the theorem .finally , we prove that algorithm [ alg : function ] converges to a solution of , which is equivalent to a solution of courtesy of theorem [ thm : quantize ] : [ prop : function approximation ] let and let .then where , and is constant .let . by proposition [ prop : approximation ]we know that for , then : by the result follows .[ thm : function convergence ] let and $ ] be fixed .let denote the solution to at time with initial condition .let denote the output of algorithm [ alg : function ] with respect to the inputs for some . then : for the same constant as in proposition [ prop : function approximation ] and the same constants as in theorem [ thm : half density convergence ] .we find in light of proposition [ prop : isospectral ] we find the output of algorithm [ alg : function ] indicates that .therefore , the above inline equation becomes and finally where .the first term is bounded by proposition [ prop : function approximation ] . to bound the second term we must bound . 
as is the backwards time numerical solution to and the exact backward time solution to , theorem [ thm : half density convergence ] prescribes the existence of constants and such that : for any .this expression can be simplified by noting that , setting , and noting that the norm is stronger than the -norm to get : by applying the cauchy - schwarz inequality to and our derived bound on : upon invoking proposition [ prop : function approximation ] we get the desired result .in this section , we prove that our numerical schemes are qualitatively accurate . we begin by illustrating the preservation of appropriate norms . throughout this sectionlet , , and denote the sequence of outputs of algorithms [ alg : half density ] , [ alg : density ] , and [ alg : function ] with respect to initial conditions and for .[ thm : norms ] let denote solutions to , , and respectively .let , and , denote outputs from algorithms [ alg : half density ] , [ alg : density ] , and [ alg : function ] respectively for a time .then and are constant with respect to for arbitrary .moreover , to prove is conserved note that the evolution is isospectral .we have already shown that converges to in the operator norm .convergence of the norms follows from the fact that .an identical approach is able to prove the desired properties for and as well .theorem [ thm : norms ] is valuable because each of the norms is naturally associated to the entity which it bounds , and these quantities are conserved for the pdes that this paper approximates .for example , for a function , and this is constant in time when is a solution to .a discretization constructed according to algorithm [ alg : function ] according to theorem [ thm : norms ] is constant for any , no matter how small .the full banach algebra is conserved by advection too .this property is encoded in our discretization as well .[ thm : algebra ] let , and be solutions of and let .let and be numerical solutions constructed by algorithm [ alg : function ] , then satisfies .\end{aligned}\ ] ] moreover , strongly converges to as in the operator norm on when for . by construction ,the output of algorithm [ alg : function ] is the result of an isospectral flow , and is therefore of the form we then observe differentiation in time implies the desired result .convergence follows from theorem [ thm : function convergence ] .finally , the duality between functions and densities is preserved by advection .if satisfies and satisfies then is conserved in time .algorithms [ alg : density ] and [ alg : function ] satisfy this same equality : for each , is constant in time where .moreover , converges to the constant as . 
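as a small numerical illustration of the mechanism behind these conservation statements , the sketch below evolves a hermitian matrix ( a stand - in for a quantized function ) by an isospectral flow generated by an anti - hermitian matrix ; its eigenvalues , and hence the operator norm that plays the role of the sup - norm , remain constant . the matrices are randomly generated placeholders and are not produced by the algorithms of the paper .

```python
import numpy as np
from scipy.linalg import expm

# Illustrative only: an isospectral flow dA/dt = [Lam, A] with Lam anti-Hermitian.
# The exact solution is A(t) = U(t) A(0) U(t)^dagger with U(t) = expm(t*Lam) unitary,
# so the spectrum of A(t) -- and therefore its operator norm -- never changes.

rng = np.random.default_rng(0)
n = 20
A0 = rng.standard_normal((n, n))
A0 = A0 + A0.T                                   # Hermitian "quantized function"
M = rng.standard_normal((n, n))
Lam = M - M.T                                    # real antisymmetric, hence anti-Hermitian

for t in (0.0, 0.7, 1.4):
    U = expm(t * Lam)
    A_t = U @ A0 @ U.conj().T                    # solves dA/dt = [Lam, A]
    eigs = np.linalg.eigvalsh(A_t)
    print(t, eigs.min(), eigs.max(), np.linalg.norm(A_t, 2))   # all constant in t
```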
as and observe that convergence follows from theorems [ thm : function convergence ] and [ thm : density convergence ] .this section describes two numerical experiments .first , a benchmark computation to illustrate the spectral convergence of our method and the conservation properties in the case of a known solution is considered .consider the vector field for .the flow of this system is given by : if the initial density is a uniform distribution , , then the the exact solution of is : figure [ fig : s1 ] depicts the evolution of at with an initial condition .figure [ fig : exact ] depicts the exact solution , given by , figure [ fig : standard spectral ] depicts the numerical solution computed from a standard fourier discretization of with 32 modes , and figure [ fig : gn spectral ] depicts the numerical solution solution computed using algorithm [ alg : density ] with 32 modes .0.36 on the example described in section [ sec : benchmark].,title="fig:",scaledwidth=90.0% ] 0.36 on the example described in section [ sec : benchmark].,title="fig:",scaledwidth=90.0% ] 0.36 on the example described in section [ sec : benchmark].,title="fig:",scaledwidth=90.0% ] here we witness how algorithm [ alg : density ] has greater qualitative accuracy than a standard spectral discretization , in the `` soft '' sense of qualitative accuracy .for example , standard spectral discretization exhibits negative mass , which is not achievable in the exact system . moreover , the -norm is not conserved in standard spectral discretization .in contrast , theorem [ thm : norms ] proves that the -norm is conserved by algorithm [ alg : density ] .a plot of the -norm is given in figure [ fig : l1 ] .finally , a convergence plot is depicted in figure [ fig : convergence ] .note the spectral convergence of algorithm [ alg : density ] . in terms of numerical accuracy ,algorithm [ alg : density ] appears to have a lower coefficient of convergence .-norm vs time of a standard spectral discretization ( solid ) and the result of algorithm [ alg : density ] ( dotted ) on the example described in section [ sec : benchmark].,scaledwidth=80.0% ] ( dotted ) and a standard spectral method ( solid ) in the -norm.,scaledwidth=90.0% ] in general , algorithm [ alg : function ] is very difficult to work with , as it outputs an operator rather than a classical function .however , algorithm [ alg : function ] is of theoretical value , in that it may inspire new ways of discretization ( in particular , if one is only interested in a few level sets ) .we do not investigate this potentiality here in the interest of focusing on the qualitative aspects of this discretization .for example , under the initial conditions and the exact solutions to are : under the initial condition the exact solution to is : one can compute by first multiplying the initial conditions and then using algorithm [ alg : function ] to evolve in time , or we may evolve each initial condition in time first , and multiply the outputs .if one uses algorithm [ alg : function ] , then both options , as a result of theorem [ thm : algebra ] , yield the same result up to time discretization error ( which is obtained with error tolerance in our code ) . 
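the qualitative difference reported in this benchmark can be reproduced in a few lines . the sketch below uses an assumed one - dimensional analogue ( advection of a non - negative density on the circle , not the paper 's exact configuration ) and compares a standard fourier - galerkin discretization acting on the density with the half - density route of algorithm [ alg : density ] ; it reports the minimum value and the l1 mass . the vector field , resolution , and final time are illustrative choices .

```python
import numpy as np
from scipy.linalg import expm

# Assumed 1-d analogue of the benchmark: advect a non-negative density rho on the
# circle, d rho/dt = -d/dx (X rho), (a) by a standard Fourier-Galerkin scheme acting
# on rho and (b) by evolving psi = sqrt(rho) with the half-density generator and
# squaring.  (For a signed density one would use psi = sqrt(rho_+) + i*sqrt(rho_-).)

K, N = 20, 256
x = 2 * np.pi * np.arange(N) / N
dxq = 2 * np.pi / N
ks = np.arange(-K, K + 1)
E = np.exp(1j * np.outer(ks, x)) / np.sqrt(2 * np.pi)

X = 1.0 + 0.9 * np.sin(x)                            # vector field (illustrative)
dX = 0.9 * np.cos(x)
ikE = (1j * ks[:, None]) * E

def galerkin(rows):                                  # <e_j, op(e_k)> by quadrature
    return dxq * (E.conj() @ rows.T)

A = galerkin(-(X * ikE + 0.5 * dX * E))              # half-density generator (skew-Hermitian)
B = galerkin(-(X * ikE + dX * E))                    # continuity-equation generator

rho0 = np.exp(-2.0 * (1.0 - np.cos(x - np.pi)))      # smooth, strictly positive bump
c_rho = dxq * (E.conj() @ rho0)
c_psi = dxq * (E.conj() @ np.sqrt(rho0))

print("projected initial L1 mass:", dxq * np.sum(np.abs(np.real(c_rho @ E))))

t = 3.0
rho_std = np.real((expm(t * B) @ c_rho) @ E)         # direct Galerkin reconstruction
rho_half = np.abs((expm(t * A) @ c_psi) @ E) ** 2    # squared half-density, >= 0 always

for name, r in (("standard", rho_std), ("half-density", rho_half)):
    print(name, " min:", r.min(), " L1 mass:", dxq * np.sum(np.abs(r)))
```

the squared half - density is non - negative by construction and its l1 mass equals the conserved l2 norm of the coefficient vector , whereas the direct galerkin reconstruction can develop negative lobes once the advected profile becomes under - resolved , which is the negative - mass behaviour described above .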
in contrast , if one uses a standard spectral discretization , then these options yield different results with a discrepancy .this discrepancy between the order of operations for both discretization methods is depicted in figure [ fig : discrepancy ] .finally , the sup - norm is preserved by the solution of .as shown in theorem [ thm : quantize ] , the sup - norm is equivalent to the operator norm when the functions are represented as operators on .as proven by theorem [ thm : norms ] , the operator - norm is conserved by algorithm [ alg : function ] .in contrast , the sup - norm drifts over time under a standard discretization .this is depicted in figure [ fig : norms ] ( not plotted ) is attributable to our time - discretization scheme where we only tolerated error of in this instance.,scaledwidth=115.0% ] ( red ) on the example described in section [ sec : benchmark].,scaledwidth=80.0% ] consider the system on the three - torus for constants .when this system is the well studied volume conserving system known as an arnold - beltrami - childress flow .when , , and , then the solutions to this ode are chaotic , with a uniform steady state distribution .when the operator of is identical to the operator that appears in , and algorithm [ alg : half density ] do not differ from a standard spectral discretization .therefore we consider the case where to see how our discretization are differs from the standard one . when volume is no longer conserved and there is a non - uniform steady - state distribution .for the following numerical experiment let and .as an initial condition consider a wrapped gaussian distribution with anisotropic variance centered at .equation is approximately solved using algorithm [ alg : density ] , monte - carlo , and a standard spectral method .the results of the -marginal of these densities are illustrated in figure [ fig : abcd ] .the top row depicts the results from using algorithm [ alg : density ] using modes along each dimension .the middle row depicts the results from using a monte - carlo method with particles as a benchmark computation .finally , the bottom row depicts the results from using a standard fourier based discretization of using 33 modes along each dimension .notice that algorithm [ alg : density ] performs well when compared to the standard discretization approach .( top row ) , monte carlo ( middle row ) and a standard spectral galerkin ( bottom row ) on the example described in section [ sec : abc_flow ] .the domain is the -torus .here we ve consider an initial probability density given by a wrapped gaussian .darker regions represent areas of higher - density.,scaledwidth=100.0% ]in this paper we constructed a numerical scheme for and that is spectrally convergent and qualitatively accurate , in the sense that natural invariants are preserved .the result of obeying such conservation laws is a robustly well - behaved numerical scheme at a variety of resolutions where legacy spectral methods fail .this claim was verified in a series of numerical experiments which directly compared our algorithms with standard fourier spectral algorithms .the importance of these conservation laws was addressed in a short discussion on the gelfand transform .we found that conservation laws completely characterize and , and this explains the benefits of using qualitatively accurate scheme at a more fundamental level .this paper developed over the course of years from discussions with many people whom we would like to thank : jaap eldering , gary froyland , darryl 
holm , peter koltai , stephen marsland , igor mezic , peter michor , dmitry pavlov , tilak ratnanather , and stefan sommer . this research was made possible by funding from the university of michigan .

e. hebey , _ nonlinear analysis on manifolds : sobolev spaces and inequalities _ , vol . 5 of courant lecture notes in mathematics , new york university , courant institute of mathematical sciences , new york ; american mathematical society , providence , ri , 1999 .

l. hormander , _ the analysis of linear partial differential operators . i. distribution theory and fourier analysis _ , classics in mathematics , springer - verlag , berlin , 2003 . reprint of the second ( 1990 ) edition .
the transport and continuum equations exhibit a number of conservation laws . for example , scalar multiplication is conserved by the transport equation , while positivity of probabilities is conserved by the continuum equation . certain discretization techniques , such as particle - based methods , conserve these properties , but converge more slowly than spectral discretization methods on smooth data . standard spectral discretization methods , on the other hand , do not conserve the invariants of the transport equation and the continuum equation . this article constructs a novel spectral discretization technique that conserves these important invariants while retaining spectral convergence rates . the performance of the proposed method is illustrated in several numerical experiments .
the recent earthquake and tsunami in japan on 11 march 2011 , 05:46 coordinated universal time ( utc ) resulted in severe damage to the fukushima dai - ichi nuclear reactor complex .due to the uncertainty of the situation , limited quantitative information , and its potential impact on both local public health as well as our low - background fundamental physics program , we began monitoring local air samples in seattle , wa , usa , for the potential arrival of airborne radioactive fission products . in this paperwe present data on key radionuclides associated with the nuclear accident and a brief discussion of the transport .we believe it is important to provide a rapid release of this data for two reasons : * while the earthquake , tsunami and nuclear accident are extremely serious within japan , there has also been a great deal of concern around nuclear radiation reaching the u.s. our data , generated by independent university research , should provide greater confidence in our understanding of the risks from long - distance transport of radionuclides . * these data , along with other observations , will help provide better information on the characteristics of the source , the release mechanism , and the transport of the radionuclides .our samples consist of air filters taken from the intake to the ventilation system of the physics and astronomy building at the university of washington .this allows us to sample times more air than what had been done previously here at the university after the chernobyl incident and proved to be one of the key points for the successful detection of the radioactive fission products . in order to search for characteristic gamma rays stemming from radionuclideswe place the samples inside a lead shield of 5 to 20 cm thickness next to a p - type point contact germanium detector for low - level counting .the detector exhibits an energy resolution of 1.4 kev fwhm at 600 kev .the level of observed background radiation inside the shield ranges from 10 counts / kev / hour at 50 kev to 2 counts / kev / hour at 800 kev . the energy and efficiency of the detector have been calibrated using 10 strong gamma lines between 200 and 1500 kev from a source to an accuracy of about 0.1 kev and 10% , respectively .we digitize the preamplified traces coming from the germanium detector using the struck card sis3302 which at the same time extracts the energy of the measured pulse . the communication with the card and the vme crate is managed by orca .after the acquisition the data are automatically uploaded to a database for analysis .in addition to the real physics events we also inject a pulser signal into the preamplifier to check the live time and health of the system .the pulser runs at 0.1 hz with an amplitude equivalent to an energy of kev .the signal from the pulser is clearly visible in fig .[ fullspectrum ] .the air filters used are commercial ventilation filters from americanairfilter ( model perfectpleat ultra ) and purolator ( model dmk80-std2 ) with dimensions 61 cm .their efficiency for retaining particles down to a size of 5 m amounts to 75% , drops to 35% at 1 m and to 5% at 0.4 m . from our detection of the cosmogenic isotope , we calculated an activity of .1 mbqm . 
comparing this value to the known concentration of 2 - 8 mbqm we deduce a filter efficiency of 2.4.0% or , correspondingly , particle sizes of .4 m .this roughly agrees with observations of radioactive particle sizes after the chernobyl accident and measured sizes of radioactive dust in the atmosphere . in an auxiliary measurement we exposed in addition to our standard filter a high efficiency particulate air filter ( hunter hepatech filter model 30930 )this filter is % efficient for particles of sizes .1 m and features one layer of activated charcoal . taking the measured activities from that filter as the true activities in air , we calculate the efficiency for our standard filters from the ratio of measured activities .the filter efficiency amounts to 14.2.2% for the and 4.4.2% for the other fission products and , in agreement with the above estimate .this points to slightly larger particle sizes of compared to the other isotopes ( .6 m instead of .4 m given the filter efficiencies ) .we can only speculate on the initial conditions and the chemistry involved that would lead to this difference .a difference between the particle sizes for and other fission products had already been observed after the chernobyl accident . however , in those measurements the particle sizes tended to be slightly smaller than the other fission products .we believe that the difference between those measurements and our findings is due to the longer transport distance .the long transport tends to lead to overall smaller particle sizes .as can be transported part of the way in its elemental form and can thus be adsorbed at later stages , this could lead to larger particle sizes with respect to the other fission products which are adsorbed shortly after the release . at several other radiation monitoring stations cartridges filled with charcoalwere used in addition to particle air filters .the charcoal captures the present in its gaseous form and the combination of the two thus measures the total amount of in air .the ratio of gaseous to attached to particulate matter varied between 2 and 20 with an average of about 5 .we would like to stress that our results below only measure the portion of attached to particulate matter .the air filters were typically exposed for one day to an air flow of 114000 m/day , which was measured using a davis 271 turbo - meter flowmeter .we bagged and compressed the filters into packages of approximately 1000 to 4000 before placing them into the lead shield for counting .the solid angle for gamma rays emitted within that volume and interacting with the germanium detector was calculated given the actual dimensions and ranged between 1.6 and 4.4% .we attribute a 20% systematic uncertainty to the calculated value .( colour online ) comparison of the gamma spectra from the measurements of air filter ph1 ( red , 16 - 17 march ) and air filter ph2 ( blue , 17 - 18 march ) showing clearly the additional peaks due to the arrival of radioactive fission products at the us west coast .the dominant peak at 364 kev is from . ]( colour online ) plot of the 5 strongest gamma lines of , , , , and for the air filter ph1 ( red ) and air filter ph4 ( blue ) measurements .the change in activity is due to fluctuating radon levels during the time of measurement . 
]we started the air monitoring campaign on 16 march 2011 .the exact exposure and counting periods for the different air filters are listed in table [ filter_tab ] .no fission products were detected in the first air filter ph1 and we were able to attribute all the visible gamma lines to known background radioactivity from cosmic - ray induced processes , various radioactive isotopes of the uranium and thorium decay chains , cosmogenic , and .the subsequent sample ph2 immediately revealed the onset of several characteristic gamma lines from fission products .the identified isotopes are , , , , and . figure [ fullspectrum ] shows the comparison between the gamma ray spectra from air filter ph1 and ph2 where the additional gamma peaks are clearly identifiable .figure [ zoomedspectrum ] demonstrates the statistical significance of the detected lines by showing the peaks of the strongest decay branches of the five identified isotopes . in order to obtain the decay rates of the different isotopes we summed the appropriate spectral bins in the region of interest and subtracted the sum of the same amount of bins in the side bands . for the cases in which the statistical significance of the extracted signal counts was less than three sigma we calculated the upper limit at 95% c.l . by employing the feldman - cousins formalism for a possible signalgiven the number of background counts . from the detected number of counts ( ) and the given counting time ( ) and isotope lifetime ( ) we calculate the decay rate ( ) at the beginning of the counting period : from this and measurements and calculations of air flow ( ) , filter , geometrical , and detection efficiency ( , and ) the amount of activity present in air ( ) using the known branching ratios ( ) is : in addition , we correct the values for delays in the counting period ( ) with respect to the end of the exposure using the known lifetimes ( ) thus calculating the activity present in the filter at the end of the exposure time : we do not correct for decays during the exposure itself as the time structure of the arrival of the radioactive atoms is unknown .the values for the activities can be found in fig .[ airactivities ] and table [ activity_tab ] .we cross - checked for consistency between the activities obtained from different branches of the same isotope . as they were in agreement andthe systematic error dominates we only give the value for the strongest branch . for with a half life of only 2.3 hours we do not detect the atoms originating in the reactor but the ones following the decay of .the measured activities for these two isotopes given in table [ activity_tab ] agree very nicely corroborating our correction due to the energy dependent detection efficiency .the highest observed activity of amounts to 4.4.3 mbqm . [ cols="<,^,>,<,^,>,<",options="header " , ] in addition to the activities themselves we also analyzed the ratio of activities for several isotopes . while the activities have a large systematic error due to the different efficiency corrections these errors cancel to a very large degree in the ratios .figure [ activitiesratios ] shows the ratios for / , / , / , and / .we show also fits to the ratios with a simple exponential decay where the lifetimes have been fixed to the known values of the different isotopes .we excluded the points where only an upper limit for one of the activities was available . 
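the fits with fixed lifetimes mentioned here reduce to a one - parameter least - squares problem : for two isotopes with decay constants lambda_1 and lambda_2 , the activity ratio evolves as r ( t ) = r_0 exp( -( lambda_1 - lambda_2 ) t ) , with only the initial ratio r_0 free . a small sketch of such a fit on synthetic data is given below ; the half - lives , sampling times , and scatter are invented for illustration and do not correspond to the isotopes measured here .

```python
import numpy as np

# Illustrative fit of an activity ratio with the lifetimes held fixed: only the
# initial ratio r0 is adjusted.  Half-lives and "measurements" below are invented.

half_life_1, half_life_2 = 8.0, 13.2                   # days (illustrative)
lam1, lam2 = np.log(2) / half_life_1, np.log(2) / half_life_2

t = np.array([1.0, 2.0, 4.0, 6.0, 9.0, 12.0])          # days after shutdown (synthetic)
true_r0 = 2.5
ratio = true_r0 * np.exp(-(lam1 - lam2) * t)
ratio *= 1.0 + 0.05 * np.random.default_rng(1).standard_normal(t.size)  # 5% scatter

# Least squares for r0 with the exponential shape fixed:  ratio ~ r0 * g(t).
g = np.exp(-(lam1 - lam2) * t)
r0_fit = np.dot(g, ratio) / np.dot(g, g)
residual = ratio - r0_fit * g
chi2_like = np.sum((residual / (0.05 * ratio)) ** 2)   # crude goodness-of-fit measure

print("fitted r0:", r0_fit, " reduced chi2:", chi2_like / (t.size - 1))
```

extrapolating a fitted ratio back to day 0 is also what allows the measured ratios to be compared with ratios of fission yields , as is done later in the text .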
while the ratio of the two caesium isotopes is nicely described by this fit ( a constant fit in that case due to the large half lives with /dof = 10.1/8 )the other ratios show large deviations ( /dof ) and the fit rather follows the general trend of the ratio .this indicates that the release and transport efficiencies for the different elements vary in time leading to `` bursts '' of individual elements during certain days .it has already been alluded above that certainly the transport of is different from the other radionuclides due to the differences in the measured filter efficiencies .the release and transport of is also more efficient than that of as can be seen by extrapolating the ratio of to back to the end of active reactions in the fuel rod ( day 0 in fig . [ activitiesratios ] ) .comparing the extracted value of .5 to the ratio of fission yields of .5 shows that more was released and/or transported than . in order to assess the transport time of the fission products across the pacific we performed several model calculations .the trajectories were computed using the noaa hysplit model using the global data assimilation system ( gdas ) meteorological dataset and model - calculated vertical velocities .we used the hysplit model to calculate several hundred trajectories over the time period of interest .the trajectories show a range in transport patterns depending on the altitude and hour of the start time . figure [ trajectory ] ( a ) shows three trajectories which exhibit the range of transport pathways .the start time is 12 march 2011 , 10 utc , which is approximately 3 hours after the first reported explosion from unit 1 , and the trajectories were calculated for three heights in the boundary layer , 500 , 1000 and 1500 meters above ground level .the trajectory at 500 meters is caught up in , and lofted by , a cyclonic system over the bering sea .the trajectories started at 1000 and 1500 meters are partially lofted by the same system , but do not enter the cyclonic pattern .instead they are rapidly transported across the pacific .upon arrival to the west coast of the u.s .the transport again splits with one arm transported to the north in a cyclonic direction around a second low pressure system located off the coast of washington state .figure [ trajectory ] ( b ) shows the 850 mb geopotential heights , wind vectors and temperatures for 17 march 2011 , 12 utc .the trajectory initially started at 1500 meters is transported in the boundary layer towards california .the northern arm is again lofted by a cold front near washington state .there were rain showers and cool weather in western washington at this time .the strong divergence and precipitation associated with these weather systems most likely significantly reduced the concentrations of radionuclides that were transported .the trajectories support the notion of transport of the radionuclides from the japanese boundary layer to the u.s .boundary layer in only 5 to 6 days .this is significantly faster than previously reported trans - pacific times , especially considering the radionuclides were released in the boundary layer over japan and measured in the boundary layer along the u.s .west coast .following the nuclear accident , measurements on radiation levels on land and off the coast of fukushima from several sources were collected and made available by the .highest reported readings for in the region area are in the range of 500 - 600 bqm on march 22nd and 23rd about 25 km south of the plant , although there is an extreme 
value of 5600 bqm reported for march 21st .the tokyo electric power company ( tepco ) is constantly monitoring and reported one reading of 4100 bqm within about 10 km of the reactor on march 20th .most other observations by mext and tepco are less than 10 bqm .the japanese ministry of defense also reported values using aircraft samples , with a maximum of 0.46 bqm for march 25th in an area 25 - 30 km to the west .based on our highest observed value of 4.4 mbqm , we estimate that this air was diluted by a factor of 10 - 10 prior to reaching the boundary of the continental u.s .this level of dilution is not surprising given the transport patterns mentioned above . from the presence and/or absence of certain fission isotopes we can draw several conclusions on their origin : ( i ) the value of the ratio of to activities of .7 is indicative of the release of the fission products from a nuclear reactor and not from nuclear weapons .( ii ) the presence of the relatively short lived isotopes and shows that the fission products had been released primarily from recently active fuel rods as opposed to spent fuel .( iii ) the notable absence of 20.8-h in our spectra , together with the known steady state activity ratio of to of allows us to put a lower limit on the time between the end of steady state nuclear fuel burning and the arrival of the fission products at our location . a lower limit of 5.7 days ( 95% c.l . )was derived from the ph2 measurement to be compared with 6.6 days delay between the earthquake and the end of ph2 filter exposure .the ph4 measurement resulted in a lower limit of 8.1 days ( 95% c.l . ) for the end of exposure of 8.4 days after the earthquake .given the modelled transport time of 5 - 6 days from above this means that the reactor successfully shut down at the time of the earthquake and that nuclear reactions were largely stopped at the time of the release of the fission products .also we found a small indication for a peak in our ph2 data set .the peak with 22 counts / hour per day of exposure was not significant enough to pass our 3 cut .the reactor shut down time derived from the insignificant peak is 6.8.5 days before the end of ph2 exposure which is statistically compatible with the time of the earthquake .( iv ) it is striking that we see only three of the many possible fission product elements .this points to a specific process of release into the atmosphere .the exact process and why it would be selective requires further investigation , but we can speculate that the release of fission products to the atmosphere is the result of evaporation of contaminated steam , in which , e.g. 
, csi is very soluble . chernobyl debris , conversely , showed a much broader spectrum of elements , reflecting the direct dispersal of active fuel elements . we measured the arrival of airborne fission products from the fukushima dai - ichi , japan nuclear reactor incident in seattle , wa , usa . the first fission products arrived between 17 - 18 march , about 7 days after the earthquake and in agreement with other reported detections of radionuclides in the western united states . our models of the transport of air masses across the pacific point to a typical transport time of 5 to 6 days . this agrees with the first reported explosion at fukushima one day after the earthquake , augmented by additional contributions from subsequent explosions . the detected fission isotopes are , , , , and with the highest activity observed for of 4.4.3 mbqm on 19 - 20 march . our measurements were only sensitive to radionuclides attached to particulate matter and were thus not sensitive to the part of that is transported in its gaseous form . the presence of the aforementioned isotopes clearly points to the release of the fission products from recently active fuel elements . at the same time , the absence of the short - lived isotope indicates that nuclear reactions must have been stopped successfully at the time of the earthquake , as we would otherwise have detected its activity . this set of measurements over a period of 23 days provides a quantitative basis for further modeling of how radioactive fission products are transported in the atmosphere and will hopefully be included in a global analysis . we are grateful to j. orrell and h. s. miley for the loan of equipment and advice . we also benefited from conversations with m. savage , j. gundlach , and b. taylor . the support by the staff of the physics building of the university of washington , in particular j. alferness , proved invaluable . this work has been supported by the us department of energy under de - fg02 - 97er41020 .
we report results of air monitoring started due to the recent natural catastrophe on 11 march 2011 in japan and the severe ensuing damage to the fukushima dai - ichi nuclear reactor complex . on 17 - 18 march 2011 , we registered the first arrival of the airborne fission products , , , , and in seattle , wa , usa , by identifying their characteristic gamma rays using a germanium detector . we measured the evolution of the activities over a period of 23 days at the end of which the activities had mostly fallen below our detection limit . the highest detected activity from radionuclides attached to particulate matter amounted to 4.4.3 mbqm of on 19 - 20 march . radiation monitoring , fission products , germanium detector 89.60.gg , 25.85.-w , 29.40.-n
contact lines are defined by the triple - point intersection of the rigid boundary , fluid flow and the vacuum state .flows with the contact line at contact angle were discussed in , where corresponding solutions of the navier - stokes equations were shown to have no physical meanings . in the recent paper , benilov and vynnycky analyzed the behavior of the contact line asymptotically by using the thin film equations .consider a two - dimensional couette flow shown on figure [ fig - scheme ] , where two horizontal rigid plates are separated by a distance normalized to unity , with the lower plate moving to the right relatively to the upper plate with a velocity normalized to unity .the space between the plates is filled with an incompressible fluid on the left , and vacuum ( that is , gas with negligible density ) on the right , separated by a free boundary .the -axis is directed along the lower plate , and the contact line is located on the upper plate .physically relevant flows correspond to the configuration , where the fluid - filled region to the right of the contact line decays monotonically , and is carried away by the lower plate to some residual thickness as .the velocity of the contact line is and the reference frame on figure [ fig - scheme ] moves to the left with the velocity so that the contact line is placed dynamically at the point .note that the velocity is an unknown variable to be found as a function of time .the shape of the fluid - vacuum interface at time is described by the graph of the function for , where is the thickness of fluid - filled region . by using asymptotic analysis andthe lubrication approximation , benilov and vynnycky derived the following nonlinear advection diffusion equation for the free boundary of the fluid flow : = 0 , \quad x > 0 , \;\ ; t > 0,\ ] ] the boundary conditions and define the normalized thickness and the contact line location , whereas the flux conservation gives the boundary condition for . here and henceforth, we use the subscript to denote the partial derivative .in addition , we fix for convenience. existence of weak solutions of the thin - film equation ( [ model ] ) for constant values of and neumann boundary conditions on a finite interval was recently shown by chugunova __ . using further asymptotic reductions with benilov and vynnycky reduced the nonlinear equation ( [ model ] ) with to the linear advection diffusion equation : subject to the boundary conditions physically relevant solutions corresponds to the monotonically decreasing solutions with and as , where .we note that any constant value of is allowed thanks to the invariance of the linear advection diffusion equation ( [ pde ] ) with respect to the shift and scaling transformations .indeed , if solves the boundary value problem ( [ pde])([bc - pde ] ) such that as , then given by for any , solves the same advection diffusion equation ( [ pde ] ) with the same boundary conditions ( [ bc - pde ] ) but with the variable velocity and with the asymptotic behavior as . 
with three boundary conditions at and the decay conditions for as , the initial - value problem for equation ( [ pde ] )is over - determined and the third ( over - determining ) boundary condition at is used to find the dependence of on .local existence of solutions to the boundary value problem ( [ pde])([bc - pde ] ) was proved by pelinovsky __ using laplace transform in and the fractional power series expansion in powers of .we shall consider the time evolution of the boundary value problem ( [ pde])([bc - pde ] ) starting with the initial data for a suitable function .for physically relevant solutions , we assume that the profile decays monotonically to a constant value as and that is a non - degenerate maximum of such that , , and .the solution may lose monotonicity in during the dynamical evolution because of the boundary value crosses zero from the negative side . in this case , we say that the flow becomes non - physical for further times and the model ( [ pde])([bc - pde ] ) breaks .simultaneously , this may mean that the velocity blows up to infinity , because for sufficiently strong solutions of the advection diffusion equation ( [ pde ] ) , the velocity satisfies the dynamical equation which follows by differentiation of ( [ pde ] ) in and setting . based on numerical computations of the thin - film equations ( [ model ] ) , benilov and vynnycky that for any physically relevant initial data , there is a finite positive time such that tends to negative infinity and approaches zero as , whereas the profile remains a smooth and decreasing function for .moreover , they claim that behaves near the blowup time as the logarithmic function of : where , are positive constants .the same properties of the blow up of contact lines were observed in in numerical simulations of the reduced model ( [ pde])([bc - pde ] ) .we point out that the numerical simulations in are based on comsol built - in algorithms .the goal of this paper is to simulate numerically the behavior of the velocity near the blow - up time under different physically relevant initial data .our technique is based on the reformulation of the boundary - value problem ( [ pde])([bc - pde ] ) , which will be suitable for an application of the direct finite - difference method .we will approximate the behavior of the velocity from the dynamical equation ( [ contact - equation ] ) rewritten in finite differences .the numerical computations reported in this paper were performed by using the matlab software package . as the main outcome , we confirm that all physically relevant initial data including those with positive initial velocitywill result in blow - up of to negative infinity in a finite time . 
at the same time , we show that the power function and fits our numerical data better than the logarithmic function near the blow - up time .we explain why the behavior as is highly expected for solutions of the boundary value problem ( [ pde])([bc - pde ] ) .we believe that the incorrect logarithmic law ( [ loglaw ] ) is an artefact of the comsol built - in algorithms used in .we shall mention two recent relevant works on the same problem .firstly , existence of self - similar solutions of the linear advection diffusion equation ( [ pde ] ) was proved by pelinovsky and giniyatullin .the self - similar solutions are given by with and , where is a suitable function .although the self - similar solutions ( [ self - similar ] ) satisfy the decay condition at infinity , and the first two boundary conditions ( [ bc - pde ] ) , the third boundary condition is not satisfied and is replaced with for a fixed .consequently , the self - similar solution ( [ self - similar ] ) predicts blows up in a finite time with positive and positive . although the scaling of the self - similar solution ( [ self - similar ] ) is compatible with the scaling transformation ( [ asymptotic - reduction ] ) used in the derivation of the linear advection diffusion equation ( [ pde ] ), it does not satisfy the physical requirements of the couette flow on figure [ fig - scheme ] .secondly , chugunova et al . constructed steady state solutions of the boundary value problem ( [ pde])([bc - pde ] ) and showed numerically that these steady states can serve as attractors of the bounded dynamical evolution of the model .both the steady states and the initial conditions that lead to bounded dynamics of the model are not physically acceptable as has to be monotonically increasing with as .note that both and are positive for the steady states of the boundary value problem ( [ pde])([bc - pde ] ) . to simulate the boundary value problem ( [ pde])([bc - pde ] ) , a different numerical method is proposed in .this method is still based on finite differences and matlab software package . because the fourth - order derivative term is approximated implicitly and the first - order derivative term is approximated explicitly , the system of finite - difference equations was closed in without any additional equation on the velocity .consequently , was found from the system of finite - difference equations .we also mention that both recent works of and used a priori energy estimates and found some sufficient conditions , under which the smooth physically relevant solutions of the boundary value problem ( [ pde])([bc - pde ] ) blows up in a finite time . in particular ,if , or , or , the smooth solution blows up in a finite time . however , these sufficient conditions do not eliminate existence of smooth physically relevant solutions , for which oscillates and decays to zero as .the remainder of our paper is organized as follows .section 2 outlines the numerical method for approximations of the boundary value problem ( [ pde])([bc - pde ] ) .section 3 presents the numerical simulations of the boundary value problem truncated on the finite interval ] , for sufficiently large so that and are approximately zero . forany fixed , let denote the numerical approximation of at , and let denote the equal step size between adjacent grid points . by applying the second - order central - difference formulas to partial derivatives in the fourth - order equation at each , we obtain the differential equations : which are accurate up to the truncation error . 
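for reference , the second - order central - difference replacement of the fourth spatial derivative used here is the five - point stencil ( h_{i-2} - 4 h_{i-1} + 6 h_i - 4 h_{i+1} + h_{i+2} ) / dx^4 . the short sketch below ( with an arbitrary smooth test function , chosen only for illustration ) confirms its second - order accuracy .

```python
import numpy as np

# Check of the five-point central-difference stencil for the fourth derivative,
#   h''''(x_i) ~ (h[i-2] - 4 h[i-1] + 6 h[i] - 4 h[i+1] + h[i+2]) / dx**4,
# which is second-order accurate.  The test function is an arbitrary smooth choice.

def fourth_derivative(h, dx):
    """Stencil applied at interior points i = 2 .. n-3."""
    return (h[:-4] - 4 * h[1:-3] + 6 * h[2:-2] - 4 * h[3:-1] + h[4:]) / dx ** 4

for n in (100, 200, 400):
    x = np.linspace(0.0, 10.0, n)
    dx = x[1] - x[0]
    h = np.exp(-0.3 * x) * np.cos(x)
    exact = np.exp(-0.3 * x) * ((0.3 ** 4 - 6 * 0.3 ** 2 + 1) * np.cos(x)
                                + (4 * 0.3 ** 3 - 4 * 0.3) * np.sin(x))
    err = np.abs(fourth_derivative(h, dx) - exact[2:-2]).max()
    print(n, err)          # error drops by roughly a factor 4 when dx is halved
```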
since and for all , the above formula needs only to be applied to interior points with the necessity to approximate for the grid point and for the grid point .the value of can be found from the boundary condition : and can be found from the decay condition : which are again accurate up to the truncation error .it remains to define from the third boundary condition .the velocity can be expressed by applying the central difference approximation to the dynamical equation : where can be found from the third boundary condition in : writing the system of differential equations in the matrix form we use heun s method to evaluate solutions of the system of differential equations .let denote the numerical approximation of at and let denote the time step size ( not necessarily constant ) . by heun s method , we obtain the iterative rule ,\ ] ] where the initial vector is obtained from the initial condition .note that the coefficient matrix depend on since it is defined by the variable velocity .nevertheless , is constant in .the global error of heun s method is , so the global truncation error for the numerical approximation is . the explicit version of heun s method is stable only when therefore , in practice we shall use the implicit heun s method ( which is stable for all ) , by solving the system of linear equations where is the identity matrix .however , because the coefficient matrix on the left - hand side contains an unknown value of , a prediction - correction method is necessary for solving this system of equations as follows .first , is approximated using to predict the value of , which is then used to predict the value of using equation .second , is updated from the prediction to obtain the corrected values of and . since the implicit method is used in both prediction and correction steps , the unconditional stability is preserved .we use the finite - difference method to compute approximation of the boundary value problem ( [ pde2])([bc2 ] ) , after truncation on the finite interval $ ] with sufficiently large .since the time evolution features blow - up in a finite time , an adaptive method is used to adjust the time step after each iteration to maintain the local truncation error of the numerical method at a certain tolerance level .figure [ fig - num-1 ] shows the numerical approximation of the boundary value problem ( [ pde2])([bc2 ] ) for the initial function with and ( the one shown on figure [ fig - initial ] ) .the initial velocity is determined from this initial condition by equation ( [ velocity1 ] ) as .the top left panel of the figure shows the profile of versus at different values of until the terminal time of our computations .the top right panel of the figure shows the change of the velocity in time computed dynamically from equation ( [ velocity2 ] ) .the bottom left panel shows the boundary value versus and the bottom right panel shows the boundary value versus .it is clear from the top panels that the velocity diverges towards at , whereas the solution remains regular near the blow - up time .recall that the velocity is determined from equation by the quotient of and , where must be strictly negative for all for physically acceptable solutions .we can see from the bottom panels that the value of is about to cross zero from the negative side at the blow - up time , whereas is also approaching zero but at a much slower rate than .this also indicates that is approaching negative infinity at the blow - up time .+ to measure the error of numerical computations , we shall 
derive dynamical constraints on the time evolution of a smooth solution of the boundary value problem ( [ pde2])([bc2 ] ) .differentiating equation with respect to once and twice and taking the limit , we obtain and using equation , we determine at from the central difference approximation : then , the value of is approximated from equations and : comparing the value of determined from equation with the central - difference approximation for the numerical derivative we can estimate the numerical error of the solution at the boundary .figure [ fig - num-2 ] ( left ) compares the value of between equations and .the error remains small , therefore , the assumption that the solution is smooth ( or at least ) at the boundary is valid up to numerical accuracy .figure [ fig - num-2 ] ( right ) shows the time step size adjusted to preserve the same level of the local error of .we set if the error estimation procedure yields larger values of .this truncation is needed because the error drops significantly near , and the error estimation procedure would otherwise produce large values of .we have performed computations with other initial conditions from the two - parameter family of functions in ( [ ic ] ) .figure [ fig - num-3 ] ( left ) shows the dynamical evolution of the velocity starting with a positive velocity , which is determined from the initial function ( [ ic ] ) with and . although the terminal time is much larger compared with the case of the negative initial velocity on figure [ fig - num-1 ] , a blow - up is still detected from this initial condition .the solution looks similar to the solution shown in figure [ fig - num-1 ] ( top left ) and hence is not shown .figure [ fig - num-3 ] ( right ) shows the adjusted time step size .we note that the time step size is small at the initial time because the smooth solution appears from the initial condition , which does not satisfy infinitely many constraints of the boundary value problem ( [ pde2])([bc2 ] ) .it is also small near the terminal time because of the blow - up of the smooth solution .but is not too small at intermediate values of , when the solution is at a slowly varying phase . during this slowly varying phase , is nearly constant but changes nearly linearly in time ( similarly to figure [ fig - num-1 ] ( bottom left ) and hence is not shown ) .figure [ fig - num-4 ] illustrates the dynamical evolution of the velocities under different initial conditions given by the two - parameter function ( [ ic ] ) . from these plots , together with the previous examples , it is clear that the blow - up time depends on the initial velocity and a large positive initial velocity leads to a much longer slowly varying phase before the solution blows up .nevertheless , the blow - up in a finite time is unavoidable for any physically acceptable initial conditions .in order to determine numerically the blow - up time and the rate of blow - up of the velocity , we will fit the numerical data near the terminal time of our computations with either the logarithmic function or the power function . 
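both candidate fits reduce to straight - line regressions after a suitable transformation : for the power law |v| = c ( t_0 - t )^{-p } the reciprocal of d ln|v| / dt equals ( t_0 - t ) / p , while for a logarithmic law of the form v = c_1 ln ( t_0 - t ) + c_2 the reciprocal of dv / dt is again linear in t , so in either case the slope and intercept of a fitted line determine the rate constant and the blow - up time . the sketch below illustrates the power - law version on synthetic data generated from an assumed power law ; it is not run on the actual computed velocities .

```python
import numpy as np

# Toy data: a velocity that blows up like a power law, V(t) = -C * (t0 - t)**(-p).
# The linearised power-law fit recovers p and t0 from a straight-line regression of
# y = 1 / (d ln|V| / dt) against t, since y = (t0 - t) / p.  All numbers are invented.

t0_true, p_true, C = 1.87, 0.5, 0.8
t = np.linspace(1.60, 1.85, 60)                      # samples close to the blow-up time
V = -C * (t0_true - t) ** (-p_true)

dlogV = np.gradient(np.log(np.abs(V)), t)            # numerical derivative of ln|V|
y = 1.0 / dlogV

slope, intercept = np.polyfit(t, y, 1)               # y ~ intercept + slope * t
p_fit = -1.0 / slope                                  # since y = t0/p - t/p
t0_fit = intercept * p_fit
print("fitted p:", p_fit, " fitted t0:", t0_fit)      # close to p = 0.5, t0 = 1.87

# The same data forced through the logarithmic linearisation, y2 = 1/(dV/dt), does
# not follow a straight line, which is one way to discriminate between the two laws.
y2 = 1.0 / np.gradient(V, t)
print("curvature of y2 (quadratic coefficient):", np.polyfit(t, y2, 2)[0])
```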
for the logarithmic function ,we first differentiate both sides of the expression with respect to and take the inverse : then the constants and can be determined from a linear regression applied to equation ( [ regression-1 ] ) .we will skip the numerical procedure for determining the values of since it does not affect the blow - up behavior of .for the power function , we can take the logarithm of both sides of the expression and then differentiate the above expression : the constants and can now be determined from a linear regression applied to equation ( [ regression-2 ] ) . in practice, we found that the blow - up rate in the power law or the coefficient in the logarithmic law vary with different time windows ( i.e. the range of which is used to fit the data ) .the following output gives a comparison of numerical data under different time windows and different tolerance levels , using the initial condition ( [ ic ] ) with and . here _ starting time _ means the time at which we start to fit the data , and _ error _ is the mean squared error ( mse ) defined by where is the total number of data points used in the regression ..... initial condition : a = 0.5 , b = 0 ; initial velocity : v(0 ) = -1.2500 tolerance level : 0.0001 , number of iterations : 330 , terminal time = 1.8729 starting time blowup time t0 blowup rate p or c1 error powerlaw : 1.8176 1.8749 0.3916 0.000017 1.8356 1.8752 0.3994 0.000003 1.8550 1.87560.4104 0.000000 loglaw : 1.8176 1.8678 0.5371 23.732740 1.8356 1.8695 0.6135 33.681247 1.8550 1.87160.7578 68.934686 tolerance level : 1e-006 , number of iterations : 1448 , terminal time = 1.8732 starting time blowup time t0 blowup rate p or c1 error powerlaw : 1.8172 1.8753 0.3927 0.000033 1.8360 1.8757 0.4009 0.000006 1.85471.8760 0.4118 0.000000 loglaw : 1.8172 1.8688 0.5500 25.226547 1.8360 1.8705 0.6343 33.937325 1.8547 1.8724 0.7854 58.894321 .... the above table shows that the errors from the logarithmic law are much larger than the errors from the power law in all cases . also , the error of the power law reduces as the time window moves closer to the blow - up time , whereas the error of the logarithmic law increases .moreover , the blow - up times determined from the logarithmic law are smaller than the terminal time of our computations .hence , the logarithmic law deviates from the numerical data near the blow - up time .as we can see in figure [ fg : datafitting ] , the power function fits our numerical data much better than the logarithmic function . in order to confirm that the blow - up of the velocity occurs according to the power law ( [ pwrlaw ] ) compared to the logarithmic law ( [ loglaw ] ) , we use the scaling transformations suggested in and replace the time variable by the new variable where is a positive integer . in new time variable with , the model ( [ pde2 ] ) is rewritten in the form whereas the boundary conditions or the numerical method are unaffected . with the power law ( [ pwrlaw ] ) as , the new time variable in ( [ time - variable ] ) approaches a finite limit if and becomes infinite if . 
with the logarithmic law ( [ loglaw ] ) , the new time variable would always approach a finite limit for any integer .figure [ fig - power-1 ] shows the dependence of versus the rescaled time variable for ( left ) and ( right ) .it is obvious that the blow up occurs in finite time if and in infinite time if , which corroborates well with the previous numerical data suggesting that .this figure rules out the validity of the logarithmic law ( [ loglaw ] ) .we have checked that the rescaled time variable for also extends to infinite times , similarly to the result for .we note that the dependence of versus the original time variable can be obtained by numerical integration of the integral in ( [ time - variable ] ) .we have checked that both time evolutions of in with and recover the same behavior of in , which resembles the top left panel of figure [ fig - num-1 ] except times near the blow - up time , where the computational error becomes more significant . using the scaling transformation ( [ time - variable ] ) with in the casewhen as , we can define a more accurate procedure of detecting the blow - up rate in the power law ( [ pwrlaw ] ) .first , we note that if as , then as .hence as with . using now the linear regression in log - log variables for and , we can estimate the coefficient , and then .the following table shows several computations of and for different initial and terminal times .all other parameters are fixed similarly to the previous numerical computations . .... starting time terminal time regression slope q blow - up rate p 36.0943 723.3424 0.5345 0.4697 121.7362 723.3424 0.5221 0.4797 272.5828 78034.1670 0.5044 0.4956 2393.6301 78034.1670 0.4997 0.5003 .... the results of data fitting suggest that the power law gives a consistent estimation of the blow - up rate , with .let us now discuss why the behavior as appears a generic feature of smooth solutions of the boundary value problem ( [ pde2])([bc2 ] ) . using equations ( [ velocity1 ] ) and ( [ r5 ] ) , we obtain the dynamical equation on : let us now assume that there is such that where and . solving the differential equation ( [ contact ] ) near the time , we obtain under the constraint that .the asymptotic rate ( [ asymptotic - rate ] ) corresponds to the power law with .figure [ fig - power-2 ] shows the behavior of absolute values of ( left ) and ( right ) versus the rescaled time variable given by ( [ time - variable ] ) with .we can see that the assumption , that is , is bounded away from zero near the blow - up time , is justified numerically .we note that the time evolution in the rescaled time variable allows us to identify this property better than the time evolution in the original time variable , which is shown on the bottom right panel of figure [ fig - num-1 ] .we have also checked from the linear regression in log - log coordinates that as with , in consistency with the asymptotic rate ( [ asymptotic - rate ] ) .we conclude from the numerical simulations of the boundary value problem ( [ pde2])([bc2 ] ) that , for any suitable initial condition in the two - parameter form , there always exists a finite positive time such that as , although the blow - up time varies from different initial velocity . 
with a large positive initial velocity , the solution tends to have a longer phase of slow motion before it eventually blows up , whereas a negative initial velocity yields a much smaller value of the blow - up time .the numerical results also suggest that the behavior of near the blow - up time satisfies the power law , with a blow - up rate .this numerical observation corroborates a simple analytic theory for the blow - up of the velocity of contact lines in the reduced model ( [ pde])([bc - pde ] ) . based on earlier numerical evidences in ,a similar result should also hold for the nonlinear thin - film equation ( [ model ] ) .an open problem for further studies is to develop a more precise and computationally efficient numerical method for solutions of the boundary value problem ( [ pde2])([bc2 ] ) . because the model equation is already a fourth - order differential equation, we shall avoid using any numerical methods that involves higher - order central differences .in addition , because of the unknown variable , it is difficult to use other higher - order implicit methods to solve the system of differential equations after discretization .thus , the finite difference method has a limited accuracy .therefore , a different approach is needed , for instance , by using the collocation method involving the discrete fourier transform .pelinovsky , a.r .giniyatullin , and y.a .panfilova , on solutions of a reduced model for the dynamical evolution of contact lines " , transactions of nizhni novgorod state technical university n.a .alexeev n.4 ( 94 ) ( 2012 ) , 4560 .pelinovsky and a.r .giniyatullin , finite - time singularities in the dynamical evolution of contact lines " , bulletin of the moscow state regional university ( physics and mathematics ) * 2012 * n.3 ( 2012 ) , 1424 .
we study numerically a reduced model proposed by benilov and vynnycky ( j. fluid mech . * 718 * ( 2013 ) , 481 ) , who examined the behavior of a contact line with a contact angle between liquid and a moving plate , in the context of a two - dimensional couette flow . the model is given by a linear fourth - order advection - diffusion equation with an unknown velocity , which is to be determined dynamically from an additional boundary condition at the contact line . the main claim of benilov and vynnycky is that for any physically relevant initial condition , there is a finite positive time at which the velocity of the contact line tends to negative infinity , whereas the profile of the fluid flow remains regular . additionally , it is claimed that the velocity behaves as a logarithmic function of time near the blow - up time . compared to the previous computations based on comsol built - in algorithms , we use the matlab software package and develop a direct finite - difference method to study dynamics of the reduced model under different initial conditions . we confirm the first claim but also show that the blow - up behavior is better approximated by a power function than by a logarithmic function . this numerical result suggests a simple explanation of the blow - up behavior of contact lines .
dynamic pet studies provide the opportunity to image functional metabolic parameters of tissue in - vivo [ ] .although there have been many developments in this direction [ e.g. , , , , muzi et al .( ) , veronese et al .( ) ] , no procedure has yet been widely adopted for routine use .most often quantitation of dynamic pet studies is based on consideration of a single time point for a user - defined region of interest ( roi ) . in view of the complexity of pet imaging and its expense , this is unsatisfactory . as most radiotracers used in pet act in a linear and time - invariant fashion , dynamic pet imaging measures the convolution between the activity of the tracer in the arterial blood supply and the tissue impulse response .the impulse response is known as the tissue residue function . in statistical termsthe residue is the life table associated with the collection of pet tracer atoms introduced , typically by intravenous injection , to the circulatory system .the residue has its roots in the seminal indicator dilution work of .kinetic analysis of pet data is substantially concerned with modeling and estimation of the residue function . to this end, there are a suite of commonly used compartmental models [ ] . however , while compartmental models adequately represent the biochemistry of well - mixed homogeneous in - vitro samples , they are not necessarily well suited to represent micro - vascular flows and micro - heterogeneity that are part of in - vivo tissue [ bassingthwaighte ( ) , li , yipintsoi and bassingthwaighte ( ) and ] .consequently , there is interest in more flexible nonparametric approaches to the estimation of the tissue residues . among the most popular approachesis the spectral method introduced by . hereresidues are approximated by nonnegative sums of exponentials , whose amplitudes and rate constants are adapted to the data ; see veronese et al .( ) for a recent treatment and review .spectral methods have the complexity of requiring estimating of a set of intrinsically nonlinear exponential rate constants .this is a significant practical computational challenge ; see zeng et al .( ) , for example .but spectral methods also have a theoretical limitation in that they force the negative - derivative of residue function , aka the transit time density of tracer - atoms , to be monotonically decreasing from a mode at zero .this assumption is at odds with micro - vascular flow measurements which support a more log - normal or gamma - like form for the transit time density . 
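the contrast between these two families of transit-time densities is easy to see numerically. the sketch below is purely illustrative: it takes a residue built from a nonnegative sum of decaying exponentials, the form produced by spectral-type methods, and a gamma transit-time density with shape parameter larger than one, and locates the mode of the negative derivative of the residue in each case; the parameter values are arbitrary.

....
import numpy as np
from math import gamma as gamma_fn

t = np.linspace(0.0, 10.0, 2001)

# residue from a nonnegative sum of exponentials; its transit-time density is -dR/dt
R_exp = 0.5 * np.exp(-0.3 * t) + 0.5 * np.exp(-2.0 * t)
h_exp = -np.gradient(R_exp, t)

# gamma transit-time density with shape a > 1 (mode at (a - 1)*scale > 0)
a, scale = 3.0, 0.7
h_gam = t ** (a - 1.0) * np.exp(-t / scale) / (gamma_fn(a) * scale ** a)

print("mode of exponential-sum transit-time density:", t[np.argmax(h_exp)])
print("mode of gamma transit-time density          :", t[np.argmax(h_gam)])
....

the first mode is at zero for any nonnegative combination of decaying exponentials, while the second sits away from zero, which is the qualitative feature reported for micro-vascular flow measurements.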
if the residue is to be estimated nonparametrically , it is desirable to have a procedure , like that given in hawe et al .( ) or osullivan et al .( ) , that does not impose an unrealistic physiologic assumption on the residue function ab initio .our focus here is on voxel - level estimation .the method approximates voxel - level residues by a mixture of _ basis _ residue functions that have been optimized by applying a backward elimination technique to a segmentation of the entire volume of data .the use of mixtures in this setting is not new [ ] , however , unlike the previous work , which has involved approximation of mixtures of compartmental models by a compartmental model form , the current approach does not require this step .an important aspect of the methodology is decomposition of the tissue residue to separately focus on characteristics associated with short transit times of tracer atoms in the vasculature , distinct from slower transit times associated with blood - tissue exchange and retention. this decomposition parallels the often separate consideration given to early and late life - time mortality patterns in human life tables .the methodology leads to a practical quadratic programming - based algorithm for voxel - level residue reconstruction and associated generation of functional metabolic images of parameters of interest . for a typical dynamic pet study the analysis , including the segmentation steps , runs on a 3.2 ghz pc in less than 30 minutes . in the context of pet scanning in cancer applications ,that is , about 90% of all clinical pet imaging studies , this is completely adequate for routine operational use .section [ sec2 ] presents the basic statistical models underlying the approach .inference and model selection methodology are developed in section [ sec3 ] .section [ sec4 ] presents illustrations with imaging data from both normal subjects and cancer patients .performance of the methodology for fluoro - deoxyglucose ( fdg ) and water ( h2o ) imaging studies is considered in section [ sec5 ] .this includes comparisons with compartmental model analysis and more theoretical evaluation via simulations .let represent the concentration of tracer atoms at time in a tissue voxel with three - dimensional spatial coordinate . 
, measured as activity per unit mass ( mg ) of tissue , evolves in response to the localized arterial input function , denoted and measured as activity per unit volume ( ml ) of whole blood .the basic assumption of most pet imaging is that the interaction of the tracer with the tissue can be approximated as a linear and time - invariant process .thus , the measurable concentration arises as a convolution between the tissue response and the arterial input function here is tissue response and , borrowing terminology of the work of meier and zierler ( ) on indicator dilutions , is called the residue function .formally , is proportional to the impulse response of the tissue at location and has units of flow ( ml ) .if all tracer atoms were instantaneously introduced in a unit volume of blood , would give the number remaining in the tissue as a function of time .if tracer atoms per ml are introduced in the arterial blood supply to the tissue , for small time increments , then the number of those atoms , per gram of tissue , remaining in the tissue over the time interval ] of the scanning and , apart from the complexity of indirect measurement by the convolution equation , there is censoring because tracer atoms decay over time .such restrictions will be familiar , as they arise in traditional life - table work . to better understand the residue, it is helpful to separate the early ( vascular ) component from the later components that are associated with longer term interaction with the tissue and also retention . using as a cutoff for rapid ( large - vessel ) transit times, a decomposition of the residue is obtained as where , and is the constant .we refer to as the rapid vascular component , as the exchangeable or in - distribution component and as the ( apparent ) extracted component ._ apparent _ is used because the ultimate ( asymptotic ) extraction is not strictly observable based on the finite duration of the study , however , as it is common to choose large enough that there would be little further decline in the residue at times greater than , should be a good approximation to the relevant flux of the tracer atoms into tissue .the decomposition in ( [ eq2 ] ) is dependent on the value of ( and ) . for human imaging ,the choice of minute is reasonable , as this matches the early vascular distribution time for intravenously injected contrast agents , upon which the standard scanning duration used to assess local blood volume parameters in computerized tomography ( ct ) and magnetic resonance ( mr ) is based [ provenzale et al .( ) ] . in the absence of other information ,the temporal resolution of a pet study for the residue can be no better than the temporal sampling of scanning and the sharpness of the arterial input resulting from the intravenous injection of the tracer .each component of the residue decomposition in ( [ eq2 ] ) is itself a residue or life table .the extracted component is constant but the vascular and distribution components carry information beyond scale .key parameters for a residue function are its maximum and integral values , which represent the flow and volume occupied by the collection of tracer atoms defined by the residue [ meier and zierler ( ) , hawe et al . 
( ) ] .so based on the decomposition in ( [ eq2 ] ) , we identify five summary parameters of particular interest vascular flow and volume ( , ) , distribution flow and volume ( , ) and the apparent flux ( ) which is seen as the net flux of tracer into tissue up to time .a further parameter of interest is the extraction fraction , defined by . in the case where the residue is exponential , for example , a 1-compartment model [ e.g. , bassingwaighte ( ) ] with rate constants and , for , the flow reduces to and as the exchangeable volume , for a 2-compartment fdg model [ phelps et al .( ) ] with and , as , and the flux value .substitution of the residue decomposition ( [ eq2 ] ) into ( [ eq1 ] ) gives a decomposition of the tracer tissue concentration as a sum of vascular ( ) , in distribution ( ) and extracted ( ) components the sum is the extravascular component .examples of this are shown in section [ sec4 ] .as is constant , is the product of flux and the cumulative arterial activity . at late time points , vascular and exchangeable concentrationare safely ignored so the late time concentration is effectively proportional to the cumulative arterial activity .this is the basis of a model - free approach to the analysis of flux [ patlak , blasberg and fenstermacher ( ) ] . a variety of blood - tissue exchange models , for example , bassingwaighte ( ) , huang and phelps ( ) and gunn et al .( ) , as well as many general life - table methods , for example , cox and oakes ( ) and , might be used to approximate tissue residue functions .we should allow any approach that does not systematically misrepresent the physiologic / metabolic processes involved .validation of model formulations for pet tracers is difficult . in - vitro studiesclarify important biochemical transformations involved , but satisfactory in - vivo validation of model assumptions related to the structure of micro - vasculature flows and heterogeneities is not possible .the most widely used one- and two - compartment models in pet reduce to representation of the residue by sums of mono - exponential functions . while these models may adequately represent the biochemistry involved ,their ability to describe the complexities of vascular transport is limited .indeed , in the standard compartmental models the tracer atom transit time density is always monotonically decreasing , so the modal transit time for the nonextracted tracer is always zero .physiologically this is difficult to justify [ bassingwaighte ( ) ] .we use an additive model that approximates the local tissue residue by a positive linear sum of a fixed set of distinct basis residue functions , , that have themselves been derived from a nonparametric analysis of time courses arising from a full segmentation of the data volume .the model is where the s are nonnegative constants . for simplicity , the basis residues are normalized to have maximum of unity , that is , for . assuming can be described by a delayed version of a sampled arterial time - course , which , in view of the temporal resolution of pet , is reasonable , equation ( [ eq4 ] ) implies \\[-8pt ] \nonumber & & { } + \alpha_{j}(x ) \bar c_{j } \bigl(t-\delta(x ) \bigr),\end{aligned}\ ] ] where for . for known delay , ,the model is linear in the -coefficients .note estimation of the s in ( [ eq5 ] ) allows the local residues to be determined by equation ( [ eq : rr ] ) ; from them associated flow and volume parameters of section [ sec2.1 ] can be recovered .\1 . 
if the s in ( [ eq5 ] ) correspond to specific regional time courses , a mixture model interpretation for the model can be developed .this is reasonable , as the population of available metabolic pathways for a tracer atom is determined by the profile of enzymes , receptor ligands or transporters that are represented . across a collection of voxels these profiles vary with a greater representation of certain characteristics in some voxels than in others .thus , the transit time for a randomly chosen tracer atom in voxel can be expected to select a metabolic pathway in accordance with the distribution of pathways available within the voxel , and the -coefficients ( scaled to sum to unity ) could be viewed as a set of mixing proportions ; see osullivan ( , ) .the form in equation ( [ eq : rr ] ) can also be viewed as an example of a general multivariate factor analysis ( without reference to residues ) .such models have been used to describe pet time - course data ; see , for example , kassinen et al .( ) , , lee et al .( ) and zhou et al .( ) .a tissue region can contain significant nonarterial blood vessels . depending on tissue location , separate signals associated with major blood pools in the circulatory system , such as the right ventricle of the heart , the lungs , venous blood and the venous supply path from the injection site to the heart , might need to be considered .this can be accomplished by augmenting equation ( [ eq5 ] ) to include terms representing nonarterial blood signals .obviously this is particularly relevant in the thoracic imaging where the direct or indirect ( via a spillover artifact ) impact of nonarterial cardiac and pulmonary blood signals can be significant .venous blood vessels arise throughout the body , so there is a case for always including a venous signal term . but rarely does the simultaneous measurement of arterial and venous blood activity arise in a pet study .venous blood can be viewed as a response to the arterial supply , so the venous signal is sensibly represented as a whole - body response to the arterial supply .thus , if an explicit venous signal is not available , the structure of our modeling approach allows for the component -residues to adapt to so that the overall tissue residue will have the venous component included .as mean transit times from arterial to venous blood are short ( minute ) , our proposed decomposition of the residue with will be a combination of pure arterial , venous contributions .hence , should be viewed as an estimate of the volume ( per mg ) of large arterial and large venous vessels in the tissue .thus , if an explicit venous blood signal is included ( ) , the local estimate of blood volume should be the sum of the venous volume [ i.e. , and from the estimated residue in ( [ eq2 ] ) .\3 . 
due to limited resolution, voxel - level data are subject to mixing and partial volume effects , which are reasonably modeled by mixtures ; see also section [ sec3.3 ] .the estimation of voxel - level residues involves three steps .first , a segmentation procedure is applied to extract scaled time - course patterns from the measured set of voxel - level time courses in the data .next , the time courses are analyzed to recover a nonparametric residue function for each and a backward elimination procedure is used to obtain a reduced set of basis residue functions .the final step does voxel - level optimization of -coefficients and delay in equation ( [ eq5 ] ) with subsequent evaluation of the voxel residue in equation ( [ eq : rr ] ) and the key parameters identified in section [ sec2.1 ] .the details involved in each of these steps are presented below . as the analysis is based on a voxel - level fitting process ,the residuals associated with the fitting process provide useful diagnostic information .some proposals for examining the temporal and spatial patterns in those residuals are indicated .a split - and - merge segmentation procedure from is used .the procedure groups voxels on the basis of the shape of the measured time course , the _ scaled _ time course .the splitting employs a principal component analysis to recursively divide the tissue volume into a large collection ( typically 10,000 ) of hyper - rectangular regions whose scaled time - course patterns show maximal homogeneity .the merging process then recursively combines ( initially with a constraint to ensure that segments consist of contiguous collections of voxels ) these regions to create a collection of regions with high average within - region homogeneity . for the analyses reported here, the number of segments used is taken to be large enough to explain 95% of the variance in the scaled time - course data , about 712 segments for a typical cerebral study and 1520 segments for a chest or abdominal study .the choice of the number of segments was examined in . 
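the voxel-level fitting step in ( [ eq5 ] ) reduces, for a fixed delay, to a nonnegative linear least squares problem in the alpha-coefficients. the sketch below illustrates only that step, with an assumed arterial input curve, an assumed set of normalized basis residues including the constant patlak term, and a simple grid search over the delay; it is not the segmentation or basis-selection code described in this section.

....
import numpy as np
from scipy.optimize import nnls

dt = 0.25                                           # frame spacing in minutes (assumed)
t = np.arange(0.0, 60.0, dt)
Ca = (t / 0.5) * np.exp(1.0 - t / 0.5)              # illustrative arterial input curve

# normalized basis residues: constant (patlak) term plus two decaying shapes
bases = [np.ones_like(t), np.exp(-0.1 * t), np.exp(-1.5 * t)]

def tissue_curves(delay):
    # convolve each basis residue with the delayed arterial input
    Ca_d = np.interp(t - delay, t, Ca, left=0.0)
    return np.stack([np.convolve(Ca_d, b)[:len(t)] * dt for b in bases], axis=1)

def fit_voxel(Ct):
    # grid search over the delay; nonnegative least squares for the alpha coefficients
    best = None
    for delay in np.arange(0.0, 2.01, 0.25):
        alpha, resid = nnls(tissue_curves(delay), Ct)
        if best is None or resid < best[0]:
            best = (resid, delay, alpha)
    return best

# synthetic voxel: mixture of the three bases with delay 0.5, plus a little noise
rng = np.random.default_rng(0)
Ct = tissue_curves(0.5) @ np.array([0.02, 0.10, 0.30]) + 0.002 * rng.standard_normal(len(t))
print(fit_voxel(Ct))
....

in the paper this structure is handled with quadratic programming tools and, for the segment-level fits, with weights reflecting the quasi-poisson variance of the time-course data; the sketch omits the weighting for brevity.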
as the focus of the algorithm is on _ scaled _ time - course information , the extracted segments are well suited for use in subsequent mixture modeling .a particular advantage of the scaled approach is that it results in fewer segments than might otherwise be required to explain a comparable proportion of variance in the data .it is helpful to display segments to connect with anatomy .if the average scaled time course for a segment is given by a vector , imaging the -weighted average of the voxel - level time - course data in the segment is effective .some examples are shown in section [ sec4 ] .the time bins of data acquisition are for .the segmentation algorithm provides a mean time course and associated sample variance vector , for , for each of segments .arising from the underlying poisson emissions that are the basis of the imaging process , regional time - course data have an approximate quasi - poisson structure [ see ; carson et al .( ) and ] , that is , for .thus , up to known calibration factors ( incorporating time - bin duration and voxel dimension ) , the mean values are proportional to the integrated concentration per milligram of radioactive tracer atoms which in turn is a function of the regional residue and input arterial supply : where represents isotope decay and is an appropriate delay .note here represents the total ( radioactive and nonradioactive ) tracer atom concentration at time .an initial set of possible residue basis elements are obtained by representation of the s in terms of b - splines [ osullivan et al .( ) ] . herethe b - spline coefficients and the delay are optimized by weighted least squares with weights given by .we seek to express the segment residues in terms of a reduced set of basis residue forms .since retention is apparent in nearly all residues , it makes sense to ensure that the constant residue function ( a _ patlak _ term , cf .section [ sec2.1 ] ) is a fixed member of any basis .thus , the focus of the basis search is on representation of the nonretained residue components .suppose we have a set of such normalized basis elements denoted for as well as the constant unit value patlak residue , .with these , the residue for the segment ( ) is approximated by the ( nonnegative ) linear combination by substitution into ( [ eq : rb ] ) , the -coefficients and delay can be optimized to the observed segment data .a weighted least squares fitting is used with weights given by .an approximate unbiased risk assessment criterion is used to obtain an overall assessment of the -component basis .the target loss is the weighted square error difference between the true segment mean vectors and their estimated values based on the -component approximation ^ 2.\ ] ] with and , so in the case that the vector is a linear function of the weighted data , that is , , ] , where and . 
here ,to simplify the notation , we have dropped the subscripts and .this variance approximation will become more reliable at high doses when the constraints on the -coefficients are , typically , not active .recall from equation ( [ eq7 ] ) , involves the convolution of the normalized residue and the arterial input function , .so if is expressed as where has amplitude of unity , can be expressed as ^{-1},\ ] ] with the same as but with now involving convolution of the normalized residue and normalized arterial input .thus , at high doses the variance and , consequently , the mse , will be inversely proportional to dose ( ) .this is consistent with , as was found for fdg in the simulation .the slower convergence ( ) for h2o may be a reflection of a greater dependence on constraints , due to the higher noise . when constraints are active , the standard weighted least squares covariance formula will not be reliable .we have presented an approach to the estimation of voxel - level tracer residues from pet time - course data .the technique uses a data - adaptive mixture model that allows for voxel - level variation in the time of arrival of the tracer in the arterial supply .the mixture representation of local residues is plausible and has been used previously with basis residues that are a compartmental model or have simple exponential forms [ and ] .the present work shows that it is possible to also use nonparametric forms for the basis residues .this allows the possibility to better investigate potential deviations from compartmental - like descriptions of tissue residues .computationally , the linearity of mixture models is attractive , as it facilitates the implementation based on efficient use of standard quadratic programming tools .the work has a reliance on multivariate statistical methods and uses backward elimination guided by an unbiased risk type model selection statistic .residue functions are life tables for the transit time of radiotracer atoms . just as infant and elderly mortality patterns might be given separate attention in a human life table , decomposition of the residue can provide insight into the tracer kinetics .our approach emphasizes decomposition of residue to focus on flow and volume characteristics of vascular and in distribution transport as well as the ( apparent ) rate of extraction of the tracer by tissue , that is , flux .thus , we have a 5-number summary for the residue . the life - table perspective on the tissue residue emphasized in this paper may encourage broader interest in adapting methods from mainstream survival analysis for application to the growing needs for quantitation in pet studies and for related contrast tracking techniques used in computerized tomography and magnetic resonance [ schmid et al . 
( ) ] .pet imaging has grown in importance particularly in the context of cancer , where over 90% of clinical imaging with pet is carried out .having more sophisticated kinetic analysis tools , such as residue analysis , can enhance the type of information recovered from these studies .this may potentially lead to better procedures for selecting and monitoring cancer treatments in order to optimize the patient outcomes .a number of current clinical imaging trails with pet in cancer already have reliance on detailed kinetic analysis for extraction of diagnostic information .given the nature of the problems involved , there is an opportunity for statistics to play a greater role in these developments .we are grateful to the referees , associate editor and the editor for a number of comments which led to significant improvements to the manuscript .
most radiotracers used in dynamic positron emission tomography ( pet ) scanning act in a linear time - invariant fashion so that the measured time - course data are a convolution between the time course of the tracer in the arterial supply and the local tissue impulse response , known as the tissue residue function . in statistical terms the residue is a life table for the transit time of injected radiotracer atoms . the residue provides a description of the tracer kinetic information measurable by a dynamic pet scan . decomposition of the residue function allows separation of rapid vascular kinetics from slower blood - tissue exchanges and tissue retention . for voxel - level analysis , we propose that residues be modeled by mixtures of nonparametrically derived basis residues obtained by segmentation of the full data volume . spatial and temporal aspects of diagnostics associated with voxel - level model fitting are emphasized . illustrative examples , some involving cancer imaging studies , are presented . data from cerebral pet scanning with fluoro - deoxyglucose ( fdg ) and water ( h2o ) in normal subjects are used to evaluate the approach . cross - validation is used to make regional comparisons between residues estimated using adaptive mixture models and more conventional compartmental modeling techniques . simulation studies are used to theoretically examine mean square error performance and to explore the benefit of voxel - level analysis when the primary interest is a statistical summary of regional kinetics . the work highlights the contribution that multivariate analysis tools and life - table concepts can make in the recovery of local metabolic information from dynamic pet studies , particularly ones in which the assumptions of compartmental - like models , with residues that are sums of exponentials , might not be certain .
barrier options are contracts whose pay - offs are activated or de - activated when the underlying process crosses a pre - specified level .these contracts are among the most popular path - dependent options .to value barrier options , a model needs to be sufficiently flexible to calibrate call option prices at different strikes and maturities .however , it is desirable to maintain a degree of analytical tractability to facilitate the calculations , especially for the greeks or the sensitivities .these sensitivities describe the change in the model price with respect to a change in the underlying parameter , and are important for an appreciation of the robustness of the model s results .it is well known that the accurate evaluation of the greeks is a challenging numerical problem , since standard pde or monte - carlo methods are generally slow and unstable .it is well established that the geometric brownian motion model lacks the flexibility to capture features in financial asset return data such as the skewness and the excess kurtosis .it can not calibrate simultaneously to a set of call option prices . to address these limitations ,one of the approaches consists of introducing jumps in the price process by replacing the brownian motion by a lvy process .lvy models , such as the vg , cgmy , nig , kobol , generalised hyperbolic , and kou s double exponential model , have been successfully applied to the valuation of european - type options .we refer to cont and tankov , boyarchenko and levendorskii , and schoutens for background and references on the application of lvy models in option pricing .as observed by many authors , such as eberlein and kluge , or carr and wu , lvy models are generally not capable of calibrating option prices simultaneously across strikes and maturities .empirical studies of s&p500 index data by carr and wu , and pan , show that the implied jump intensities and the implied jump size distributions vary greatly over time .the prices of short - dated options exhibit a significantly larger risk - premium than that of long - dated options .this is reflected in the thicker tails of the implied marginal risk - neutral distributions , especially at short maturities .for example , in the equity markets , short - dated out - of - the money put options are relatively expensive since the risk of a large negative jump in the share is priced . because of the stationarity and independence of the increments of a lvy process , the moments exhibit a rigid term structure that is different from what is observed in market data .this lack of flexibility can be overcome by considering models driven by additive processes , which have independent and time - inhomogeneous increments .additive models have been used for equity option pricing by carr et al . , galloway and nolder , and by eberlein and kluge for interest rate option pricing . motivated by modelling considerations , carr proposed a self - similar additive model for the log - price , and reported good calibration results across time .galloway and nolder carried out a calibration study for various related models .eberlein and kluge constructed an hjm model driven by an additive process with continuous characteristics , and they obtained a good fit for swaptions by using piecewise constant parameters . 
in this paperwe follow a similar approach : we model the share price by an additive process with hyper - exponential jumps .hyper - exponential distributions are finite mean - mixtures of exponential distributions which can approximate monotone distribution arbitrarily closely . as first observed by asmussen et al . , most of the popular lvy models used in mathematical finance possess completely monotone lvy densities and can therefore be approximated well by hyper - exponential lvy models .a hyper - exponential additive model is sufficiently flexible to allow for an accurate calibration to european option prices across strikes and multiple maturities .in addition , if the parameters are piecewise constant , the model admits semi - analytical expressions for prices and greeks of barrier options .there currently is a body of literature devoted to various aspects of pricing barrier options . in the setting of lvy models , a transform - based approach to pricebarrier options has been developed in a number of papers , including geman and yor , kou and wang , davydov and linetsky , boyarchenko and levendorskii .in particular , kou and wang , kou et al . , sepp , lipton , and jeannin and pistorius considered the cases of lvy processes with double - exponential and hyper - exponential jumps . in this paper ,the transform algorithm that we develop is based on a so - called matrix wiener - hopf factorization .such matrix factorizations were first studied by london et al . and rogers for ( noisy ) fluid models .jiang and pistorius developed matrix - wiener factorization results for regime - switching models with jumps .we show that by suitably randomizing the parameters the distributions of the infimum and supremum of the randomized hyper - exponential additive process can be explicitly expressed in terms of a matrix wiener - hopf factorization .we use these results to derive semi - analytical expressions for the first - passage time probabilities , for the prices , and for the greeks of barrier options , up to a multi - dimensional transform .the actual prices are subsequently obtained by inverting this transform .as a numerical illustration , we calibrate the hyper - exponential additive model to eurostoxx prices quoted on 27 february 2007 at four different maturities .we calculate in this setting down - and - in digital and down - and - in call option prices and greeks ( delta and gamma ) .to invert the transform , we use a contour deformation algorithm and a fractional fast fourier transform algorithm , developed by talbot , bailey and swarztrauber , and chourdakis , .we also compare it to monte - carlo euler scheme simulations .we find that the algorithm is accurate and stable , and much faster than monte - carlo simulations ( especially for the greeks ) .this method is suitable for applications in which the number of periods is not too large ( up to four ) . when a larger number of periods is required , the direct inversion method used here is no longer feasible .the subject still needs to be further investigated and is left for future research .the remainder of the paper is organized as follows . in section[ sec : add ] we define the hyper - exponential additive model and present its application to european call option pricing . in sections[ sec : wh ] and [ sec : price ] we derive semi - analytical expressions for the first - passage probabilities of a hyper - exponential additive process in terms of a matrix wiener - hopf factorisation , and for the prices and greeks of barrier options . 
in section [ sec : num ] we present numerical results .we consider an asset price process modelled as the exponential of an additive process .informally , an additive process can be described as a lvy process with time - dependent characteristics or , equivalently , as a process with independent but non - stationary increments .we briefly review below some key properties of additive processes .for further background on additive processes and their applications in finance , we refer to sato , and to cont and tankov .an additive process can be defined more formally as follows .[ def : locallevy ] for a given , \} ] , has an infinitely divisible distribution with lvy triplet ; that is , the characteristic function of is given by ] is determined by the collection of lvy triplets \} ] and \times\mathbb r\to\mathbb r ] , ( with )we set for ] whose parameters are constant during the periods ] . in the first period as a jump - diffusion with positive and negative exponential jumps with means and jump rates and . in the second period a brownian motion with drift .the idea is to randomize the times between maturities by replacing and with independent exponential random variables having means and .this results in a regime - switching jump - diffusion with the regime only jumping from state 1 to state 2 , according to the generator matrix we associate to the regime - switching process a continuous markov additive process , which can be informally obtained by replacing positive and negative jumps with stretched slopes of and ( see asmussen for background on this embedding ) . as described in , in this case the generator of the modulating markov processis given by with the matrices and in theorem [ thm : wh ] given by are illustrated . here is a hyper - exponential additive process on the period ] and ] , the process evolves as a jump - diffusion with volatility , drift and exponentially - distributed jumps .the positive and negative jumps are exponentially - distributed with means and jump rates and , respectively . during the second period ] , which corresponds to the restriction that in it was shown that the vector , where denotes the state space of , is given by where the matrix is given in ( [ eq : k ] ) . 
combining these results with theorem [ thm : wh ]we find that and the proof is complete .under the hyper - exponential additive model with piecewise constant parameters , the characteristic function at time is explicitly given by with as given in .the price of a european call with maturity can thus be efficiently calculated using a well - established fourier transform method , which we briefly recall .the fourier transform over of , the price of a call option with log - strike and maturity , can be explicitly expressed in terms of the characteristic function as follows : \td k\\ & = & s_0\te{-rt_i}\frac{\phi^{(i)}(v-(\alpha+1)\mathbf i)}{(\alpha+\mathbf iv)(\alpha+1+\mathbf iv)}.\label{eq : fftcall}\end{aligned}\ ] ] since the call pay - off function itself is not square - integrable in the log - strike , the axis of integration is here shifted over which corresponds to exponentially dampening the pay - off function at a rate , which is usually taken to be ( see carr and madan ) .the call option prices are then determined by inverting the fourier transform : evaluate down - and - in digital option prices ( did ) , we invert the multi - dimensional laplace transform to obtain where and are vertical lines in the complex plane defined by for with and fixed values of , chosen such that all the singularities of the transform are coordinate - wise on the left of the lines .many algorithms approximate the integrals in by a finite linear combination of the transform at some specific nodes with certain weights .three approaches have been studied by abate et al . , based on fourier series expansion , combinations of gaver functionals , and deformation of the integral contour . herewe concentrate on the last method developed by talbot , since reports in the literature ( e.g. ) suggest that this approach offers high performance for a short time of execution , which our numerical results confirm .we write with , , , and since is a real valued function , is also equal to the real part of the integral on the right - hand side of , which can be used to reduce the calculation by a factor of two . to illustrate the evaluation of the integrals, we present concrete expressions for the approximating sums when ( which is the setting that will be implemented later on ) . defining obtain where is equal to .the weights and the nodes are given by ^ 2)-\mathbf i \cot(k\pi / m))e^{q_k}.\end{aligned}\ ] ] since the weights and nodes are independent of the transform , the calculation time of the algorithm can be reduced by pre computing and storing weights and nodes .the speed of convergence and the accuracy of the talbot algorithm will depend on the regularity of the laplace transform .although universal error bounds are not known , abate et al . showed numerically that the single parameter can be used to control the error and can be seen as a measure for the precision .they found after extensive numerical experiments that for a large class of laplace transforms the relative error is approximately .for high dimensional inversion , extra accuracy in the inner sums may be needed to obtain a sufficient degree of precision for the outer sums , which can be achieved by increasing . to evaluate down - and - in call option prices ( dic ) , we invert the fourier - laplace transform over log - strike and time periods . forthe inversion of the laplace transform we again apply the talbot algorithm . 
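the talbot step can be illustrated in one dimension. the sketch below uses the fixed-talbot nodes and weights in the form of abate and valko / abate and whitt, which appears to match the expressions quoted above, and inverts a transform with a known inverse as a check; the multi-period prices require nesting this sum over each laplace variable, with a larger m in the inner sums as discussed.

....
import numpy as np

def talbot(F, t, M=32):
    # fixed-talbot approximation of the bromwich integral for a single t > 0
    k = np.arange(1, M)
    theta = k * np.pi / M
    cot = 1.0 / np.tan(theta)
    delta = np.empty(M, dtype=complex)       # contour nodes (to be divided by t)
    gamma = np.empty(M, dtype=complex)       # weights
    delta[0] = 2.0 * M / 5.0
    gamma[0] = 0.5 * np.exp(delta[0])
    delta[1:] = (2.0 * np.pi / 5.0) * k * (cot + 1j)
    gamma[1:] = (1.0 + 1j * theta * (1.0 + cot ** 2) - 1j * cot) * np.exp(delta[1:])
    return (2.0 / (5.0 * t)) * np.sum((gamma * F(delta / t)).real)

# check on a transform with a known inverse: F(q) = 1/(q + 1), inverse exp(-t)
F = lambda q: 1.0 / (q + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, talbot(F, t), np.exp(-t))
....

the number of nodes m plays the role of the precision parameter discussed above; the weights and nodes depend only on m, so they can be precomputed and reused across evaluation points after rescaling by t.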
in the case of two time periods , with find that the fourier transform can be approximated by the following sums : unlike the case of the inversion of , we can not reduce the calculation time by two by using complex conjugates , since the function is not real valued .down - and - in call prices are then obtained by inverting the fourier transform over strike : where is the rate of exponential dampening .this integral is approximated for a set of log - strikes between as a summation : where are the integration weights defined by the trapezoidal rule with and otherwise , is the log - strike grid step - size and is the -grid step - size . carr and madan and chourdakis set . to have accurate prices for any strike , the log - strike grid spacing needs to be sufficiently small .a common approach is to apply directly the fast fourier transform ( fft ) and to compute the summation on a fixed log - strike range with using many points .bailey and swarztrauber , propose an alternative approach , and define the fractional fast fourier transform ( frfft ) , which uses an arbitrary range .chourdakis showed that the frfft can be used to calculate option prices with less points without losing accuracy .he reported that the frfft is times faster than the fft for the calculation of european option prices .since in our case the fourier transform is obtained numerically , we chose to employ the frfft .we now briefly specify the form of this algorithm in our setting , and refer for further details to , , .the resulting sum is then given by where , and . extending this summation into a circular convolution over yields where and this equation can be rewritten in terms of three discrete fourier transforms : with although the latter sum is computed by invoking two fourier transforms and one inverse fourier transform , this approach has the advantage of computing the option prices on a specific log - strike window with independent grids and and requires less points .
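the fractional fft step can be written compactly. the sketch below follows the bailey and swarztrauber construction referenced above: the sum of x_j e^{-2 pi i j k beta} over j is embedded in a circular convolution of length 2n and evaluated with three ffts, and the result is checked against the direct o(n^2) sum; the grid sizes are illustrative.

....
import numpy as np

def frft(x, beta):
    # computes G_k = sum_j x_j * exp(-2*pi*i*j*k*beta), k = 0..N-1, via three length-2N ffts
    N = len(x)
    j = np.arange(N)
    y = np.concatenate([x * np.exp(-1j * np.pi * beta * j ** 2), np.zeros(N)])
    z = np.concatenate([np.exp(1j * np.pi * beta * j ** 2),
                        np.exp(1j * np.pi * beta * (j - N) ** 2)])
    conv = np.fft.ifft(np.fft.fft(y) * np.fft.fft(z))[:N]       # circular convolution
    return np.exp(-1j * np.pi * beta * j ** 2) * conv

# check against the direct quadratic-cost sum
N, beta = 64, 0.013
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
direct = np.array([np.sum(x * np.exp(-2j * np.pi * beta * np.arange(N) * k))
                   for k in range(N)])
print(np.max(np.abs(frft(x, beta) - direct)))
....

because beta is a free parameter, the log-strike spacing and the integration grid spacing can be chosen independently, which is exactly the advantage over the plain fft noted above.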
in this paper we develop an algorithm to calculate the prices and greeks of barrier options in a hyper - exponential additive model with piecewise constant parameters . we obtain an explicit semi - analytical expression for the first - passage probability . the solution rests on a randomization and an explicit matrix wiener - hopf factorization . employing this result we derive explicit expressions for the laplace - fourier transforms of the prices and greeks of barrier options . as a numerical illustration , the prices and greeks of down - and - in digital and down - and - in call options are calculated for a set of parameters obtained by a simultaneous calibration to stoxx50e call options across strikes and four different maturities . by comparing the results with monte - carlo simulations , we show that the method is fast , accurate , and stable . _ keywords : _ hyper - exponential additive processes , matrix wiener - hopf factorization , first passage times , barrier options , multi - dimensional laplace transform , fourier transform , sensitivities . _ acknowledgements : _ we would like to thank p. howard and s. obraztsov for their support , and also d. madan for useful conversations . this research was supported by epsrc grant ep / d039053 , and was partly carried out while the authors were based at king s college london .
the centrality of spectral analysis in a wide range of scientific disciplines , there has been a variety of viewpoints regarding how to quantify distances between spectral density functions . besides the obvious ones which are based on norms , inherited by ambient function - spaces , etc . , there has been a plethora of alternatives which attempt to acknowledge the structure of power spectral density functions as a positive cone .the most most well known are the kullback - leibler divergence which originates in hypothesis testing and in bayes estimation , the itakura - saito distance which originates in speech analysis both belonging to bregman class ( ) , the bhattacharyya distance , and the ali - silvey class of divergences .their origin can be traced either to a probabilistic rationale ( as in the case of the kullback - leibler divergence ) or , to some ad - hoc mathematical construct designed to seek distance measures with certain properties ( as in the case of bregman and ali - silvey classes ) .the purpose of this work is to introduce certain new notions of distance which are rooted in filtering theory and provide intrinsic distance measures between any two _ power spectral density _ functions .our starting point is a prediction problem .we select an optimal predictive filter for an underlying random process based on the assumption that the process has a given power spectral density .we then evaluate the performance of such a filter against a second power spectral density which may be thought of as the spectral density function of the `` actual '' random process .the relative degradation of performance ( i.e. , variance of the prediction error ) quantifies a mismatch between the two functions .interestingly , it turns out to be equal to the ratio of the arithmetic over the geometric mean of the fraction of the two power spectra .the logarithm of the relative degradation serves a distance measure . infinitesimal analysis suggests a pseudo - riemannian metric on the manifold of power spectral density functions .the presence of such a metric suggests that geodesic distances may be used to quantify divergence between power spectra .indeed , a characterization of geodesics is provided , and certain logarithmic intervals are shown to satisfy the condition .the length of such intervals connecting two power spectral densities provides yet another notion of distance between the two .an identical approach based on the degradation of performance of smoothing filters leads to other expressions which , equally well , quantify divergence between power spectral densities .two observations appear to be universal .first that the mismatch between the `` shapes '' of spectral density functions is what turns out to be important .this is quantified by how far the ratio of the two spectral densities is from being constant across frequencies .the ratio of spectral density functions is reminiscent of the likelihood ratio in probability theory .the second observation is that all of the distance measures that we encountered , in essence , they compare different means ( i.e. , arithmetic , geometric , and harmonic , possibly , weighted ) of the ratio of the two spectral density functions or of their logarithms .it is quite standard , that e.g. 
, the argithmetic and the geometric means coincide only when the ratio is constant and have a gap otherwise .the same applies to a wider family of generalized means .thus , this observation suggests a much larger class of possible alternatives : quantify the divergence between ( the `` shape '' of ) two density functions using the gap between two generalized means of their ratio , or by the slackness of jensen - type of inequalities involving this ratio .the underlying mathematical construct appears quite distinct from those utilized in defining the bregman and the ali - silvey classes of distance measures .furthermore , the mathematical construct is deeply rooted in prediction theory and , at least in certain cases , can be motivated as quantifying degradation of performance as we explained earlier .consider a scalar zero - mean stationary random process and denote by its sequence of autocorrelation samples and by its power spectrum .thus , and while denotes expectation and `` '' denotes complex conjugation .we are interested in quadratic optimization problems with respect to the usual inner product the closure of , which we denote by , can be identified with the space of functions which are square integrable with respect to with inner product where and .further , the correspondence is a hilbert space isomorphism ( see ) .thus , least - variance approximation problems can be equivalently expressed in . in particular , the variance of the _ one - step - ahead prediction _error for the _ predictor _ is similarly , the variance of the error of the _ smoothing _filter is simply in general , the power spectrum is a bounded nonnegative measure on and admits a decomposition with a singular measure and the absolutely continuous part of ( with respect to the lebesgue measure ) . in general the singular parthas no effect on the minimal variance of the error , and the corresponding component of can be estimated with arbitrary accuracy using any `` one - sided '' infinite past .the variance of the optimal one - step - ahead prediction error depends only on the absolutely continuous part of the power spectrum and is given in terms by the celebrated szeg - kolmogorov formula stated below ( see and also , , ( * ? ? ?* chapter 6 ) , ) .+ with as above when , and zero otherwise . in case the prediction - error variance is nonzero and the random process is non - deterministic in the sense of kolmogorov . in this case , it can be shown that where is an outer function in the hardy space with , i.e. , is analytic in the unit disc and its radial limits are square integrable ( see ) .then , the linear combination serves as the optimal predictor of based on past observations and the least variance of the optimal prediction error becomes analogous expressions exist for the optimal smoothing error and the corresponding smoothing filter which uses both past and future values of .it is quite interesting , and rather straightforward , that while the variance of the optimal one - step - ahead prediction error is the _ geometric mean _ of the spectral density function , the variance of the error , when a smoothing filter utilizes both past and future , turns out to be the _ harmonic mean _ of the spectral density function .+ ( see ) [ prop : minvalue]with as above when , and zero otherwise . in case the variance of the optimal smoothing error is nonzero and the random process is nondeterministic in the sense that past and future specify the present which can be estimated with zero variance . 
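both formulas can be checked numerically on a first-order autoregressive spectrum, for which the answers are known in closed form. the sketch below uses f(theta) = sigma^2 / |1 - a e^{i theta}|^2, whose geometric mean equals sigma^2 and whose harmonic mean equals sigma^2/(1 + a^2); it also evaluates, for a second spectrum of the same type, the ratio of the arithmetic over the geometric mean of the fraction of the two densities mentioned in the introduction. the pole locations are arbitrary.

....
import numpy as np

theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
mean = lambda g: g.mean()                       # average with respect to d(theta)/(2*pi)

def psd_ar1(a, sig2=1.0):
    return sig2 / np.abs(1.0 - a * np.exp(1j * theta)) ** 2

f1, f2 = psd_ar1(0.8), psd_ar1(0.3)

print("geometric mean of f1 (one-step prediction error):", np.exp(mean(np.log(f1))))  # ~ 1
print("harmonic  mean of f1 (smoothing error)          :", 1.0 / mean(1.0 / f1))      # ~ 1/1.64
print("arithmetic over geometric mean of f1/f2         :",
      mean(f1 / f2) / np.exp(mean(np.log(f1 / f2))))                                  # >= 1
....

the last number exceeds one whenever the two spectra are not proportional, and it is the degradation ratio whose logarithm is taken as a distance in what follows.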
in this case( see ) is the image of the optimal smoothing error under the kolmogorov map , and that now consider two distinct spectral density functions and postulate a situation where filtering of an underlying random process is attempted based on the incorrect choice between these two alternatives .the variance is then compared with the least possible variance which is achieved when the correct choice is made ( i.e. , when the predictor is optimal for the spectral density against which it is being evaluated ) .the degradation of performance is quantified by how much the ratio of the two prediction - error variances exceeds the identity .this ratio serves as a measure of mismatch between the two spectral densities ( the one which was used to design the predictor and the one against which it is being evaluated ) .the resulting mismatch turns out to be _ scale - invariant _ i.e ., the expression is homogeneous .hence , as a measure of distance it actually quantifies distance between the positive rays that the two spectral density functions define , and thus , it quantifies distance between the respective `` shapes . ''it turns out that this distance is convex on logarithmic intervals and has a number of distance - like properties , short of being a metric .let us assume that both and hence , that for corresponding outer -functions normalized as before so that , for .obviously , denotes the geometric mean of as before , for .these expressions represent the least variances when the predictor is chosen on the basis of the correct spectral density function . if however , the predictor is based on whereas the underlying process has as its spectral density , then the variance of the prediction error turns out to be if we divide this variance by the optimal value we obtain this is the ratio of the _ arithmetic mean _ over the _geometric mean _ of the fraction of the two spectral density functions .the expression is not symmetric in the two arguments. the subscript `` '' signifies ratio of arithmetic over geometric means .the logarithm of is nonnegative and defines a notion of distance between rays of density functions .henceforth , we denote this logarithm by alternatively , we can view the above as slackness of a jensen - type inequality . before we discuss key properties of , we introduce a natural class of paths connecting density functions : for any two density functions , ,\ ] ] defines a _ logarithmic interval _ between and .the terminology stems from the fact that whenever the needed logarithms exist , .\ ] ] later on we will see that these represent geodesics on the manifold of density functions with respect to an induced pseudo - riemannian metric .[ prop : prop3 ] let , represent density functions defined on . the following hold : + [ cols= " > , < " , ] properties ( i ) , ( ii ) , and ( iv ) are a direct consequence of the corresponding properties given in proposition [ prop : prop3 ] for . property ( iii ) on the other hand follows as before from ( iv ) andthe fact that the derivative of at is zero .in order to illustrate the quantitative behavior of these measures , we consider three specific power spectra labeled , as before . 
these are shown in figure [ fig : spectra ] .we then consider the triangle formed with those power spectra as vertices and connected using logarithmic intervals .the interior of the triangle is similarly sampled at logarithmically placed points .in essence , we consider the family of power spectral densities .\ ] ] for each value of ( sampled appropriately ) , we evaluate , and compare these to the kullback - leibler divergence between the suitably normalized functions the normalization is necessary if the kullback - leibler divergence is to have properties of a distance measure ( i.e. , nonnegative when its arguments are different , etc . ) .thus , we denote the set of power spectra in ( [ points ] ) is thought of as a set of points forming an equilateral triangle , conceptually sitting on the -plane .then , the vertical axis represents distance from , measured using these three alternative measures .the corresponding surfaces are drawn in figures [ fig : surf1]-[fig : surf3 ] .the three power spectra used are as follows : ( ) , ( - - ) , ( -- ) ] ] ] ] there appears to be little qualitative difference between , , and .they are also quite similar in that it is easy to calculate functional forms for minimizers of either of these distance measures under moment constraints ( see section [ sec : minimizers ] below ) .hence , it is important to undercore that lacks an intrinsic interpretation as a distance measure between power spectra , in contrast to which therefore may be preferable for exactly that reason .a large class of spectral analysis problems is typified by the trigonometric moment problem where a power spectral density is sought to match a partial sequence of autocorrelation samples , i.e. , a positive function is sought such that see e.g. , .since , in general , the family of consistent s is large , a particular one is chosen `` closest '' to a given `` prior '' .maximum entropy spectral analysis , for instance , can be interpreted as seeking the spectral density closest in the kullback - leibler sense to one which is flat , i.e. , the prior in this case is the power spectral density of white noise ( see e.g. , ) . in the same spiritwe may pose the problem of seeking closest to in the sense of minimizing e.g. , and subject to the moment constraints ( [ momentconstraints ] ) . to this end , as usual , we introduce lagrange multipliers ( ) and form the lagrangian setting the variation of identically to zero for all perturbations of gives conditions that help identify the functional form of minimizing s .briefly , after we eliminate higher order terms .stationarity conditions require that the above is identically zero for all ( small ) functions .this leads to from which we deduce that a minimizing must be of the form with then , values for as well as for the lagrange multipliers must be determined so that in ( [ eq : functionalform ] ) satisfies ( [ momentconstraints ] ) and ( [ extra ] ) this can be done for instance using homotopy methods in .it is interesting that when , the minimizer is the same as in the one obtained by applying the maximum entropy principle ( e.g. , see ) , i.e. 
, it turns out to be an all - pole spectral density function which is of course uniquely identified by the moment constraints .evidently , in general , minimizing gives a different answer than the one obtained by minimizing or , by minimizing other distances .yet , all such problems are similar and can be dealt with in two steps .first identify the functional form of a minimizer and then determine values for the coefficients so as to satisfy ( [ momentconstraints ] ) .the latter step requires solving a nonlinear problem in general , and can be approached in a variety of ways ( e.g. , as in ) .infinitesimal perturbations about a given power spectral density function , when measured by any of , , or , give rise to nonnegative definite quadratic forms .these forms are in fact nonsingular on directions other than rays emanating from the origin .this is due to the fact that the aforementioned distances do not separate points on such rays while they give nonzero distance otherwise .they thus induce riemannian metrics on suitably defined manifold of spectral rays . in this section ( and the current paper )we focus on the particular metric induced by , we show how to characterize geodesics , and verify that logarithmic intervals are in fact geodesics . throughoutwe assume that all functions are smooth enough so that the indicated integrals exist .this , in particular , can be ensured if all spectral density functions are bounded and have bounded inverses as well as bounded derivatives .weaker conditions are clearly possible . for the purposes of this section we define ,\\ & & \mbox { with } f(\theta )> 0 , \mbox { and both } f(\theta),\frac{df(\theta)}{d\theta } \mbox { square integrable}\}.\end{aligned}\ ] ] with a suitable norm on and its derivative , becomes a ( banach ) manifold .we also recall the definition the -th norm , applicable to any on ] .these can be thought of as rays .they can be identified by pointing to one particular representative .thus , in particular , the set of rays can be identified with this set can be given the structure of a manifold and thought of as a set of probability density functions on ] , of spectral density functions connecting two given ones , namely and .note that is a function of two arguments , the path parameter and the frequency hence , we often write .the length traversed as varies from to is simply in the last step we eliminated higher order terms in inside the integral , since those integrate to zero .here and throughout `` '' ( dot ) , as in is used to denote derivative with respect to , i.e. , interestingly , the expression in ( [ eq : interesting ] ) only depends on .thus , if we define then and the requirement that the end point of coincide with and , readlily translates into boundary conditions for , namely and .the task of finding extremals of such integrals leads to euler - lagrange equations for the path .more specifically , the lagrangian corresponding to ( [ eq : length2 ] ) is and only depends on . therefore and the euler - lagrange equations simplify to being independent of .since enters in through an integral over , the partial derivative with respect to is infinitesimal .thus , we write which is independent of as we just explained . 
since where the latter term produces again a differential in , it follows that alternatively , which simply says that the variation of about the mean , as a function of , normalized by a `` standard deviation''-like quantity must be independent of .we summarize our conclusion as follows .[ prop : geodesics]given two spectral density functions , extremal ( geodesic ) paths ( $ ] ) connecting the two , in the sense of achieving a local extremal of the path integral , must satisfy ( [ eq : must ] ) for , i.e. , the left hand side of ( [ eq : must ] ) must be independent of . the proof has been established in the arguments leading to the proposition .we finally verify that logarithmic intervals satisfy ( [ eq : must ] ) .this is rather straightforward since , for the logarithm is a linear function of and the derivative is already independent of .the ratio plays a rle analogous to the likelihood ratio of probability theory .the length of logarithmic intervals can be computed in terms of this ratio by simple inspection since from ( [ eq : interesting ] ) is independent of .therefore the following statement holds .the length of the logarithmic path connecting two power spectral densities and ,is given by the proof follows in the arguments leading to the proposition .in a way completely analogous to the previous sections we may consider the increase in the variance of the smoothing error when a wrong choice between two alternatives is used to identify a candidate smoothing filter . thus , we begin with two density functions and assume that .accordingly we test the optimal smoothing filter based on against . as explained in section [ section : preliminaries ] , the -optimal smoothing filter gives rise to an error corresponding , via the kolmogorov mapping , to .hence , the variance of the smoothing error divided by the -optimal variance is interestingly , this can be rewritten as follows where is a normalized measure with variation one .expression ( [ eq : avsms ] ) shows the degradation as the square of the ratio of the mean - square of the fraction over its arithmetic mean .these two means , mean - square and arithmetic , are weighted by which is of course dependent on one of the two arguments .however , the expression is homogeneous and does not depend on scaling of either of the two arguments or .accordingly , we may define as a distance measure the presence of a data - dependent integration measure may by compared to the ( normalized ) kullback - leibler divergence in ( [ eq : kl ] ) .the expressions derived in the previous sections suggest that generalized means of the `` likelihood''-like ratio and their logarithms may be used as distance measures between `` shapes '' of density functions and .more specifically , we know that for any positive function , where denotes the -th generalized mean then with a value which depends on how `` far '' is from being constant .hence , we may use to quantify the distance between the `` shapes '' of and , and since is the geometric mean of ( see e.g. 
, ) , both that we encountered earlier are special cases of the above .weighted versions of weighted means may also be used for the same purpose , as suggested in section [ sec : iii ] .lengths of geodesics as suggested in section [ sec : riemannian ] present another possibility .indeed , a `` zoo '' of possible options emerges .assessing practical and theoretical merits of each is the subject of a future project .bregman , `` the relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming , '' _ ussr comput .math . and math ._ vol . 7 , pp .200 - 217 , 1967 . c. byrnes , t.t .georgiou , and a. lindquist , `` a new approach to spectral estimation : a tunable high - resolution spectral estimator , '' _ ieee trans . on signal processing _ ,* 48(11 ) : * 3189 - 3206 , november 2000 . t.t .georgiou , `` the maximum entropy ansatz in the absence of a time arrow : fractional - pole models , '' preprint , 18 pages : http://arxiv.org/abs/math/0601648/ u. grenander and g. szeg , * toeplitz forms and their applications * , chelsea , 1958 .
we present several natural notions of distance between spectral density functions of ( discrete - time ) random processes . they are motivated by certain filtering problems . first we quantify the degradation in performance of a predictor that is designed for a particular spectral density function but is instead used to predict the values of a random process having a different spectral density . the logarithm of the ratio between the variance of the resulting error and the corresponding minimal ( optimal ) variance produces a measure of distance between the two power spectra with several desirable properties . analogous quantities based on smoothing problems produce alternative distances and suggest a class of measures based on fractions of generalized means of ratios of power spectral densities . these distance measures endow the manifold of spectral density functions with a ( pseudo ) riemannian metric . we pursue one of the possible options for a distance measure , characterize the relevant geodesics , and compute the corresponding distances . power spectral density functions , distance measures .
codes are a family of codes that achieve capacity on binary , memoryless , symmetric ( bms ) channels and have low - complexity construction , encoding , and decoding algorithms . this is the setting we consider .polar codes have since been extended to a variety of settings including source - coding , non - binary channels , asymmetric channels , channels with memory , and more .the probability of error of polar codes is given by a union of correlated error events .the union bound , which ignores this correlation , is used to upper - bound the error probability . in this work ,we exploit the correlation between error events to develop a general method for lower - bounding the probability of error of polar codes .polar codes are based on an iterative construction that transforms identical and independent channel uses into `` good '' and `` bad '' channels .the `` good '' channels are almost noiseless , whereas the `` bad '' channels are almost pure noise .arikan showed that for every , as the proportion of channels with capacity greater than tends to the channel capacity and the proportion of channels with capacity less than tends to .the polar construction begins with two identical and independent copies of a bms and transforms them into two new channels , channel is a better channel than whereas channel is worse than .can be stochastically degraded to channel , which in turn can be stochastically degraded to .] this construction can be repeated multiple times ; each time we take two identical copies of a channel , say and , and polarize them , e.g. , to and .we call the operation a ` '-transform , and the operation a ` '-transform. there are possible combinations of ` '- and ` '-transforms ; we define channel as follows .let be the binary expansion of , where is the most significant bit ( msb ) .then , channel is obtained by transforms of according to the sequence , starting with the msb : if we do a ` '-transform and if we do a ` '-transform .for example , if , channel is , i.e. , it first undergoes a ` '-transform and then two ` '-transforms .overall , we obtain channels ; channel has input and output .i.e. , channel has binary input , output that consists of the output and input of channel , and assumes that the input bits of future channels are uniform .we call these _ synthetic channels_. one then determines which synthetic channels are `` good '' and which are `` bad '' , and transmits information over the `` good '' synthetic channels and predetermined values over the `` bad '' synthetic channels . since the values transmitted over the latter are predetermined , we call the `` bad '' synthetic channels _ frozen_. decoding is accomplished via the successive - cancellation ( sc ) decoder. it decodes the synthetic channels in succession , using previous bit decisions as part of the output .the bit decision for a synthetic channel is either based on its likelihood or , if it is frozen , on its predetermined value .i.e. , denoting the set of non - frozen synthetic channels by , where we denoted and similarly for the previous bit decisions .as non - frozen synthetic channels are almost noiseless , previous bit decisions are assumed to be correct .thus , when is sufficiently large , this scheme can be shown to achieve capacity as the proportion of almost noiseless channels is . to analyze the performance of polar codes , let denote the event that channel errs under sc decoding while channels do not .i.e. 
, the probability of error of polar codes under sc decoding is given by .let denote the event that channel errs given that a genie had revealed to it the true previous bits , i.e. we call an sc decoder with access to genie - provided previous bits a _ genie - aided decoder_. some thought reveals that ( see ( * ? ? ?* proposition 2.1 ) or ( * ? ? ?* lemma 1 ) ) .thus , the probability of error of polar codes under sc decoding is equivalently given by . in the sequelwe assume a genie - aided decoder . the events are disjoint but difficult to analyze .the events are easier to analyze , but are no longer disjoint . a straightforward upper bound for is the union bound : this bound facilitated the analysis of .an important question is how tight this upper bound is . to this end ,one approach is to develop a lower bound to , which is what we pursue in this work .a trivial lower bound on a union is better lower bounds may be obtained by considering pairs of error events : via the inclusion - exclusion principle , one can combine lower bounds on multiple pairs of error events to obtain a better lower bound this can also be cast in terms of unions of error events using . to our knowledge , to datethere have been two attempts to compute a lower bound on the performance of the sc decoder , both based on .the first attempt was in , using a density evolution approach , and the second attempt in applies only to the bec .we briefly introduce these below , but first we explain where the difficulty lies . the probability is given by an appropriate functional of the probability distribution of synthetic channel . however , the output alphabet of is very large .if the output alphabet of is then the output alphabet of has size .this quickly grows unwieldy , recalling that .it is infeasible to store this probability distribution and it must be approximated .such approximations are the subject of ; they enable one to compute upper and lower bounds on various functionals of the synthetic channel . to compute probabilities of unions of events ,one must know the joint distribution of two synthetic channels .the size of the joint distribution s output alphabet is the product of each synthetic channel s alphabet size , rendering the joint distribution infeasible to store .the authors of suggested to approximate the joint distribution of pairs of synthetic channels using a density evolution approach .this provides an iterative method to compute the joint distribution , but does not address the problem of the amount of memory required to store it .practical implementation of density evolution must involve quantization ( * ? ? ? * appendix b ) .the probability of error derived from quantized joint distributions approximates , but does not generally bound , the real probability of error . 
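for orientation , the following sketch specializes to the bec , where the per - channel error probabilities are exactly computable ( assumption : the two polar descendants of a bec with erasure probability eps are becs with erasure probabilities 2*eps - eps^2 and eps^2 , the standard bec recursion , and an erasure on a non - frozen channel is counted as an error ) ; it then evaluates the union upper bound and the trivial lower bound that this work aims to improve .

```python
# Illustrative sketch for the BEC: per-channel erasure probabilities of the
# synthetic channels, the union upper bound, and the trivial lower bound.
import numpy as np

def bec_synthetic_erasures(eps, n):
    """Erasure probabilities of the 2**n synthetic channels, indexed so that the
    most significant bit of the index corresponds to the first transform."""
    z = np.array([eps])
    for _ in range(n):
        worse, better = 2 * z - z ** 2, z ** 2
        z = np.ravel(np.column_stack((worse, better)))
    return z

n, eps, k = 10, 0.5, 512
z = bec_synthetic_erasures(eps, n)
info_set = np.argsort(z)[:k]              # the k most reliable synthetic channels

# treating an erasure as an error, P(B_i) = z[i] for non-frozen i, and the two
# bounds below sandwich the block error probability under SC decoding
union_upper = min(1.0, float(np.sum(z[info_set])))
trivial_lower = float(np.max(z[info_set]))
print(trivial_lower, "<= P(block error) <=", union_upper)
```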
for the special case of the bec , as noted and analyzed in , no quantization is needed , as the polar transform of a bec is a bec .thus , they were able to precisely compute the probabilities of unions of error events of descendants of a bec using density evolution .the same bounds for the bec were developed in using a different approach , again relying on the property that the polar transform of a bec is a bec .the authors were able to track the joint probability of erasure during the polarization process .furthermore , they were able to show that the union bound is asymptotically tight for the bec .in this work , we develop an algorithm to compute lower bounds on the joint probability of error of two synthetic channels .our technique is general , and applies to synthetic channels that are polar descendants of any bms channel .we use these bounds in to lower - bound the probability of error of polar codes . for the special case of the bec, we recover the results of and using our bounds .our method is based on approximating the joint distribution with a stochastically upgraded joint distribution that has a smaller output alphabet .the difficulty is that key ideas that are true for single channels no longer apply to joint distributions .for example , a degrading operation on a joint distribution may _ improve _ the performance of an sc decoder .as another example , a sufficient statistic for a single synthetic channel is not a sufficient statistic for the joint distribution : two symbols that are indistinguishable for one synthetic channel may have very different meanings for future synthetic channels .therefore , we develop methods that in one sense decouple the two synthetic channels yet in another sense couple them even further .in this section we provide a brief overview of our method , and lay out the groundwork for the sections that follow . we aim to produce a lower bound on the probability of error of two synthetic channels .since we can not know the precise joint distribution , we must approximate it .the approximation is rooted in stochastic degradation .degradation is a partial ordering of channels .let and be two channels .we say that is ( stochastically ) degraded with respect to , denoted , when there exists some channel such that if is degraded with respect to then is upgraded with respect to .degradation implies an ordering on the probability of error of the channels ( * ? ? ?* chapter 4 ) : if then .this is true only when the decoder used is the optimal decoder .the notion of degradation readily extends to joint channels .we say that joint channel via some degrading channel if as for the single channel case , if then , where is the probability of error of the optimal decoder for the joint channel .indeed our approach will be to approximate the joint synthetic channel with an upgraded joint channel with smaller output alphabet .there is a snag , however : this ordering of error probabilities does not hold , in general , for suboptimal decoders .the sc decoder , used for polar codes , is suboptimal . in the genie - aided case , which we consider here ,it is equivalent to performing a maximum likelihood decision on each marginal separately .we shall demonstrate the suboptimality of the sc decoder in .then , we will develop a different decoding criterion whose performance lower - bounds the sc decoder performance and is ordered by degradation . 
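the ordering of error probabilities under degradation ( for the optimal decoder ) is easy to see numerically ; the toy sketch below composes an arbitrary binary - input channel with a random degrading map and checks that the ml error probability can only increase . the channels are generic placeholders , not synthetic channels from the construction .

```python
# Degradation and the induced ordering of (optimal, ML) error probabilities:
# a toy binary-input channel W is composed with a random intermediate channel Q,
# and the ML error probability of the degraded channel is never smaller.
import numpy as np

rng = np.random.default_rng(0)

def ml_error(W):
    """ML error probability for a binary-input channel W (2 x |Y|), uniform input."""
    return 0.5 * float(np.sum(np.minimum(W[0], W[1])))

W = rng.dirichlet(np.ones(8), size=2)     # arbitrary channel with 8 output symbols
Q = rng.dirichlet(np.ones(3), size=8)     # degrading map: 8 symbols -> 3 symbols
W_degraded = W @ Q                        # W'(z|u) = sum_y W(y|u) * Q(z|y)

print(ml_error(W), "<=", ml_error(W_degraded))   # holds for every such Q
```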
while in general this decoder requires an exhaustive search , for the special case of polar codes this decoder is easily found .it does , however , imply a special structure for the degrading channel , which we use to our advantage .we investigate the joint distribution of two synthetic channels in .we first bring it to a more convenient form that will be used in the sequel .then , we explain how to polarize a joint synthetic channel distribution and explore some consequences of symmetry .further consequences of symmetry are the subject of , in which we transform the channel to another form that greatly simplifies the steps that follow .this form exposes the inherent structure of the joint distribution .how to actually upgrade joint distributions is the subject of .we upgrade the joint distribution in two ways ; each upgrades one marginal without changing the other .we can not simply upgrade the marginals , as we must consider the joint distribution as a whole .this is where the above - mentioned methods of coupling - decoupling come into play .we present our algorithm for lower - bounding the probability of error of polar codes in .this algorithm is based on the building blocks presented in the previous sections .we demonstrate our algorithm with some numerical results in .we denote by for .we use an iverson - style notation ( see ) for indicator ( characteristic ) functions .i.e. , for a logical expression , is whenever is not true and is otherwise .we assume that the indicator function takes precedence whenever it appears , e.g. , is for .in this section , we tackle decoding of two dependent channels .we explain how this differs from the case of decoding a single channel , and dispel some misconceptions that may arise .we then specialize the discussion to polar codes .we explain the difficulty with combining the sc decoder with degradation procedures , and develop a different decoding criterion instead .finally , we develop a special structure for the degrading channel that , combined with the decoding criterion , implies ordering of probability of error by degradation . a decoder for channel is a mapping that maps every output sequence to some .the average probability of error of the decoder for equiprobable inputs is given by the decoder is deterministic for symbols where assumes only the values and .for some symbols , however , we allow the decoder to make a random decision . if for some , then is the same whether or .thus , the probability of error is insensitive to the resolution of ties .we denote the error event of a decoder by it is dependent on the decoder , i.e. , ; we suppress this to avoid cumbersome notation . clearly , . the maximum - likelihood ( ml ) decoder , well known to minimize when the input bits are equiprobable , is defined by the ml decoder is not unique , as it does not define how ties are resolved .we now consider two _ dependent _ binary - input channels , and , with joint distribution .the optimal decoder for the joint channel considers both outputs together and makes a decision for both inputs jointly ; its probability of error is . rather than jointly decoding the input bits based on the joint output, we may opt to decode each channel separately .i.e. , the decoder of channel bases its decision solely on and completely ignores and vice versa .what are the optimal decoders and ? 
the answer depends on the criterion of optimality .denote by the error event of channel under some decoder .if we wish to minimize each individual channel s probability of error , we set each decoder as the ml decoder for the respective channel .we call this the _ individual maximum likelihood _ ( iml ) decoder , and denote its probability of error by .another criterion is to minimize , the probability that at least one of the decoders makes an error .we call the decoder that minimizes this probability using individual decoders for each channel the _ individual minimum joint _ ( imjp ) decoder . the event is not the same as the error event of the optimal decoder for the joint channel , even when the individual decoders turn out to be ml decoders. this is because we decode each input bit separately using only a portion of the joint output .clearly , the three decoders in successively use less information for their decisions .the optimal decoder uses both outputs jointly as well as knowledge of the joint probability distribution ; the imjp decoder retains the knowledge of the joint distribution , but uses each output separately ; finally , the iml decoder dispenses with the joint distribution and operates as if the marginals are independent channels .[ ex_joint bms where p(eml1ueml2 ) is not optimal ] the conditional distribution of some joint channel is given in .the marginals are channels and .the optimal decoder for the joint channel chooses , for each output pair , the input pair with the highest probability ; it achieves .it is easily verified that the ml decoders of the marginals decide that the input is when is received and vice versa ; thus , .if we change the decoder of channel to always declare , regardless of the output , then . by checking all combinations of decoders , it can be verified that this is indeed the minimum value of . as expected , holds .we now demonstrate that the probability of error of suboptimal decoders is not ordered by degradation . to this end, we degrade the joint channel in by merging the output symbols into a new symbol , and into a new symbol , . denote the new joint channel .for each of the marginals , the ml decoder declares upon receipt of , and otherwise .hence , for the degraded channel , , which is _ lower _ than . for the degraded channel ,the iml decoder is also the optimal decoder . as this is a degraded channel , however , ..conditional distribution . in this case, the ml decoders of the marginals do not minimize . [ cols="^,^,^,^,^ " , ] given a joint channel distribution , finding an optimal or iml decoder is an easy task .in both cases we use maximum - likelihood decoders ; in the first case based on the joint distribution , whereas in the second case based on the marginal distributions . on the other hand , finding an imjp decoder requires an exhaustive search , which may be costly . in the polar coding setting , as we now show , the special structure of joint synthetic channels permits finding the imjp decoder without resorting to a search procedure .let be some bms channel that undergoes polarization steps .let and be two indices of synthetic channels , where .the synthetic channels are and , where , , and .we call them the _ a - channel _ and the _ b - channel _ , respectively .their joint distribution is .i.e. , this is the probability that the output of the a - channel is and the output of the b - channel is , given that the inputs to the channels are and , respectively . 
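the three criteria can be compared by brute force on a small joint channel ; the sketch below ( with a randomly generated toy distribution , not the table of the example above ) computes the error probability of the jointly optimal decoder , of the individual ml ( iml ) decoders , and of the best individual - decoder pair ( imjp ) found by exhaustive search .

```python
# Brute-force comparison of the optimal joint decoder, the IML decoder, and the
# IMJP decoder on a toy joint channel P[ua, ub, ya, yb] (randomly generated,
# purely illustrative); inputs are uniform and decoders are deterministic maps.
import itertools
import numpy as np

rng = np.random.default_rng(1)
ny_a, ny_b = 3, 3
P = rng.dirichlet(np.ones(ny_a * ny_b), size=4).reshape(2, 2, ny_a, ny_b)

def p_error_joint_optimal(P):
    # ML on the pair (ua, ub) given (ya, yb): P_e = 1 - (1/4) * sum of per-output maxima
    return 1.0 - 0.25 * float(np.sum(np.max(P.reshape(4, ny_a, ny_b), axis=0)))

def p_union(P, dec_a, dec_b):
    # probability that decoder a errs on ua OR decoder b errs on ub
    err = 0.0
    for ua, ub, ya, yb in itertools.product(range(2), range(2), range(ny_a), range(ny_b)):
        if dec_a[ya] != ua or dec_b[yb] != ub:
            err += 0.25 * P[ua, ub, ya, yb]
    return err

# IML: each decoder is ML for its own marginal
Wa = P.sum(axis=(1, 3)) / 2.0             # Wa[ua, ya]
Wb = P.sum(axis=(0, 2)) / 2.0             # Wb[ub, yb]
iml = (tuple(np.argmax(Wa, axis=0)), tuple(np.argmax(Wb, axis=0)))

# IMJP: exhaustive search over all individual decoder pairs
imjp = min(((da, db) for da in itertools.product(range(2), repeat=ny_a)
                     for db in itertools.product(range(2), repeat=ny_b)),
           key=lambda pair: p_union(P, *pair))

print("optimal joint :", p_error_joint_optimal(P))
print("IML union     :", p_union(P, *iml))
print("IMJP union    :", p_union(P, *imjp))   # <= the IML value, >= the optimal value
```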
with probability ,the prefix of is .namely , has the form where denotes the remainder of after removing and .thus , for some arbitrary .the factor stems from the uniform distribution of . with some abuse of notation , we will write the right - most expression makes it clear that the portion of in which the input of the a - channel appears must equal the actual input of the a - channel .observe from that we can think of as the joint distribution up to a constant factor .indeed , we will use to denote the joint channel where convenient .which decoders can we consider for joint synthetic channels ?the optimal decoder extracts from the output of the b - channel and proceeds to decode .this outperforms the sc decoder but is also impractical and does not lend itself to computing the probability that is of interest to us , the probability that _ either _ of the synthetic channels errs .a natural suggestion is to mimic the sc decoder , i.e. , to use an iml decoder .the joint probability of error of this decoder may decrease after stochastic degradation , so we discard this option .consider two decoders and for channels and , respectively .as above , is the error event of channel using decoder .we seek a lower bound on .therefore , we choose decoders and that minimize ; this is none other than the imjp decoder .its performance lower - bounds that of the iml decoder ( see ) .as we shall later see , combined with a suitable degrading channel structure , the probability of error of the imjp decoder increases after stochastic degradation .conversely , it decreases under stochastic upgradation ; thus , combining the imjp decoder with a suitable upgrading procedure produces the desired lower bound .multiple decoders may achieve .one decoder can be found in a straight - forward manner ; we call it _ the _ imjp decoder .the following theorem shows how to find it .its proof is a direct consequence of that follow .[ thm_minimizing phi_a and phi_b for polar channel ] let and be two channels with joint distribution that satisfies .then , is achieved by setting as an ml decoder for and according to where note that is not a conditional distribution ; it is non - negative , but its sum over does not necessarily equal . holds for any two synthetic channels and that result from the same number of polarization steps of a bms , where index is greater than . in the polar code case ,the joint channel satisfies , so applies . in what follows ,denote [ lem_optimal phi2 is ml decoder ] let and be two dependent binary - input channels with equiprobable inputs and joint distribution that satisfies .let be some decoder for channel with error event .then , setting as an ml decoder for achieves .recall that .using , where the problem of finding the decoder that minimizes is separable over ; the terms , are non - negative and independent of .therefore , the optimal decoder is given by [ lem_optimal phi1 for given phi2 ] let and be two binary - input channels with joint distribution and equiprobable inputs .let be some decoder for channel .then , the decoder for channel given by minimizes . 
since the input is equiprobable , where the last equality is by .the problem of finding the decoder that minimizes is separable over ; clearly the optimal decoder is the one that sets using , if is chosen as an ml decoder , as per , we have the following expression for : the imjp and iml decoders do not coincide in general , although in some cases they may indeed coincide .we demonstrate this in the following example .let be a bsc with crossover probability .we perform polarization steps and consider the joint channel , i.e. and . when , we have . on the other hand ,when , . in either case , holds . in the special case where is a bec and and two of its polar descendants , the imjp and iml ( sc ) decoders coincide .this is thanks to a special property of the bec that erasures for a synthetic channel are determined by the outputs of the copies of a bec , regardless of the inputs of previous synthetic channels .we show this in appendix [ ap_imjp for bec ] .the imjp decoder is attractive for joint polar synthetic channels since , by , we can efficiently compute it .this was made possible by the successive form of the joint channel .thus , we seek degrading channels that maintain this form .let be a joint distribution of two synthetic channels and let .the marginal channels of are and .the most general degrading channel is of the form where and are probability distributions .this form does not preserve the successive structure of joint synthetic channels . even if satisfies , the resulting may not . to this end , we turn to a subset of degrading channels .recalling that , we consider degrading channels of the form i.e. , these degrading channels degrade , the output of , to , pass unchanged , and degrade , the remainder of s output , to .for this to be a valid channel , and must be probability distributions .this degrading channel structure is illustrated in . by construction ,degrading channels of the form preserve the form that is required for efficiently computing the imjp decoder as in .a degrading channel of the form is called _proper_. 
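the displayed formulas above did not survive extraction , so the following sketch is a reconstruction under one reading of the theorem and should be treated as illustrative rather than as the paper 's exact procedure : for a joint channel in successive form ( the b - channel output contains the a - channel input ) , the b - channel decision is plain ml , and the a - channel decision maximizes a score obtained by summing , over the remaining part of the b - channel output , the best - case ( maximum over the b - channel input ) joint probability .

```python
# Hedged reconstruction of the IMJP decoder for a joint channel in successive
# form: we work with the reduced kernel Wt[ua, ub, ya, t], where the full
# b-channel output is (ya, ua, t). The exact formulas in the text are garbled,
# so this follows one plausible reading and is illustrative only.
import numpy as np

def imjp_error(Wt):
    """Joint error probability P(E_a or E_b) of the decoder pair described above."""
    _, _, ny_a, nt = Wt.shape
    # b-channel decision: ML given the full b-output (ya, ua, t)
    phi_b = np.argmax(Wt, axis=1)                 # phi_b[ua, ya, t]
    # a-channel decision: maximize T(ya, ua) = sum_t max_ub Wt[ua, ub, ya, t]
    T = np.max(Wt, axis=1).sum(axis=-1)           # T[ua, ya]
    phi_a = np.argmax(T, axis=0)                  # phi_a[ya]
    correct = 0.0
    for ua in range(2):
        for ub in range(2):
            for ya in range(ny_a):
                for t in range(nt):
                    if phi_a[ya] == ua and phi_b[ua, ya, t] == ub:
                        correct += 0.25 * Wt[ua, ub, ya, t]
    return 1.0 - correct

# toy kernel: for each input pair, a distribution over (ya, t)
rng = np.random.default_rng(2)
Wt = rng.dirichlet(np.ones(6 * 4), size=4).reshape(2, 2, 6, 4)
print(imjp_error(Wt))
```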
we write to denote that channel is upgraded from with a proper degrading channel . we say that an upgrading ( degrading ) procedure is proper if its degrading channel is proper . ( figure : structure of a proper degrading channel : one block degrades the a - channel output , the a - channel input is passed through unchanged , and a second block degrades the remainder of the b - channel output ; the dashed box marks the resulting degraded joint channel . ) by marginalizing the joint distribution it is straightforward to deduce the following for joint synthetic channel distributions . [ lem_qa and qb are degraded from wa and wb ] if joint channel , then and . this lemma is encouraging , but insufficient for our purposes . it is easy to take degrading channels that are used for degrading a single ( not joint ) synthetic channel and cast them into a proper degrading channel for joint distributions . this , however , is not our goal . instead , we start with and seek an _ upgraded _ with a smaller output alphabet that can be degraded to using a proper degrading channel . this is a very different problem from the degrading one , and its solution is not immediately apparent . plain - vanilla attempts to use upgrading procedures for single channels fail to produce the desired results . later , we develop proper upgrading procedures that upgrade one of the marginals without changing the other . we now show that the probability of error of the imjp decoder does not decrease after degradation by proper degrading channels . intuitively , this is because the decoder for the original channel can simulate the degrading channel . we denote by the error event of channel under some decoder , and similarly define , , and . further , we denote by decoders for and by decoders for , . [ lem_conditions on degrading for conservation of pe ] let joint channel have marginals and . assume that , then . the proof follows by noting that for any decoder , we can find a decoder with identical performance . first consider the decoder for channel . denote by the result of drawing with probability . then , ; i.e. , this is the decoder that results from first degrading the a - channel output and only then decoding . next , consider the decoder for the b - channel . denote by the result of drawing with probability . then , similar to the a - channel case , . hence , the best decoder pair cannot do worse than the best decoder pair . let be a bms channel that undergoes polarization steps . the probability of error of a polar code with non - frozen set under sc decoding is given by where is the error probability of synthetic channel under ml decoding . obviously , for any , we have already mentioned the simplest such lower bound , . we now show that the imjp decoder provides a tighter lower bound .
to this end , denote where is the probability of error of channel under decoder .[ lem_imjp provides a tighter lower bound than max pe ] let be a bms channel that undergoes polarization steps , and let be the set of non - frozen bits .then , using , . by definition, the imjp decoder seeks decoders and that minimize the joint probability of error of synthetic channels with indices and .therefore , for any two indices and we have in particular , this holds for the indices that maximize the right - hand - side .this establishes the leftmost inequality of . to establish the rightmost inequality of ,we first show that for any , to see this , first recall that the imjp decoder performs ml decoding on the b - channel , yielding .next , we construct in which the -channel is noiseless , by augmenting the portion of its with , i.e. , channel can be degraded to using a proper degrading channel by omitting from the portion of the output and leaving unchanged .thus , .finally , denote . by ,for any we have and . since we obtain the proof .are instrumental for our lower bound , which combines upgrading operations and the imjp decoder .in this section , we study the properties of joint synthetic channels .we begin by bringing the joint synthetic channel into an equivalent form where the b - channel sml decision is immediately apparent .we then explain how to jointly polarize synthetic channels .finally , we describe some consequences of symmetry on joint distributions and on the imjp decoder . two channels and with the same input alphabet but possibly different output alphabets are called _ equivalent _ if and .we denote this by .channel equivalence can cast a channel in a more convenient form .for example , if is a bms , one can transform it to an equivalent channel whose output is a sufficient statistic , such as a -value ( see appendix [ ap_definition of d values ] ) , in which case the ml decoder s decision is immediately apparent .let be a joint synthetic channel .since the joint distribution is determined by the distribution of , we can transform to an equivalent channel in which the b - channel -value - value '' we mean the -value computed for channel . instead of -values , other sufficient statistics of the b - channel could have been used .our use of -values was prompted by their bounded range $ ] .] of symbol is immediately apparent .joint channel is in -value representation if the marginal satisfies we use the same notation for both the regular and the -value representations of the joint channel due to their equivalence .the discussion of the various representations of joint channels in applies here as well .in particular , we will frequently use to denote the joint synthetic channel distribution .the following lemma affords a more convenient description of the joint channel , in which , in line with the imjp decoder , the b - channel sml decision is immediately apparent .moreover , this description greatly simplifies the expressions that follow .channels and are equivalent and the degrading channels from one to the other are proper .[ lem_representation of joint bit - channel distribution using -values ] to establish equivalence we show that each channel is degraded from the other using proper degrading channels .the only portion of interest in is , as in either direction and are unchanged by the degrading channel .denote by the set of all symbols such that the b - channel -value of is , for fixed .then , where clearly , the b - channel -value of is . 
on the other hand , by and since all symbols in share the same b - channel -value , where and . in the sequel , we will use this lemma to convert to -value representation the result of polarizing a joint distribution in -value representation ( see ) . this is possible because holds for any representation of in which are the input and output , respectively , of the a - channel , is the input of the b - channel , and is the output of the b - channel . in particular , need not consist of inputs to channels . at this point the reader may wonder why we have stopped here and not converted the a - channel output to its -value . the reason is that this constitutes a degrading operation , which is the opposite of what we need . two a - channel symbols with the same a - channel -value may have very different meanings for the imjp decoder . thus , we cannot combine them into a single symbol without incurring loss . when the joint distribution is in -value representation , proper degrading channels admit the form it is obvious that all properties obtained from degrading channels of the form are retained for degrading channels of the form . by , we may assume that the degraded channel is also in -value representation . let be some joint synthetic channel distribution in -value representation . we wish to find the distribution of where . even though is in -value representation , after a polarization transform this is no longer the case . of course , one can always bring the polarized joint channel to an equivalent -value representation as in . the polar construction is shown in , where we explicitly state the different outputs of the polarized channels . we note that the top copy of outputs , jointly , , as its a - input is . ( figure : the joint polarization construction : two copies of the joint channel , the usual combining of their inputs , and the resulting inputs and outputs of the polarized joint channels . ) the input and output of are given by the input and output of are given by note that and are contained in . thus , the joint output of both channels is . the distribution of the jointly polarized channel is given by where we have shown how to generate from . another case of interest is generating from . denote the output of by . the output of is . from , we need only compute to find . this is accomplished by . if two channels are ordered by degradation , so are their polar transforms ( lemma 4.7 of ) . i.e. , if then and .
thisis readily extended to joint channels .[ lem_degradation is preserved for joint distributions-+ ] let bms channel . then .using and the definition of we have where is a proper degrading channel .[ lem_degradation is preserved for joint distributions ] if , then , for , .the proof follows similar lines to the proof of .expand using and expand again using the definition of joint degradation with a proper degrading channel . using the one - to - one mappings between the outputs of the polarized channels and the inputs and outputs of non - polarized channels ,the desired results are obtained .the details are mostly technical , and are omitted .the operational meaning of is that to compute an upgraded approximation of we may start with , an upgraded approximation of , and polarize it .the result is an upgraded approximation of .this enables us to iteratively compute upgraded approximations of joint synthetic channels . whenever the joint synthetic channel exceeds an allotted size , we upgrade it to a joint channel with a smaller alphabet size and continue from there .we make sure to use proper upgrading procedures ; this preserves the special structure of the joint channel and enables us to compute a lower bound on the probability of error .in we derive such upgrading procedures . since a sequence of polarization and upgrading steps is equivalent to upgrading the overall polarized joint distribution , using we obtain that the imjp decoding error of a joint distribution that has undergone multiple polarization and proper upgrading steps lower - bounds the sc decoding error of the joint distribution that has undergone only the same polarization steps ( without upgrading steps ) .a binary input channel is called _ symmetric _ if for every output there exists a conjugate output such that .we now extend this to joint synthetic channels .[ def_double symmetry db version ] joint channel exhibits double symmetry if for every , there exist , , such that we call the a - conjugate ; the b - conjugate ; and the ab - conjugate .we can also cast this definition using the regular ( non--value ) representation of joint channels in a straight - forward manner , which we omit here .[ ex_conjugates for w-+ ] let be a bms channel , and consider the joint channel formed by its ` '- and ` '-transforms , .what are the a- , b- , and ab - conjugates of the a - channel output ?recall that the output of the a - channel consists of the outputs of two copies of .denote , where and are two possible outputs of with conjugates , respectively .we then have by symmetry of we obtain , , and .indeed , we leave it to the reader to show that holds for the -value representation of the joint channel .pairs of polar synthetic channels exhibit double symmetry .one can see this directly from symmetry properties of polar synthetic channels , see ( * ? ? ?* proposition 13 ) .alternatively , one can use induction to show directly that the polar construction preserves double symmetry ; we omit the details .this implies the following proposition .[ prop_double symmetry for wab ] let be the joint distribution of two synthetic channels and that result from polarization steps of bms channel . then , exhibits double symmetry .the following is a direct consequence of double symmetry .[ lem_double symmetry and d values ] let be a joint distribution in -value representation that exhibits double symmetry. then 1 . for the b - channel , and have the same b - channel -value .2 . 
for the a - channel , and have the same a - channel-value , and and have the same a - channel -value .the first item is obvious from .for the second item , note that where is by . in the same manner , and have the same a - channel -value , .implies that an sc decoder does not distinguish between and when making its decision for the a - channel .we now show that a similar conclusion holds for the imjp decoder .[ lem_symmetry of t ] let be some output of .then holds for joint channels given in -value representation , .this is easily seen by following the proof with minor changes . under the -value representation ,becomes the remainder of the proof hinges on double symmetry and follows along similar lines to the proof of , with replaced with and accordingly the sum over replaced with a maximum operation over .implies that the imjp decoder does not distinguish between and .[ cor_imjp decoder makes the same decision for ya and bconj ya ] let be the imjp decoder for the a - channel. then this section we introduce the symmetrizing transform .the resultant channel is _ degraded _ from the original joint distribution yet has the same probability of error .its main merit is to decouple the a - channel from the b - channel .this simpler structure is the key to upgrading the a - channel , as we shall see in .the sc decoder observes marginal distributions and makes a decision based on the -value of each synthetic channel s output .in particular , by , the sc decoder makes the same decision for the a - channel whether its output was or and the b - channel decision is based on without regard to . by , the imjp decoder acts similarly .that is , the imjp decoder makes the same decision for the a - channel whether its output is or , and the decision for the b - channel is based solely on .we conclude that if the a - channel were told only whether its output was one of , it would make the same decision had it been told its output was , say , .this is true for either the sc or imjp decoder .consequently , either decoder s probability of error is unaffected by obscuring the a - channel output in this manner .this leads us to define a _symmetrized _ version of the joint synthetic channel distribution , , as follows .let and does not matter .i.e. , is a _ set _ containing both and . ] and define [ lem_symmetrized and non symmetrized channels have the same pe ] let be a joint synthetic channel distribution , and let be its symmetrized version .then , the probability of error under sc ( imjp ) decoding of either channel is identical . by for the sc decoder or for the imjp decoder ,if the decoder for the symmetrized channel makes an error for some symbol then the decoder for the non - symmetrized channel make an error for both and , and vice - versa .therefore , denoting by the error indicator of the decoder , {\sum_{u_a , u_b \vphantom{{\accentset{\circ}{y}}_a}}}\smashoperator[r]{\sum_{{\accentset{\circ}{y}}_a , d_b } } { \accentset{\circ}{w}}_{a , b}({\accentset{\circ}{y}}_a , u_a , d_b|u_a , u_b ) \mathcal{e } \\ &\stackrel{\mathclap{(a)}}{= } \frac{1}{4 } \smashoperator[l]{\sum_{u_a , u_b\vphantom{d_b } } } \smashoperator[r]{\sum_{y_a , d_b } } w_{a , b}(y_a , u_a , d_b|u_a , u_b ) \mathcal{e}\\ & = p_e(w_{a , b } ) , \end{aligned}\ ] ] where is by . the marginal synthetic channels and given by note that by double symmetry a joint distribution whose marginals satisfy is called _symmetrized_. 
the name ` symmetrized ' stems from comparison of and .we note that holds for .a symmetrized joint distribution remains symmetrized upon polarization .that is , if is a symmetrized joint distribution and , is the result of jointly polarizing it , then the marginals and satisfy .this is easily seen from and .clearly , is _ degraded _ with respect to , exactly the opposite of our main thrust .nevertheless , as established in , both channels have the same probability of error under sc ( imjp ) decoding .moreover , if we upgrade the symmetrized version of the channel , its probability of error under imjp decoding lower - bounds the probability of error of the non - symmetrized channel under either sc or imjp decoding . what is nt immediately obvious, however , is what happens after polarization .i.e. , if we take a joint channel , symmetrize it , and then polarize it , how does its probability of error compare to the original joint channel that has just undergone polarization ?furthermore , what happens if the symmetrized version undergoes an upgrading transform ?[ prop_symmetrizing the joint distribution yields a lower bound ] let be a joint distribution of two synthetic channels and let denote this joint distribution after a sequence of joint polarization steps. then , where is the distribution of after the same sequence of polarization steps and any number of proper upgrading transforms along the way .let and be the polarized versions of and , respectively . for the -channel , the decoder makes the same decision for either or .this is because the decision is based on the b - channel -value , which is unaffected by symmetrization ( see ) .next , for the channel , using on a derivation similar to the proof of , , where is any combination of an element of and an element of .i.e. , is any one of , , , .thus , the imjp decoder makes the same decision for the -channel for either or .we compare the channels obtained by the following two procedures .* _ procedure 1 : _ joint channel goes through sequence of polarization steps . * _ procedure 2 : _ joint channel is symmetrized to form .it goes through sequence of polarization steps ( without any further symmetrization operations ) .we iteratively apply the above reasoning and conclude in a similar manner to that both channels have the same performance under imjp decoding .next , we modify procedure 2 .* _ procedure 2a : _ joint channel is symmetrized to form .it goes through sequence of polarization steps ( without any further symmetrization operations ) , but at some point mid - sequence , it undergoes a proper upgrading procedure .since polarizing and proper upgrading is equivalent to proper upgrading and polarizing , , we can assume that the upgrading happens after the entire sequence of polarization steps .thus , under imjp decoding , the probability of error of the channel that results from procedure 2a lower - bounds the probability of error of the channels resulting from procedures 1 and 2 .similarly , multiple upgrading transforms can also be thought of as occurring after all polarization steps .[ cor_procedure leads to lower bound ] let be a bms channel that undergoes polarization steps .let be the joint distribution of two of its polar descendants , and let .then . a direct consequence of combined with . due to , we henceforth assume that joint channel is symmetrized , and no longer distinguish symmetrized channels or symbols by the symbol . 
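once a lower bound on the joint error probability has been computed for every pair of non - frozen indices ( in practice from the upgraded , symmetrized joint channels , as in the corollary above ) , the final bound is a simple maximization ; the sketch below shows only that bookkeeping , with hypothetical per - pair numbers standing in for the outputs of the procedure .

```python
# Combining per-pair lower bounds into a bound on the block error probability.
# lower_pair[(i, j)] is assumed to lower-bound P(B_i or B_j) for non-frozen i < j
# (in practice it would come from the polarize/upgrade recursion); p_single[i]
# is the per-channel error probability used in the union bound. Numbers are toy.
p_single = {3: 2.1e-4, 5: 1.3e-4, 6: 4.0e-4}
lower_pair = {(3, 5): 2.6e-4, (3, 6): 4.9e-4, (5, 6): 4.4e-4}

lower = max(max(p_single.values()), max(lower_pair.values()))
upper = min(1.0, sum(p_single.values()))
print(lower, "<= P(block error) <=", upper)
```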
replacing the joint channel with its symmetrized versionneed only be performed once , at the first instance the two channels go through different polarization transforms . _implementation : _ since symmetrization is performed only once , and since this invariably happens when converting a channel to , we find the a- , b- , and ab - conjugates using the results of .we then form the symmetrized channel using .note that it is sufficient to find just the b - conjugates and use the first equation of .let the joint channel be , which , as mentioned above , we assume to be symmetrized .we have in which we used the independence and uniformity of the input bits and .the distribution is given by whenever is nonzero , distribution is obtained by dividing by .our notation ( with a semicolon , as opposed to ) reminds us that for fixed , is binary - input channel with input and output . if for some , we define to be some arbitrary bms channel , to ensure it is always a valid channel .since the joint channel is symmetrized , by we have .hence , for any , i.e. , a consequence of symmetrization is that given , becomes _ independent _ of .this is not true in the general case where the joint channel is not symmetrized .the decomposition of essentially decouples the symmetrized joint channel to a product of two distributions .[ lem_decomposition of symmetrized distribution ] let be a symmetrized joint distribution .it admits the decomposition for any , is a bms channel with input and output , i.e. , moreover , satisfies using in yields .the remainder of this lemma is readily obtained by using in .[ def_decoupling decomposition ] a decomposition of the form for a symmetrized joint distribution is called a _decoupling decomposition_. channel is obtained by marginalization , for any .channel is obtained by dividing by if is nonzero , and set to an arbitrary bms channel , e.g. , some bsc , if . but for some .] when setting to an arbitrary channel , we make sure not to add new b - channel -values .we use decoupling decompositions of symmetrized joint distributions in the sequel .in this section , we introduce proper upgrading procedures for joint synthetic channels .the overall goal is to reduce the alphabet size of the joint distribution .the upgrading procedures we develop enable us to reduce the alphabet size of each of the marginals without changing the distribution of the other ; there is a different procedure for each marginal . as an intermediate step , we further couple the marginals by increasing the alphabet size of one of them .the joint channel is assumed to be symmetrized and in -value representation .the upgrading procedures will maintain this . as discussed in, we do not distinguish symmetrized channels with any special symbol .the upgrading procedure of hinges on symmetrization .the upgrading procedure of does not require symmetrization and holds for non - symmetrized channels without change .however , we shall see that symmetrization simplifies the resulting expressions .we now introduce a theorem that enables us to deduce an upgrading procedure that upgrades and reduces its output alphabet size .let symmetrized joint channel admit decoupling decomposition .let be another symmetrized joint channel , where represents the -value of the b - channel output .it also admits a decoupling decomposition , [ thm_upgrading wa ] let and be symmetrized joint distributions with decoupling decompositions and , respectively .then , if 1 .channel with degrading channel .2 . channel for all such that . 
before going into the proof ,some comments are in order .first , we do not claim that any that is upgraded from must satisfy this theorem .second , the meaning of the second item is that , for fixed , bms channel with binary input is upgraded from a set of bms channels with the same binary input . using decoupling decompositions and and the structure of a proper degrading channel , channel if and only if there exist , such that where we now find and from the conditions of the theorem. the first condition of the theorem implies that there exists a channel such that the second condition of the theorem implies that for each there exists a channel such that we set using in , we have it is easily verified that is satisfied by and this , completing the proof . how might one use to upgrade the a - channel ?a naive way would be to first upgrade the marginal to using some known method ( e.g. , the methods of , see appendix [ ap_bms channel upgrades ] ) .this yields degrading channel by which one can find channel that satisfies . with and at hand ,one forms the product to obtain .if the reader were to attempt to do this , she would find out that it often changes the b - channel .moreover , this change may be radical : the resulting b - channel may be so upgraded to become almost noiseless , which boils down to an uninteresting bound , the trivial lower bound .it _ is _ possible to upgrade the a - channel without changing the b - channel ; this requires an additional transform we now introduce .the _ upgrade - couple _ transform enables upgrading the a - channel without changing the b - channel .the idea is to split each a - channel symbol to several classes , according to the possible b - channel outputs .symbols within a class have the same channel , so that confining upgrade - merges to operate within a class inherently satisfies the second condition of .thus , we circumvent changes to the b - channel .this results in only a modest increase to the number of output symbols of the overall joint distribution .let channel have possible -values , .we assume that erasure symbols are duplicated , and . for each a - channel symbol we define upgrade - couple symbols , .the new symbols _ couple _ the outputs of the a- and b - channels ( whence the name of the upgrade - couple tranform ) .namely , if the a - channel output is and , the b - channel output can only be ; if the a - channel output is and , the b - channel output can only be .the upgrade - couple channel is defined by where {\sum_{d = \pm d_{bj } } } w_2(d|u_b;y_a , \bar{u}_a ) & \begin{aligned } u_a&=0 , \\[-0.1 cm ] d_b&=\pm d_{bi}\end{aligned } \\\smashoperator[r]{\sum_{d = \pm d_{bi } } } w_2(d|u_b;y_a , \bar{u}_a ) & \begin{aligned } u_a&=1 , \\[-0.1 cm ] d_b&=\pm d_{bj}\end{aligned}\\ 0 & \text{otherwise , } \end{dcases}\ ] ] and is from the decoupling decomposition of in .the factor is indeed independent of due to symmetry .as we now show , since is symmetrized , so is .[ lem_upgrade couple channel is symmetrized ] let be a symmetrized joint distribution .then , , defined as in , is also symmetrized . to establish the lemma, we need to show that holds for the upgrade - couple channel . for the a - channel ,let symbols be conjugates , i.e. , .channel is symmetrized , so it satisfies under which .furthermore , obviously .thus , next , recall that , so that .thus , holds as required . 
in the proof ofwe have seen that the conjugate symbol of is ( with the order of and flipped ) .we summarize this in the following corollary .[ cor_conjugate symbols of upgrade - couple channel ] if then . since is symmetrized , it admits decoupling decomposition in we derive ( see ) and establish that for every , \mathrm{bsc}\left(\frac{1-d_{bj}}{2}\right ) & u_a = 1 .\end{dcases}\label{eq_w2 for decoupling decomposition}\ ] ] i.e. , when we have , when we have , and is zero for any other .we remark that we define using even if .[ lem_properties of upgrade - couple ] let be a symmetrized joint distribution and let be defined as in , with decoupling decomposition. then 1 .joint channel is upgraded from joint channel with a proper degrading channel that deterministically maps to .symbols of channel and of channel have the same a - channel -value for every such that .3 . for every ,bms channel with input and output is if and if .for the first item , note that . by summing over obtain i.e. , joint channel is upgraded from with degrading channel that deterministically maps to .this is a proper degrading channel . for the second item, we marginalize over and and obtain that for every , where whenever , we have .thus , implying that and have the same a - channel -value for their respective channels . for the final item ,if , we are free to set as we please , so we set it as per the item. otherwise , there are only two values of for which is nonzero .hence , can output only two b - channel -values for fixed and .thus , is a bms channel with only two possible outputs , or , in other words , a bsc . a bsc that outputs -values , , has crossover probability .this establishes the item .the canonical channel of channel has a single entry for each -value .i.e. , denoting by the set of symbols whose -value is , we have it can be shown that a channel is equivalent to its canonical form , i.e. , each form can be degraded from the other .[ cor_wbstar and wbhatstar are the same ] the canonical b - channels of and coincide .this is a direct consequence of the first item of : the canonical a - channels of and coincide . a direct consequence of the second item of , using , and noting that for any .the _ class _ is the set of symbols with fixed .there are classes .the size of each class is the number of symbols . by, is the _ same _ bsc for all symbols of class and fixed .thus , the second item of becomes trivial and is immediately satisfied if we use an upgrading procedure that upgrade - merges several symbols of the same class . 
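to make the class - based reduction concrete , the following minimal python sketch ( our illustration , not the paper s implementation ) holds one class as a list of ( a - channel d - value , probability ) pairs and repeatedly removes the middle symbol of the three consecutive d - values with the smallest span , splitting its mass between the outer two in proportion to ( d_k - d_j ) / ( d_k - d_i ) and ( d_j - d_i ) / ( d_k - d_i ) . this splitting rule is how we read the upgrade - merge-3 procedure quoted later in the appendix , while the greedy choice of which triple to merge and the stopping size mu_a are our own assumptions ; the mirror operation on the conjugate class ( the one with negated b - channel d - value ) is omitted for brevity .

def upgrade_merge_3(symbols, i):
    """Remove the middle symbol of the consecutive triple (i, i+1, i+2).

    `symbols` is a list of (d, p) pairs sorted by increasing d-value; the middle
    mass is split between the outer symbols so that both the total probability
    and the mean d-value of the class are preserved.
    """
    (d_i, p_i), (d_j, p_j), (d_k, p_k) = symbols[i], symbols[i + 1], symbols[i + 2]
    if d_k == d_i:                       # degenerate triple: simply pool the mass
        return symbols[:i] + [(d_k, p_i + p_j + p_k)] + symbols[i + 3:]
    w = (d_k - d_j) / (d_k - d_i)        # fraction of the middle mass sent to d_i
    merged = [(d_i, p_i + w * p_j), (d_k, p_k + (1.0 - w) * p_j)]
    return symbols[:i] + merged + symbols[i + 3:]

def reduce_class(symbols, mu_a):
    """Greedily shrink one class to at most mu_a symbols (never below 2)."""
    symbols = sorted(symbols)
    while len(symbols) > max(mu_a, 2):
        spans = [symbols[i + 2][0] - symbols[i][0] for i in range(len(symbols) - 2)]
        symbols = upgrade_merge_3(symbols, spans.index(min(spans)))
    return symbols

# toy class: five a-channel d-values with their probabilities
cls = [(0.05, 0.1), (0.20, 0.3), (0.35, 0.2), (0.60, 0.3), (0.90, 0.1)]
small = reduce_class(cls, mu_a=3)
print(small, "total mass:", sum(p for _, p in small))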
to determine which upgrading proceduresmay be used , we turn to the degrading channel .so long as the degrading channel does not mix a symbol and its conjugate , the upgrading procedure can be confined to a single class .this is because conjugate symbols belong to different classes , as established in .thus , of the upgrading procedures of we can use either upgrade - merge-3 without restriction or upgrade - merge-2 provided that the two symbols to be merged have the same a - channel -value .[ thm_upgrade split ] let be some joint distribution with marginals and upgrade - couple counterpart .let obtained by an upgrade - merge-3 procedure .then there exists joint distribution with canonical marginals such that and .the idea is to confine the upgrading procedures to work within a class , utilizing over each class separately .assume that the upgrading procedure from to replaces symbols with symbols .we obtain by using for each class of separately .the a - channel upgrade procedure for class is upgrade - merge-3 from to that replaces symbols with symbols .as the upgrade is confined to symbols of the same class , the channel is the same regardless of , as established in , item 3 .hence , within a class the second item of is automatically satisfied with for all .channel is then obtained by the product of and as per : by properties of upgrade - merge-3 ( see in appendix [ ap_upgrade merge 3 ] ) we have therefore , where in we used the decoupling decomposition ; and are by , item 3 and by ; finally , is due to . to see that the canonical a - channel marginals coincide , note that by , item 2 , for any fixed , the symbols all have the same a - channel -valuelet be some a - channel -value , and let be the set of a - channel outputs whose a - channel -value is .then , where is a direct consequence of the expressions for upgrade - merge-3 and our construction of upgrading each class separately .to use , one begins with a design parameter that controls the output alphabet size .working one class at a time , one then applies upgrade operations in succession to reduce the class size to .the resulting channel , therefore , will have symbols overall .the canonical a - channel marginal that results from this operation will have at most symbols .the upgrade - merge-3 procedure replaces three conjugate symbol pairs with two conjugate symbol pairs .recall from that after the upgrade - couple transform , conjugate symbols belong to different classes .in particular , if and are a conjugate pair of the a - channel before the upgrade - couple transform , then and are a conjugate pair of the a - channel after the upgrade - couple transform .therefore , when one uses to replace the symbols one must also replace their conjugates we still always operate within a class as nowhere do we mix symbols from different classes .alternatively , one may upgrade only classes with and then use channel symmetry to obtain the upgraded forms of classes .there is one case where it is possible to use upgrade - merge-2 , as stated in the following corollary .[ cor_upgrade split ] also holds if the a - channel upgrade procedure is upgrade - merge-2 applied to two symbols of the same a - channel -value .while in general the upgrade - merge-2 procedure mixes a symbol and its conjugate , when the two symbols to be merged have the same a - channel -value this is no longer the case ( see appendix [ ap_upgrade merge 2 ] ) , and we can follow along the lines of the proof of .we omit the details .the reason that introduced both the upgrade - 
merge-2 and upgrade - merge-3 procedures despite the superiority of the latter stems from numerical issues . to implement upgrade - merge-3we must divide by the difference of the extremal -values to be merged . if these are very close this can lead to numerical errors .upgrade - merge-2 is not susceptible to such errors . on the other hand , upgrade - merge-2can not be used in the manner stated above ; it requires us to mix symbols from two classes and that may have wildly different channels .thus , this will undesirably upgrade the b - channel . in practice , however , we may be confronted with a triplet of symbols with very close , but not identical , a - channel -values . to avoid numerical issues ,we utilize a fourth nearby symbol .say that our triplet is with a - channel -values such that , for some `` closeness '' threshold .let have a - channel -value such that .then , we apply upgrade - merge-3 twice : first for obtaining with a - channel -values and then for , ending up with with a - channel -values . in this examplewe have chosen a fourth symbol with a greater a - channel -value than , but we could have similarly chosen a fourth symbol with a smaller a - channel -value than instead .we now show how to upgrade to channel such that marginal and marginal .the idea is to begin with , a channel equivalent to in which and are not explicit in the output .the channel is given by .we upgrade to using some known method , such that channel degrades to . to form upgraded channel , we `` split '' the outputs of to include and and find a degrading channel that degrades to .we shall see that the upgraded channel is given by where and are defined in , below . finally , we form the joint channel using .we illustrate this in .[ thm_wb can be upgraded to qb with ya and va intact ] let be a joint distribution where is the -value of the b - channel s output .let be a channel equivalent to , and let with degrading channel .then there exists joint channel such that and . we shall explicitly find and an appropriate degrading channel .the degrading channel will be of the form , i.e. , pass through the degrading channel unchanged .such degrading channels are proper .since we have , for any and , denote we assume that , for otherwise output never appears with positive probability and may be ignored , and define for each , we will shortly define constants such that and .similar to , we use these constants to define channel by indeed , .we now find the constants and an appropriate degrading channel such that which will establish our goal .let , and be such that the left - hand - side of is positive , there will always be at least one selection of for which the left - hand - side of is positive .] , so that .we shall see that the resulting expressions hold for the zero case as well . using and , we can rewrite as comparing this with , we set it is easily verified that and . 
using the expression for in yields is a valid probability distribution .we remark that is satisfied by and even when .we have found and a proper degrading channel as required . [ figure : diagram relating the channels q_b^* , w_b^* , q_b and w_b through the degrading channels and the maps acting on their outputs . ] in , the marginal a - channels of and coincide . by construction , the degrading channel from to does not change the a - channel output , implying that the a - channel marginal remains the same .to use , one begins with design parameter that controls the output alphabet size .the channel , with output alphabet of size , is obtained from using a sequence of upgrade operations . to obtain upgraded joint channel , one uses the theorem to turn them into a sequence of upgrade operations to be performed on channel .if one uses the techniques of , the upgrade operations will consist of upgrade - merge-2 and upgrade - merge-3 operations ( appendix [ ap_bms channel upgrades ] ) . in the following examples we apply specifically to these upgrades . for brevity , we will use the following notation : [ ex_upgrading based on upgrade - merge-2 ] the upgrade - merge-2 procedure of selects two conjugate symbol pairs and replaces them with a single conjugate symbol pair .the details of the transformation , in our notation , appear in appendix [ ap_upgrade merge 2 ] .let joint channel have b - channel marginal , in which all symbols with the same -value are combined to a single symbol .we select symbols and their respective conjugates , such that and upgrade to given by ( appendix [ ap_upgrade merge 2 ] ) .we denote by the output alphabet of and by the set the output alphabet of is ; outputs of represent -values . in particular , the -values of and are and , respectively . using , we form channel by { \mu_{{y_a , u_a}}^{\bar{z}_{bk}}}q_b^*(\bar{z}_{bk}|u_b ) & z_b = \bar{z}_{bk}\\[0.1 cm ] w_b(y_a , u_a , z_b|u_b ) & \text{otherwise , } \end{cases}\ ] ] where by , { \mu_{{y_a , u_a}}^{\bar{z}_{bk}}}&= \frac{\sum_{d \in \mathcal{d}_{z_{bk}}}\left({{\pi}_{{y_a , u_a}}^{d}}\cdot ( d_{bk } - d)\right)}{2({{\pi}_{}^{d_{bj } } } + { { \pi}_{}^{d_{bk}}})d_{bk}}. \end{aligned}\ ] ] we can simplify this when is a symmetrized channel . in this case , , yielding therefore , the upgraded joint channel becomes { \pi_{y_a , u_a}^{z_{bk } } } \left(\frac{1-(-1)^{u_b}d_{bk}}{2 } \right ) & z_b = \bar{z}_{bk } \\[0.1 cm ] w_b(y_a , u_a , z_b|u_b ) & \text{otherwise , } \end{dcases}\ ] ] where [ ex_upgrading based on upgrade - merge-3 ] the upgrade - merge-3 procedure replaces three conjugate symbol pairs with two conjugate symbol pairs .the details of the transformation , in our notation , appear in appendix [ ap_upgrade merge 3 ] .as above , let joint channel have b - channel marginal .for the upgrade procedure we select symbols and their respective conjugates , such that .. at least one of the inequalities or must be strict . ]
we upgrade to given by ( appendix [ ap_upgrade merge 2 ] ) .we denote by the output alphabet of and by the set the output alphabet of is ; outputs of represent -values . in particular , the -values of and and , respectively .assuming that is symmetrized , we form channel using as { \mu_{{y_a , u_a}}^{z_{bi}}}q_b^*(z_{bi}|u_b ) & z_b = z_{bi } \\[0.1 cm ] { \mu_{{y_a , u_a}}^{\bar{z}_{bi}}}q_b^*(\bar{z}_{bi}|u_b ) & z_b = \bar{z}_{bi } \\[0.1 cm ] { \mu_{{y_a , u_a}}^{\bar{z}_{bk}}}q_b^*(\bar{z}_{bk}|u_b ) & z_b = \bar{z}_{bk}\\[0.1 cm ] w_b(y_a , u_a , z_b|u_b ) & \text{otherwise , } \end{cases}\ ] ] where by , { \mu_{{y_a , u_a}}^{z_{bi}}}&= \frac{{{\pi}_{y_a , u_a}^{d_{bi } } } + \left(\frac{d_{bk}-d_{bj}}{d_{bk}-d_{bi } } \right){{\pi}_{y_a , u_a}^{d_{bj}}}}{{{\pi}_{}^{d_{bi } } } + \left(\frac{d_{bk}-d_{bj}}{d_{bk}-d_{bi } } \right){{\pi}_{}^{d_{bj } } } } , \end{aligned}\ ] ] and , .the latter two equalities are due to our assumption that is symmetrized . denoting upgraded joint channel is given by { \pi_{y_a , u_a}^{z_{bi}}}\left(\frac{1+(-1)^{u_b}d_{bi}}{2 } \right ) & z_b = z_{bi } \\[0.1 cm ] { \pi_{y_a , u_a}^{z_{bi}}}\left(\frac{1-(-1)^{u_b}d_{bi}}{2 } \right ) & z_b = \bar{z}_{bi } \\[0.1 cm ] { \pi_{y_a , u_a}^{z_{bk}}}\left(\frac{1-(-1)^{u_b}d_{bk}}{2 } \right ) & z_b = \bar{z}_{bk } \\[0.1 cm ] w_b(y_a , u_a , z_b|u_b ) & \text{otherwise . } \end{dcases}\ ] ] we observe from these examples an interesting parallel between the a - channel and b - channel upgrading procedures . in the former case ,we confine upgrade operations to a single class , in which the b - channel -values are fixed . in light of the above examples, the latter case may be viewed as confining upgrade procedures to `` classes '' in which are fixed .the previous sections have introduced several ingredients for building an overall procedure for obtaining a lower bound on the probability of error of polar codes under sc decoding .we now combine these ingredients and present the overall procedure .first , we lower - bound the probability of error of two synthetic channels . then, we show how to use lower bounds on channel pairs to obtain better lower bounds on the union of many error events .we now present an upgrading procedure for that results in channel with a smaller alphabet size .the procedure leverages the recursive nature of polar codes .the input to our procedure is bms channel , the number of polarization steps , the indices and of the a - channel and b - channel , respectively , and parameters and that control the output alphabet sizes of the a- and b - channels , respectively .the binary expansions of and are and , respectively .these expansions specify the order of polarization transforms to be performed , where implies a ` '-transform and implies a ` '-transform .the algorithm consists of a sequence of polarization and upgrading steps .after each polarization step , we bring the channel to -value representation , as described in . a side effect of polarizationis increase in alphabet size .the upgrading steps prevents the alphabet size of the channels from growing beyond a predetermined size . after the final upgrading stepwe obtain joint channel , which is properly upgraded from .we compute , which serves as a lower bound to .we recall that is the probability of error under sc decoding of the joint synthetic channel .this , in turn , lower - bounds ( see ) . provides a high - level description of the procedure .we begin by determining the first index for which and differ ( i.e. 
for and ) .the first polarization steps are of a single channel , as the a - channel and b - channel indices are the same .since these are single channels , we utilize the upgrading procedures of to reduce the output alphabet size . at the polarization step ,the a- and b - channels differ .we perform joint polarization described in and symmetrize the channel using .this symmetrization need only be performed once as subsequent polarizations maintain symmetrization ( ) .we then perform the b - channel upgrading procedure ( ) , which reduces the b - channel alphabet size to .following that , we upgrade the a - channel .as discussed in , this consists of two steps .first , we upgrade - couple the channel , to generate classes .second , for each class separately , we use the a - channel upgrade procedure until each class has at most elements ( see and ) .we confine the a - channel upgrade procedure to the class by utilizing only upgrade - merge-3 operations .-value we may also use the upgrade - merge-2 procedure . ] we continue to polarize and upgrade the joint distribution in this manner , until .after the final polarization and upgrading operation , we compute the probability of error of the imjp decoder for the resulting channel . the lower bound of this procedure compares favorably with the trivial lower bound , .this is because our upgrading procedure only ever changes one marginal , keeping the other intact .since it leverages upgrading transforms that can be used on single channels , the marginal channels obtained are the same as would be obtained on single channels using the same upgrading steps .thus , by this lower bound is at least as good as .when the bms is a bec , we can recover the bounds of and using our upgrading procedure .only a - channel upgrades are required , as the b - channel , in -value representation , remains a bec . for each a - channel symbol , the channel in is either a perfect channel or a pure - noise channel ( see in appendix [ ap_imjp for bec ] ) .thus , the upgrade - couple procedure splits the a - channel symbols to those that see a perfect channel regardless of and those that see a pure - noise channel regardless of . merging a - channel symbols of the same classis equivalent to merging a - channel symbols for which is the same type of channel .we thus merge a - channel symbols of the same a - channel -value that `` see '' the same type of b - channel .this corresponds to keeping track of the correlation between erasure events of the two channels .an initial step of is to upgrade the channel , even before any polarization operations .this step enables us to apply our algorithm on continuous - output channels , see ( * ? ? ?* section vi ) .recall that the probability of error of polar codes under sc decoding may be expressed as . in the previous section, we developed a lower bound on , which lower bounds .this lower bound may be strengthened by considering several pairs of synthetic channels and using .we now show how this can be done .[ lem_lower bound on union using unions of two events ] the probability of error of a union of events is lower bounded by the proof hinges on using the identity in .note that any set of numbers satisfies so that therefore , using this in yields the desired bound . in practice , we combine the lower bound of with .i.e. 
, we compute lower bounds on for all pairs of channels in some subset of the non - frozen set , and use over this subset .such bounds are highly dependent on the selection of the subset .one possible strategy is as follows .let be the set of worst channels in the non - frozen set for some . for each channel pair in , compute a lower bound on the joint probability of error using . then , form all possible subsets of ( there are such subsets ) and use for each subset . choose the subset that yields the highest bound as .the reason for going over all possible subsets is that bounds based on the inclusion - exclusion principle are not guaranteed to be higher than the highest pairwise probability , see .bounds on the probability of error of a polar code of length over a bsc are shown in .we designed the polar code for a bsc with crossover probability using the techniques of with quantization levels .as the non - frozen set we selected the channels with the smallest probability of error , to yield a code rate of approximately .this non - frozen set was fixed .then , for bscs with various crossover probabilities , we computed the following bounds . for the upper bound , we computed an upper bound on , and for the trivial lower bound we computed a lower bound on ; upper and lower bounds on were obtained using the techniques of . for the pair and combination lower bounds we used lower bounds on the imjp decoder , described in this paper .these were computed with and for all possible pairs of the worst channels in the non - frozen set .the pair lower bound is merely the highest probability of error of all pairs , whereas the combination lower bound is based on computed for the subset of these channels that yielded the highest bound .as one may observe , our bounds improve upon the previously known lower bound .further , the combination lower bound is , as may be expected , tighter than the pair lower bound . [ figure : upper bound , trivial lower bound , pair lower bound , and combination lower bound on the probability of error versus the bsc crossover probability ( 0.12 to 0.24 ) . ]
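the subset search just described is easy to prototype . the sketch below is our illustration and uses a second - order inclusion - exclusion ( bonferroni ) bound as a stand - in for the lemma of the previous subsection , which is not reproduced here : writing p(ei and ej) = p(ei) + p(ej) - p(ei or ej) , the bound p(union) >= sum_i p(ei) - sum_{i<j} p(ei and ej) becomes sum_{i<j} p(ei or ej) - (|s| - 2) sum_i p(ei) , so it remains a valid lower bound when evaluated with lower bounds on the pairwise unions and upper bounds on the single - channel error probabilities , both of which are computed above . the numbers in the example are made up .

from itertools import combinations

def bonferroni_union_lower_bound(subset, p_upper, pair_lower):
    """Second-order inclusion-exclusion lower bound on P(union of E_i, i in subset).

    `pair_lower[frozenset({i, j})]` is a lower bound on P(E_i or E_j) and
    `p_upper[i]` an upper bound on P(E_i), so the returned value is still a
    valid lower bound on the probability of the union.
    """
    s = len(subset)
    pair_sum = sum(pair_lower[frozenset(c)] for c in combinations(subset, 2))
    single_sum = sum(p_upper[i] for i in subset)
    return pair_sum - (s - 2) * single_sum

def combination_lower_bound(indices, p_upper, pair_lower):
    """Try every subset of size >= 2 of the q worst channels and keep the best bound."""
    best = 0.0
    for r in range(2, len(indices) + 1):
        for subset in combinations(indices, r):
            best = max(best, bonferroni_union_lower_bound(subset, p_upper, pair_lower))
    return best

# made-up bounds for three bad non-frozen channels
worst = [3, 11, 14]
p_upper = {3: 2.0e-3, 11: 1.5e-3, 14: 1.0e-3}        # upper bounds on P(E_i)
pair_lower = {frozenset({3, 11}): 3.0e-3,            # lower bounds on P(E_i or E_j)
              frozenset({3, 14}): 2.6e-3,
              frozenset({11, 14}): 2.2e-3}
print("pair bound:       ", max(pair_lower.values()))
print("combination bound:", combination_lower_bound(worst, p_upper, pair_lower))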
[ thm_for bec p(e1ue2 ) is identical to p(eml1ueml2 ) ] let and be two polar descendants of a bec in the same tier . then , the imjp and the iml ( sc ) decoders coincide . to prove this , we first show that for the bec erasures are determined by the received channel symbols , , and not previous bit decisions .this implies that for fixed , regardless of and in particular , either channel always experiences an erasure , or always experiences a non - erasure .if experiences an erasure , it does nt matter what decides in terms of the imjp decoder it may as well use an ml decoder ; if does not experience an erasure , then the best bet of is to use an ml decoder .this suggests that the iml and imjp decoders coincide .[ lem_for bec erasures are based on channel outputs only ] let be a polar descendant of a bec , .then , there exists a set , dependent only on , such that has an erasure if and only if . here , are the received channel symbols , and the previous bit decisions that are part of s output .let be the binary expansion of , with the msb .recall that channel is the result of polarization steps determined by , where is a ` '-transform and is a ` '-transform .consider first the case where , i.e. , .if then has an erasure if and only if at least one of is an erasure , i.e. , if and only if , .if then has an erasure if and only if both and are erasures , i.e. , if and only if , . therefore , the claim is true for .we proceed by induction .let the claim be true for : for , there exists a set such that has an erasure if and only if .if , then is the result of a ` '-transform of two bec channels , so it has an erasure if and only if at least one of them erases . in other words, has an erasure if and only if , . if , however , , then is the result of a ` '-transform of two bec channels , so it has an erasure if and only if both of them erase .in other words , has an erasure if and only if , .thus , the claim is true for as well , completing the proof . by, a decoder that minimizes is an ml decoder .it remains to show that a minimizing is also an ml decoder .marginalizing the joint distribution yields : the ml decoder for channel maximizes with respect to ; decoder , on the other hand , maximizes , defined in . using we recast the expression for in the same form as the expression for , by , whether has an erasure depends solely on the received channel symbols , which are wholly contained in , and not on previous bit decisions .in particular , in computing or , we either sum over only erasure symbols or over only non - erasure symbols .since is an ml decoder for , if is an erasure of then ; if is not an erasure of then . in either case , it is clear that the decision based on is identical to the ml decision .therefore , is an ml decoder as well , implying that the imjp decoder is an iml decoder .the decision of an ml decoder for a memoryless binary - input channel may be based on any sufficient statistic of the channel output .one well - known sufficient statistic is the log - likelihood ratio ( llr ) , .when is positive , the decoder declares that was transmitted ; when is negative , the decoder declares that was transmitted ; constitutes an erasure , at which the decoder makes some random choice .another sufficient statistic is the -value .the -value of output , , is given by clearly , .a maximum likelihood decoder makes its decision based on the sign of the -value . 
assuming a symmetric channel input , with probability , using bayes law on yields the input is binary , hence .consequently yields there is a one - to - one correspondence between and , or , equivalently , if channel is symmetric , for each output there is a conjugate output ; their llrs and -values are related : since the -value is a sufficient statistic of a bms channel , we may replace the channel output with its -value .thus , we may assume that the output of channel is a -value , i.e. , .in this case , we say that is in -value representation .recall that every bms channel can be decomposed into bscs ( * ? ? ?* theorem 2.1 ) .we can think of the output of a bms as consisting of the `` reliability '' of the bsc and its output .the absolute value of the -value corresponds to the bsc s reliability and its sign to the bsc output ( or ) .a comprehensive treatment of -values and llrs in relation to bms channels appears in ( * ? ? ?* chapter 4 ) .we state here in our notation the two upgrades of a bms channel from .let be a discrete bms whose outputs are -values , and let the probability of symbol be , . without loss of generality , . clearly , for all , and .moreover , .i.e. , this is a bms that decomposes to different bscs , with crossover probabilities , .bsc channel is selected with probability .we have and .the first upgrade - merge of takes two -values and merges them by transferring the probability of to .we call it _upgrade - merge-2_. channel is upgraded to channel ; the output alphabet of is and { { \pi}^{z_{k}}}\left(\frac{1 - ( -1)^u d_k}{2}\right ) & z = -z_k\\[0.1 cm ] w(z|u ) & \text{otherwise , } \end{dcases}\label{eq_formula for simple upgrade merge 2}\ ] ] where the degrading channel from to is shown in .we show only the portion of interest , i.e. , we do not show the symbols that this degrading channel does not change .the parameters of the degrading channel are indeed , and , so this constitutes a valid channel .note that if then .the second upgrade - merge of removes a -value by splitting its probability between a preceding -value and a succeeding -value .we call it _upgrade - merge-3_. unlike upgrade - merge-2 , at least one of these inequalities must be strict ( i.e. , either or ) .channel is upgraded to channel with output alphabet and { { \pi}^{z_i}}\left(\frac{1 + ( -1)^ud_{i}}{2}\right ) & z = z_{i } \\[0.1 cm ] { { \pi}^{z_i}}\left(\frac{1- ( -1)^ud_{i}}{2}\right ) & z = \bar{z}_{i } \\[0.1 cm ] { { \pi}^{z_k}}\left(\frac{1- ( -1)^ud_{k}}{2}\right ) & z = \bar{z}_{k } \\[0.1 cm ] w(z|u ) & \text{otherwise , } \end{dcases } \label{eq_formula for upgrade merge 3}\ ] ] where 0 & \ell = j \\[0.1 cm ] { { \pi}^{d_k } } + { { \pi}^{d_j}}\left(\frac{d_{j}-d_{i}}{d_{k}-d_{i } } \right ) & \ell = k \\[0.1 cm ] \pi_{\ell } & \text{otherwise . }\end{dcases}\ ] ] note that the degrading channel from to is shown in , showing only the interesting portion of the channel .the parameters of the channel are , and , .this is a valid channel as .e. arikan , `` channel polarization : a method for constructing capacity - achieving codes for symmetric binary - input memoryless channels , '' _ ieee transactions on information theory _ ,55 , no . 7 , pp .30513073 , july 2009 . r. mori and t. tanaka , `` performance and construction of polar codes on symmetric binary - input memoryless channels , '' in _ proc .ieee international symposium on information theory _ ,june 2009 , pp .
polar codes are a family of capacity - achieving codes that have explicit and low - complexity construction , encoding , and decoding algorithms . decoding of polar codes is based on the successive - cancellation decoder , which decodes in a bit - wise manner . a decoding error occurs when at least one bit is erroneously decoded . the various codeword bits are correlated , yet performance analysis of polar codes ignores this dependence : the upper bound is based on the union bound , and the lower bound is based on the worst - performing bit . improvement of the lower bound is afforded by considering error probabilities of two bits simultaneously . these are difficult to compute explicitly due to the large alphabet size inherent to polar codes . in this research we propose a method to lower - bound the error probabilities of bit pairs . we develop several transformations on pairs of synthetic channels that make the resultant synthetic channels amenable to alphabet reduction . our method improves upon currently known lower bounds for polar codes under successive - cancellation decoding . channel polarization , channel upgrading , lower bounds , polar codes , probability of error .
modeling and investigating the dynamics of populations is commonly viewed as one of the central topics of modern mathematical demography , population biology and ecology .having its origin in the works of malthus dating back to 1798 and historically preceded by fibonacci s elementary considerations from 1202 , the mathematical theory of population dynamics underwent rapid growth during the 19th and 20th centuries . among others , one should mention the works of sharpe ( 1911 ) , lotka ( 1911 and 1924 ) , volterra ( 1926 ) , mckendrick ( 1926 ) , kositzin ( late 1930s ) , fisher ( 1937 ) , kolmogorov ( 1937 ) , leslie ( 1945 ) , skellam ( 1950-s and 1970-s ) , keyfitz ( 1950-s through 1980-s ) , fredrickson & hoppensteadt ( 1971 and 1975 ) , gurtin ( 1973 ) , gurtin & maccamy ( 1981 ) , etc . for a detailed historical overview , we refer the reader to the monographs by ianelli and okubo & levin and references therein .the classical mckendrick - von foerster model ( often also referred to as the sharpe - lotka - mckendrick model ) reads as where stands for the population density of individuals of age , , at time .equation ( [ equation_sharpe_lotka_mckendrick_model ] ) as well as its nonlinear modifications and generalizations for the case of multiple competing populations have attracted a lot of attention . in particular , one should mention the works and monographs by arino , chan & guo , ianelli _ et al . _ , song _ et al . _ , webb , , etc .the questions addressed by these authors range from local and global existence and uniqueness studies , positivity and spectrum investigations as well as stability and asymptotics considerations to optimization and control problems , etc .the typical functional analytic framework for equation ( [ equation_gurtin_and_mac_camy_model ] ) is the lebesgue -space , .whereas most well - posedness results were obtained for and similarly hold for all , the hilbert - space case turns out to be more appropriate in some other cases ( cf . , ) . a generalization of ( [ equation_sharpe_lotka_mckendrick_model ] ) is given by gurtin & maccamy s model with spatial diffusion with denoting the density of the population individuals of age , , at space position of a spatial domain at time . global well - posedness and asymptotic behavior for equation ( [ equation_gurtin_and_mac_camy_model ] ) as well as its nonlinear and stochastic versions have been studied by busenberg & iannelli , chan & guo , kunisch _ et al . _ , langlais , etc .since equation ( [ equation_gurtin_and_mac_camy_model ] ) can be viewed as a `` hyperbolic - parabolic '' partial integro - differential equation , it is typically studied in for . in contrast to animal populations , the migration in modern human populations is essentially nonlocal , making it possible to ignore small fluctuations arising from the random walk and accounted for by the laplacian term in equation ( [ equation_gurtin_and_mac_camy_model ] ) . on the other hand , equation ( [ equation_gurtin_and_mac_camy_model ] ) is too unrealistic to be applied in demography since it does not account for the gender structure of the population . to address this shortcoming , sex - structured models were developed in the 1970s , mostly within the ode framework .one of the first pde models proposed is probably the one due to keyfitz .
in his article , he presented a straightforward generalization of mckendrick - von foerster model from equation ( [ equation_sharpe_lotka_mckendrick_model ] ) describing the temporal evolution of an age- and sex - structured population by the following system of partial integro - differential equations with here , ] be the maximal life expectancy for male or female individuals in the population , respectively .further , let , be the age domains for male or female individuals , respectively . for , let denote the total number of male individuals of age in the population .similarly , let denote the total number of female individuals of age .let and be the age - specific mortality moduli of male or female individuals of age or , respectively .further , let and describe the total number of male or female infants , respectively , born to all couples made up of males of age and females of age with the couples being not necessarily monogamous .assuming for some regular , with and performing for each a linearization of around for , we obtain the approximation with here , stand for the age- and sex - specific fertility moduli for male or female infants .usually , , since the influence of the male part of population is overwhelmingly nonlinear ( cf .further , let and be the net immigration of male or female individuals of age or , respectively , at time . with and describing the total number of male or female individuals of age or , respectively , in the population at the initial moment of time and and quantifying the net immigration of male or female individuals of age or at time , the evolution equations for read as here , equations ( [ equation_model_equation_1])([equation_model_equation_2 ] )represent a conservation law describing the natural ageing and migration whereas equations ( [ equation_model_equation_3])([equation_model_equation_4 ] ) stand for the so - called `` birth law '' being a boundary condition with a non - local term .finally , equations ( [ equation_model_equation_5])([equation_model_equation_6 ] ) prescribe the initial population structure .following , we assume to be lebesgue - integrable and define the survival probability for male or female individuals till the age or , respectively , as for and to vanish in or , respectively , we require that the integrals are divergent . for finite , this would mean , .in contrast to that , we have , both for finite and infnite .additionally , we impose the natural condition the latter is satisfied if exhibit a sufficiently rapid decay rate in or , respectively . thus , to avoid the necessity of working with weighted lebesgue- and sobolev spaces , similar to , we define the new variables introducing the age- and sex - specific maternity functions we can use equations ( [ equation_model_equation_1])([equation_model_equation_6 ] ) to easily verify that solves the problem where and this section , we want to prove the classical well - posedness in the sense of hadamard for ( [ equation_model_transformed_equation_1])([equation_model_transformed_equation_6 ] ) .to this end , we state the problem in a hilbert space setting and apply the operator semigroup theory ( see , ) .our approach differs inasmuch from the classical one ( see , e.g. , and references therein ) as we use the semigroup theory instead of fredholm integral equation theory to obtain the well - posedness .further , unlike other authors ( cf . 
, ) who also applied the semigroup theory to similar problems , we exploit only hilbert space techniques rather then working with the -space .though at first glance the -space might appear to be not the most intuitive choice since it the -norm can not be directly related to the population size , it provides more structure and thus facilitates the analytical and numerical treatment of the problem without being an actual restriction in demographical applications . in the following , we assume and .we consider the hilbert space endowed with the standard product topology .we define the operator given as with the domain equipped with the standard product topology on . here and in the sequel, will denote the standard scalar - valued ( see , e.g. , ( * ? ? ?* chapter 3 ) ) or banach - space - valued sobolev space ( cf , e.g. , ) . under the condition the expression ^{1/2 } \notag\ ] ] gives a seminorm on , being additionally a norm on the subspace of constant functions , and thus constitutes an equivalent norm on by virtue of the third poincar s inequality . due to the sobolev embedding theory ( cf .* theorem 4.12 ) ) , we know thus , is well - defined .the linearity of is also obvious . letting , , and , equations ( [ equation_model_transformed_equation_1])([equation_model_transformed_equation_6 ] )can now be equivalently written in the abstract form since we will observe that is closed and has a non - empty resolvent set ( cf .lemmas [ lemma_operator_a_dense_and_closed ] and [ lemma_a_minus_beta_m_dissipative ] below ) , by a well - known result on operator semigroups ( see , e.g. , ( * ? ? ?* theorem 3.1.12 ) ) , proving the classical well - posedness for the abstract cauchy problem ( [ equation_model_abstract_form ] ) and thus also for the original initial - boundary value problem ( [ equation_model_equation_1])([equation_model_equation_6 ] ) reduces to showing that is an infinitesimal generator of -semigroup of bounded linear operators on .[ lemma_operator_a_dense_and_closed ] the operator is densely defined and closed . * _ density : _ let and let be arbitrary .due to the density of test functions in and as well as the monotonicity of lebesgue integral , there exists a number such that for any there exist test functions , such that we let note that by the virtue of hlder s inequality , both and are absolutely and uniformly bounded with respect to by the number with + for , and , consider the measurable function }(a ) \notag\ ] ] with } ] . letting we observe , .now , the parameters , have to be selected such that holds true , i.e. , there suffices to fulfil the latter conditions are satisfied if estimating for and observing that the matrix is invertible with the operator norm of the inverse being uniformly bounded by if , i.e. , if , e.g. , we conclude that the linear system ( [ equation_condition_delta_theta_1 ] ) , ( [ equation_condition_delta_theta_3 ] ) is uniquely solvable for with hence , selecting all equations and inequalities in ( [ equation_condition_delta_theta_1])([equation_condition_delta_theta_4 ] ) are satisfied .thus , the constructed function lies in an -neighborhood of . 
* _ closedness : _ we consider the operator with by the virtue of sobolev embedding theorem , is a bounded linear operator .since is a closed subspace of and , the latter is a closed subspace of and thus a banach space .now , the operator is bounded linear map between the banach spaces and and therefore a closed linear operator .the proof is finished .[ lemma_a_minus_beta_m_dissipative ] for sufficiently large , the operator is m - dissipative . for and , we have thus , for with the operator is dissipative .next , we show that the operator is surjective for some .for , we solve for the equation multiplying equation ( [ equation_operator_equation ] ) with in , we obtain the weak formulation with the bilinear form given as now , we want to apply babuka - lax - milgram lemma to solve equation ( [ equation_elliptic_equation_weak_formulation ] ) .this amounts to showing that is continuous on and satisfies the inf - sup condition whereas the continuity of is obvious , the inf - sup - condition holds true if and only if there exist constants such that for any there exists such that indeed , let be arbitrary . for a sufficiently large , we look for satisfying where the condition dictates from equation ( [ equation_construction_of_u_for_babuska_lax_milgram ] ) , we obtain by the virtue of duhamel s formula for some constants . note that we trivially have equations ( [ equation_construction_of_u_for_babuska_lax_milgram_solution ] ) , ( [ equation_construction_of_u_for_babuska_lax_milgram_ic ] ) yield a linear system for the latter can be written as with since we can estimate there exists a number such that the matrix is invertible for all with its inverse matrix given as a neumann series .further , the vector is well - defined since moreover , we see that the expression linearly depends on whereas does not depend on . therefore , plugging this into equation ( [ equation_construction_of_u_for_babuska_lax_milgram_solution ] ) , we obtain a solution satisfying equations ( [ equation_construction_of_u_for_babuska_lax_milgram ] ) , ( [ equation_construction_of_u_for_babuska_lax_milgram_ic ] ) and thus lying in . by construction , we obtain and thus , the bilinear form satisfies the inf - sup - condition meaning that the operator is continuously invertible and therefore surjective .altogether we have shown that is m - dissipative for .taking into account lemmas [ lemma_operator_a_dense_and_closed ] and [ lemma_a_minus_beta_m_dissipative ] , we apply the theorem of lumer & phillips as well as the well - known perturbation result for bounded operators ( cf .* corollary 1.3 ) ) to conclude the operator is a generator of a -semigroup of bounded linear operators on .now , we exploit ( * ? ? ?* theorem 3.1.12 ) and ( * ? ? ?* corollary 3.1.17 ) and conclude [ theorem_strong_solution_existence_and_uniqueness ] assume that , .then there exists a unique mild solution to equation ( [ equation_operator_equation ] ) given as continuously depending on the data in sense of the existence of constants , such that if and , then there exists a constant such that equation ( [ equation_model_abstract_form ] ) possesses a unique classical solution , x ) \cap c^{0}([0 , t ] , d(a ) ) .\notag\ ] ] finally , we want to study the asymptotic behavior of solutions to ( [ equation_model_equation_1])([equation_model_equation_6 ] ) in the absense of immigration or emigration , i.e. 
, .we define the `` natural '' energy via and easily see that the exponential stability of the zero solution to ( [ equation_model_equation_1])([equation_model_equation_6 ] ) is equivalent with the exponential stability of the zero solution to ( [ equation_model_abstract_form ] ) whereas the latter holds true if and only if the semigroup is exponentially stable .assume that then the energy decays exponentially to zero for , i.e. , with since any initial data can be approximated by a sequence from , we assume without loss of generality that and denote by the corresponding unique classical solution of equation ( [ equation_model_abstract_form ] ) , which in its turn is a classical solution to ( [ equation_model_equation_1])([equation_model_equation_6 ] ) .we consider the lyapunov functional obviously , moreover , is frecht differentiable along the solution and due to equations ( [ equation_model_transformed_equation_1])([equation_model_transformed_equation_4 ] ) satisfies \\ & \leq -\tfrac{1}{2 } \sum_{\circledast \in \{{\text{\male } } , { \text{\female}}\ } } \big[\int_{a_{\circledast } } u_{\circledast}^{2}(t , a_{\circledast } ) \mathrm{d}a_{\circledast } \\ & -4 a_{\circledast } \sum_{\circledcirc \in \{{\text{\male } } , { \text{\female}}\ } } a_{\circledcirc } \|m_{\circledast \circledcirc}\|_{l^{\infty}(a_{\circledcirc})}^{2 } \int_{a_{\circledcirc } } u_{\circledcirc}^{2}(t , a_{\circledcirc } ) \mathrm{d}a_{\circledcirc}\big ] \\ & = -\sum_{\circledast \in \{{\text{\male } } , { \text{\female}}\ } } \big(1 - 4 \sum_{\circledcirc \in \{{\text{\male } } , { \text{\female}}\ } } a_{\circledast}^{\dag } a_{\circledcirc}^{\dag } \|m_{\circledast \circledcirc}\|_{l^{\infty}(a_{\circledcirc})}^{2}\big ) \int_{a_{\circledast } } u_{\circledast}^{2}(t , a_{\circledast } ) \mathrm{d}a_{\circledast } \\ & = -\min\big\{1 - 4 \sum_{\circledcirc \in \{{\text{\male } } , { \text{\female}}\ } } a_{\circledast}^{\dag } a_{\circledcirc}^{\dag } \|m_{\circledast \circledcirc}\|_{l^{\infty}(a_{\circledcirc})}^{2 } \,\big|\ , \circledast \in \{{\text{\male } } , { \text{\female}}\}\big\ } e(t , u ) = -2\alpha f(t , u ) , \end{aligned}\ ] ] where we performed an integration by parts and used young s and hlder s inequalities . now ,applying gronwall s inequality , we obtain which was our claim .in this section , we propose an implicit finite difference method to numerically solve the initial - boundary value problem ( [ equation_model_equation_1])([equation_model_equation_6 ] ) . under minimal regularity assumptions on the data ,we show the scheme to be convergent . in our investigations, we decided to depart from the standard approach of assuming the -differentiability of solutions ( cf ., e.g. , ) , since , to assure for this high regularity of solutions , one would require in addition to an extra smoothness condition on the data and system parameters some rather restrictive compatibility conditions on and which are usually not satisfied in real applications .though finite difference discretizations of equations ( [ equation_model_equation_1])([equation_model_equation_6 ] ) satisfy the courant - friedrichs - levy condition , we decided to use an implicit scheme instead of an explicit one to assure for better stability on long time horizons .to the authors best knowledge , earlier works ( viz . , , etc . )do not provide a rigorous convergence study for the implicit scheme in -settings , in particular , under minimal regularity assumptions . for studies on explicit schemeswe refer the reader to , . 
throughout this section , we assume that for and , c^{0}(\bar{a}_{{\text{\male } } } ) \times c^{0}(\bar{a}_{{\text{\female}}})\big ) .\notag\ ] ] then , the conditions of theorem [ theorem_strong_solution_existence_and_uniqueness ] are trivially fulfilled and we obtain a unique strong solution of equation ( [ equation_model_abstract_form ] ) . again , it should be stressed that no compatibility conditions are required here . selecting the age discretization steps we define the equidistant age lattices as well as their `` interiors '' and `` boundaries '' for . in this section ,we adopt the notation from the appendix letting denote discrete lebesgue spaces . for each time ,the functions and will be approximated by the lattice functions for . using the backwards difference approximation for the age derivatives and a riemann sum discretization for the integral ,we obtain the following semi - discretization with respect to the age variables with and approximating and , respectively , and being an approximation for for .we let and define the restriction operators further , we introduce the linear operators and by the means of (a^{h_{\circledast}}_{h_{\circledast } , i_{\circledast } } ) & = \begin{cases } -\tfrac{\stackrel{\circ}{u^{h}}_{\circledast , 1}^{h_{\circledast } } - \stackrel{\circ}{b^{h}_{\circledast } } \stackrel{\circ}{u^{h}}}{h_{\circledast } } , & i_{\circledast } = 1 \\-\tfrac{\stackrel{\circ}{u^{h}}_{\circledast , i_{\circledast}}^{h_{\circledast } } - \stackrel{\circ}{u}_{\circledast , i_{\circledast } - 1}^{h_{\circledast}}}{h_{\circledast } } , & i_{\circledast } \in \{2 , \dots , n_{\circledast , h_{\circledast}}\ } \end{cases } \text { for } \circledast \in \{{\text{\male } } , { \text{\female}}\ } \notag\end{aligned}\ ] ] where is equipped with the inner product hence , equations ( [ equation_model_transformed_discretized_equation_1])([equation_model_transformed_discretized_equation_3 ] ) can be equivalently transformed to where and are approximations of and , respectively . for , we consider a time step with and define the time lattice as well as its `` interior '' . the functions \to x^{h} ] will be approximated by . for ] , being in general some lebesgue equivalence classes , have a continuous representative and thus can be evaluated pointwise .[ lemma_consistency_for_a_h ] for let . then + 1 . as .2 . {\circledast}(a_{\circledast , i_{\circledast}}^{h_{\circledast } } ) - a_{\circledast}(a_{\circledast})\big)^{2 } \mathrm{d}a_{\circledast } \to 0 ] and let , x ) \cap c^{0}([0 , t ] , d(a)) ] .thus , can not be restricted onto the time - space grid whereas it is possible to restrict onto the time grid obtaining an -valued function . for , with , we denote and for [ theorem_consistency ] there holds splitting the norms of each of the three components , adding and subtracting and in the second and third group of terms in for all , , , using the definition of and equations ( [ equation_model_transformed_equation_1])([equation_model_transformed_equation_6 ] ) , applying lemma [ lemma_consistency_for_a_h ] and lemma [ lemma_appendix_difference_operator_estimate ] and exploiting the cauchy & schwarz inequality , we get as .our stability investigations are very much related to deducing a resolvent estimate in section [ section_well_posedness_and_asymptotics ] . 
whereas the latter was obtained using multiplier techniques based on partial integration , a summation by parts formula will be expoloited here to obtain a uniform resolvent estimate for .further , a uniform -estimate for the numerical solution based on the rational approximation for the corresponding -semigroup will be shown . together with the consistency result from the previous subsection, this will lead to the unconditional convergence of the implicit scheme .we let for .+ [ lemma_estimate_for_a_h ] for any , there holds for any let and let be an arbitrary number to be fixed latter . using lemma [ lemma_summation_by_parts ], we can estimate hence , the claim follows now for . from lemma [ lemma_estimate_for_a_h ], we get using ( * ? ? ? * theorem 4.2 ) the following resolvent estimate for . [ corollary_estimate_for_the_resolvent_of_a_h ] for , the operator is continuously invertible with now , we can prove the following unconditional stability result .[ theorem_stability ] let ] and let be the corresponding unique classical solution . for ] and . to verify our model and test the numerical scheme , we ran a numerical simulation to predict the growth of the united states population over the decade between 2001 and 2011 .the information on the population structure in 2001 and 2011 was obtained from the international data base of the u.s .bureau of census ( last updated in december 2013 ) . during the whole period of 20012011 ,the age - specific survival probabilities both for men and women were assumed to be constantly equal to those reported for 2011 in ( * ? ? ?* table 1 , pp .202203 ) .the birth rates by age of mother were selected to be constantly equal to those reported for 2008 in ( * ? ? ?* table 4 , p. 52 ) .the sex ratio was chosen as 1.05 ( cf .the annual net immigration was selected as the average net immigration over the period 20012009 as reported in ( * ? ? ?* table 2 ) . due to the lack of more accurate information , the age and sex structure of the newcomer immigrants cohortwas assumed to be the same as of those immigrants who have already dwelled in the u.s . in 2001 or before ( see ) . unless the data were divided into single - year age groups , the average value in each of the groups was computed to estimate each of the single - year values .using the age - specific survival probabilities , all system data and parameters were transformed to the form ( [ equation_full_discretization_algebraic_1])([equation_full_discretization_algebraic_3 ] ) .both age and time steps were chosen as . based on this selection , we linearly interpolated the data onto the grid .subsequently , equations ( [ equation_full_discretization_algebraic_1])([equation_full_discretization_algebraic_3 ] ) were solved using the crank & nicholson method corresponding to selecting and the output was back - transformed using the age - specific survival probabilities . finally , we restricted the simulation results onto the single - year - spaced grid .our ` matlab`-code can be downloaded from ` mathworks ` under ` http://www.mathworks.com/matlabcentral/fileexchange/48072 ` table [ table_us_population_2011 ] below gives a comparison between the total male and female population in the u.s . as reported by and as estimated from our simulation .as table [ table_us_population_2011 ] suggests , we underestimated both the male and female population by merely 2.54% and 2.82% , respectively . 
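for concreteness , the following much - reduced python sketch mimics the kind of computation described above for a single sex and made - up parameters ( it is neither the matlab code nor the u.s . data ) : the density is divided by the survival probability , the resulting transport equation with its nonlocal birth boundary condition is discretized by backward differences in age with a riemann sum for the birth integral folded into the first row of the generator , the system is stepped by the theta - method with theta = 1/2 ( the crank - nicolson choice used above ) , and the result is multiplied back by the survival probability . the paper s two - sex system stacks a male and a female copy of this block , coupled through the maternity terms .

import numpy as np

# illustrative parameters only (not the u.s. data used in the paper)
a_max, h, k, years, theta = 100.0, 1.0, 1.0, 10, 0.5
ages = np.arange(h, a_max + h, h)                 # age grid a_1, ..., a_n
n = ages.size

mu = 0.001 + 0.0001 * np.exp(0.08 * ages)         # made-up mortality modulus
ell = np.exp(-np.cumsum(mu) * h)                  # survival probability up to age a_i
beta = np.where((ages >= 15) & (ages <= 45), 0.06, 0.0)   # made-up fertility modulus
m = beta * ell                                    # maternity function of the transformed system

# semi-discrete generator: u_t = -u_a with birth condition u(t, 0) = sum_j m_j u_j h
A = np.zeros((n, n))
A[0, :] = m                                       # Riemann sum of the birth integral
A[0, 0] += -1.0 / h
for i in range(1, n):
    A[i, i - 1], A[i, i] = 1.0 / h, -1.0 / h

p = 1000.0 * np.exp(-0.01 * ages)                 # made-up initial age profile
g = 5.0 * np.exp(-0.002 * (ages - 30.0) ** 2)     # made-up net immigration profile

u, f = p / ell, g / ell                           # transform by the survival probability
lhs = np.eye(n) - k * theta * A                   # implicit part of the theta-step
rhs = np.eye(n) + k * (1.0 - theta) * A           # explicit part of the theta-step
for _ in range(int(years / k)):
    u = np.linalg.solve(lhs, rhs @ u + k * f)

p = ell * u                                       # back-transform to the population density
print("total population after %d years: %.0f" % (years, p.sum() * h))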
probably , this is due to the fact that the immigration data are not sufficiently reliable and tend to be somewhat underestimated in official surveys .though not being perfect , our estimate seems to outperform the expected precision of 4.1% described in for the decade 1970 - 1980 . thus , our prediction seems to be rather accurate even without accounting for the official marital status of population members , unlike . [ table_us_population_2011 : summary on the u.s . population in 2011 . ] finally , figure [ figure_population_2011_reported ] displays the u.s . population in 2011 as reported in , whereas figure [ figure_population_2011_simulated ] depicts the outcome of our numerical simulation for the same year .both figures seem to be in good accordance with each other , though the reported population looks somewhat `` spiky '' .statistically , the latter can be explained by the fact that the data are binned and thus can exhibit such roughness patterns due to grouping ( cf . * chapter 2 ) ) .let be a bounded interval and let be a hilbert space .for such that , let be partitioned by an equidistant lattice with , .we define the discrete lebesgue -space for , we simply write . letting and , we define the backwards and forwards difference operators respectively .note that both and are linear , bounded operators from to and , respectively , by the virtue of the sobolev embedding theorem .we have the well - known summation by parts formula : [ lemma_summation_by_parts ] for , there holds as an immediate consequence of ( * ? ? ? * propositions 1.1.6 and 1.2.2 ) , we have the following two lemmas [ lemma_appendix_difference_operator_estimate ] for any , there holds [ lemma_appendix_integration_operator_estimate ] let . for any , there holds work has been funded by a research grant from the young scholar fund supported by the deutsche forschungsgemeinschaft ( zuk 52/2 ) at the university of konstanz , konstanz , germany .
we study a linear model of mckendrick - von foerster - keyfitz type for the temporal development of the age structure of a two - sex human population . for the underlying system of partial integro - differential equations , we exploit the semigroup theory to show the classical well - posedness and asymptotic stability in a hilbert space framework under appropriate conditions on the age - specific mortality and fertility moduli . finally , we propose an implicit finite difference scheme to numerically solve this problem and prove its convergence under minimal regularity assumptions . a real data application is also given . * key words : * population dynamics , partial integro - differential equations , well - posedness , exponential stability , finite difference scheme , numerical convergence * ams : * 35m33 , 35a09 , 35q92 , 65m06 , 65m12 , 65m20
we have created pixel level annotation of word images publicly available for download , specifically for word image segmentation . we have annotated different datasets consisting of different kinds of word images . to our knowledge ,annotation at pixel level and among several datasets has not been carried out , until now .small subsets from different datasets have been annotated and utilized for algorithms .we have annotated 3606 word images at pixel level .annotation is not fully automated .hence , it is a huge task as compared to similar tasks in computer vision or document imaging community .a human being requires a very short time to analyze any given image . to perform similar analysis by a computer algorithmis not simple .people analyze images using both top - down and bottom - up paradigms . combining these two approachesis not an easy task .we often read that top - down is far better than bottom - up approach in image analysis .the relative contribution of top - down and bottom - up approaches in human vision is clearly unknown .an approach is developed to understand this contribution . for this approach, it was essential to annotate word images at the pixel level .we split the recognition of word from images into segmentation and recognition tasks .the term ` binarization ' is commonly used in place of segmentation .we require complex algorithms to segment an image . in document imaging community ,conventional research primarily focused on digitization of scanned documents .it involved binarization of document image and recognition . in the section on annotation , we discuss known algorithms for segmentation of word images . these algorithms were helpful in improving the speed of pixel level annotation .annotated pixel level word images can be used to train and test any classifier .however , several good optical character recognition ( ocr ) engines are already available for roman script .hence , we focus only on the annotation algorithm and annotating datasets. necessity of annotation arises during benchmarking datasets .earlier to our pixel level approach , scene - text images have largely been annotated using bounding box approach .it makes annotation an easier task . of coursefew datasets do provide pixel level annotation , but they do not cover thousands of images .the annotated images are passed on to the recognition stage .the recognition step can be performed using a training dataset or an ocr engine .we use the trial version of omnipage professional 16 ocr for recognition of characters in the binarized image to create the benchmark recognition result . definitely the numbers will slightly vary if we use any other standard ocr and hence the benchmark results we report here indicate a rough level of recognition that can be achieved , rather than the exact maximum value attainable in current circumstances . if a single dataset is used in the experiments , it may lead to a dataset specific approach .so , to justify our approach that annotation is dataset independent , we cover five datasets for benchmarking .either top - down or bottom - up approach is used in some datasets and both in others .these datasets are from icdar 2003 competition , icdar 2011 competition , street view and sign evaluation datasets .when a camera captured image is presented to an ocr engine , the recognition performance is not necessarily very good . 
this led to splitingthe process of word recognition in camera captured images into two parts , namely localization ( or detection ) and recognition by lucas et .al .in international conference on document analysis and recognition ( icdar ) 2003 , they organized separate competitions for text localization on camera captured images and recognition from the word images extracted by placing a bounding box on the image .they received five entries for text localization and none for word recognition . in the following icdar 2005 conference ,text localization was the main theme and word recognition was skipped .there are several publicly available datasets for text localization .these datasets are known as iapr tc11 reading systems - datasets .one may assume that the bounding box information of a word is sufficient for any ocr to recognize .however we see that the best performing algorithm on icdar 2003 sample word image dataset ( not the test set ) has the word recognition rate of only around 52% , without post processing using lexicon .recently held icdar 2011 , robust reading challenge 2 reports that the best word recognition rate is 41.2% .figure [ examples ] shows sample word images from this challenge .+ + karatzas et .al initiated another robust reading challenge in icdar 2011 for born - digital images .born - digital images are formed by a software by overlaying text on an image .for the competition , these images were collected from web pages and email .most words present in this dataset are oriented horizontally .the reason behind horizontal placement of text may be the simplicity involved in creating the born - digital image using standard softwares .low resolution of text and anti - aliasing are the main issues to be tackled in born - digital images , whereas illumination changes and motion blur are difficult problems in the case of camera captured images .these issues indicate the complexity involved in processing born - digital and camera captured scenic images .an attempt for using top - down approach in word recognition can be observed in , sparse belief propagation with lexicon for word recongition by weinmann et .al .similarly , wang et .al use limited lexicon on street view text ( svt ) dataset . both , weinmann et .al and wang et .al , use top - down approach for word recognition .they use an unsegmented character dataset to train a classifier .if the confidence of the character classifier is less , then top - down approach helps in classification using lexicon .weinmann et .al use character level image annotation of the training data and textual features to classify the testing dataset .a limitation of this method is that it requires good quality character images with high resolution for training ; else the classification will be erroneous .al used amazon s mechanical turk for annotation of svt images .bounding box was placed around the word spotted .the placement of these bounding boxes was not defined rigorously .the resulting irregularities in the word bounding boxes add additional complexity to the segmentation task and can be inferred from the low f - score reported . in the section on benchmark results of svt dataset , we discuss as to how one can avoid this complexity .benchmarking is not a good idea , if annotation is not explicitly defined rigorously .we took five different datasets which have different definitions for bounding box and contain human errors while annotating bounding boxes for words . 
our pixel level segmentation and annotationhas been cross - checked thoroughly to reduce human errors to a minimum .a multi - script annotation toolkit for scenic text ( mast ) was developed by mile lab in 2011 .it can be used to annotate scenic images .mast has the facilities to annotate multiple scenic images or scenic word images .it has options for annotating multiple scripts .it has the additional capability for adding plug - ins with suitable layout for new scripts during annotation .it is publicly available for download .mast - ch , an enhanced version of character annotation tool kit has been recently developed by us .we discuss the differences between the two programs .mast is designed to annotate scenic images with multiple word images with reasonably good resolution . using seed points input by the user ,the tool uses region growing and annotates at the pixel level , with a bounding box and text annotation for multiple scripts . on the other hand ,mast - ch handles a single word image at a time and annotates characters at the pixel level using multiple segmentation methods and user selection of output .it does not have provision to generate the text annotation for different scripts .since some of the images in the datasets used contain low resolution images and truthed text is already available , we use mast - ch toolkit to perform pixel level annotation .+ we have added new functionalities based on feedback from mile lab project staff , who helped annotate the various datasets .a gui of the tool kit with the buttons and a single window for image is shown in figure [ mgui ] . `load ' button enables us to load images from a particular directory .if a word image is highly degraded , and hence requires more time to annotate , it can be skipped .those skipped images will not be tagged .` next ' and ` prev ' buttons provide the user options for such skipping and going back during annotation , that help in rapid annotation of clean word images . `save ' button saves an annotated word image in .bmp format , also containing component ordering information and in .tiff format , containing colour map for individual components segmented ) .gui also displays whether the currently loaded word image has already been tagged or not . `view mask ' button overlays the obtained segmentation mask on the original word image . in mast , wesegment words by region growing on the seeds placed by the user and then annotate the segmented words .difficulty crops up when low resolution characters are to be annotated . to reduce the manual task and also to improve segmentation , we have removed the seed growing option . in place of it , we now use known segmentation algorithms . for segmentation , we have provided a drop down button giving ` binarize ' and ` invert ' options . the user can invoke the suitable option based on the relative colors of the foreground and background . using multiple approaches ,we create 16 different segmentation outputs .first , we split a colour image into the r , g , b planes and apply otsu s threshold on each plane .we also convert the rgb image to hsv and cie lab space formats .then , we split each of them into three planes and apply otsu s threshold .in addition , we form three clusters using the rgb information directly and obtain the permutations for the clusters formed ( each of the 3 clusters and union of any two clusters at a time ) . 
finally , we apply robust automatic threshold selection algorithm on intensity of word image .we display all of these segmentation results in another window and provide a manual keyboard input for the user to select one of the results .once a user input is fed , a mask is generated and overlaid on the original image . by this way , we have removed manual seeding technique , which has improved the speed of segmentation task and reduced the fatigue of the annotators. if the mask generated has distinct or well separated characters , then the user can save the annotated result by clicking the ` save ' button .if none of the segmentation results are satisfactory , the user can choose ` 0 ' and thus no mask will be generated .` reload ' button is used to load a saved mask and the corresponding original image .this is useful to examine annotated images . to minimize human errors ,we cross - checked the annotated datasets three times .a degraded image may not get segmented properly .this may be due to illumination changes , occlusion or low resolution of characters . to overcome these degradations, we provide a polygonal mask .these masks can be used to add parts of characters which are merged to the background or delete parts of the background that get added to a character . ` add patch ' button provides the option for adding pixels to the annotated mask in the polygonal format . `delete patch ' button facilitates deletion of the background segmented as characters or splitting merged characters .when add or delete option is selected , we can place a single polygon at a given time .mask will be modified based on the operation performed and the annotation tool asks whether to continue the same operation .if user chooses ` yes ' , then the user can place another polygon to modify the annotated characters .if the choice is ` no ' , then the tool exits this edit loop . +the pixel - level segmented images are fed to the recognition engine .tesseract , omnipage , adobe reader and abbyy fine reader are examples of readily available ocr engines . any of these ocr softwares can be used to recognize the binarized word image . in our experiment, we use the trial version of omnipage professional 16 ocr for recognizing the word images .the recognition rate of the ocr on the above segmented word images is compared with the recognition results of the methods reported in the literature . in all these datasets, we can observe that the recognition rate on human segmented images is better than the rest .normally , any scanned document image contains top , left , right and bottom margins .however , as shown in figure [ recpreprocess](a ) , when we binarize a scene or a born - digital word image , margins do not exist since we have segmented at the word boundary . in such cases , where characters touch the boundary , we observe difficulty in recognition with the ocr engine .to avoid this difficulty and also to provide margin in all directions , we add zero rows at the top and the bottom of the image , equal to half the original number of rows in the word image .similarly , we pad zero columns on both the left and right sides of the word image .we refer these images as _ preprocessed _binarized images [ see figure [ recpreprocess](b ) ] . 
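To make the segmentation stage above concrete, here is a rough sketch of how the sixteen candidate masks and the margin padding could be generated. This is not the MILE MATLAB toolkit itself; it is a hedged Python illustration assuming scikit-image and scikit-learn are available, and it substitutes a plain Otsu threshold on intensity for the robust automatic threshold selection step used by the actual tool.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab, rgb2gray
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

def candidate_masks(rgb):
    """Generate 16 candidate binarizations of a word image (float RGB in [0, 1])."""
    masks = []
    # Otsu threshold on each plane of RGB, HSV and CIE Lab (9 candidates).
    for planes in (rgb, rgb2hsv(rgb), rgb2lab(rgb)):
        for k in range(3):
            p = planes[..., k]
            masks.append(p > threshold_otsu(p))
    # 3-cluster colour segmentation; each single cluster and each pairwise
    # union of clusters gives a further candidate (6 candidates).
    h, w, _ = rgb.shape
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        rgb.reshape(-1, 3)).reshape(h, w)
    for c in range(3):
        masks.append(labels == c)
    for a, b in ((0, 1), (0, 2), (1, 2)):
        masks.append((labels == a) | (labels == b))
    # Stand-in for the 16th candidate: a threshold on grey-level intensity
    # (the actual tool uses a robust automatic threshold selection algorithm).
    g = rgb2gray(rgb)
    masks.append(g > threshold_otsu(g))
    return masks   # the annotator inspects these and picks one (or none)

def pad_for_ocr(binary):
    """Add margins before OCR: half the height above/below, half the width left/right."""
    h, w = binary.shape
    return np.pad(binary, ((h // 2, h // 2), (w // 2, w // 2)), constant_values=0)
```

In the toolkit the user then either selects one of the candidate masks, edits it with the add/delete patch options, or rejects the image; the padded binary image is what is finally sent to the OCR engine.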
preprocessed binarized images are sent to the ocr for recognition .recognition rates on binarized images are reported in the experimental section .we consider five word image datasets for experimentation .all these datasets are tagged using the annotation tool explained in section 3 .icdar , svt and born - digital word images have been annotated .images with visually distinguishable boundaries between characters and background are tagged .others have been ignored , since if a human can not tag the text , we can not expect an algorithm to either segment or recognize it .the annotated dataset is available for download from our mile laboratory website .if any errors are observed , please report to the authors .these datasets cover different types of degradations except for motion blurs .all words in the dataset are tagged appropriately such that the visual distortion with respect to original image is minimum . in all the datasets , we have considered the testing set .we can improve the character segmentation using word images from the training set .we give below the recognition results for the five different datasets experimented upon .robust reading competition was first conducted in icdar 2003 .there were five entries for text localization and none for word recognition .mishra et .al express the importance of binarization for word images and show 52% as word recognition on sample dataset .this result explains that an equal importance should be given to word recognition .if we compare recognition rates of existing methods , this becomes more obvious .icdar 2003 test dataset consists of 1110 word images , all of which are segmented by the authors .the word recognition rates are tabulated in table [ icdar03table ] .table [ icdar03table ] shows a large gap in recognition rate between the preprocessed and non - processed images .this is because the low resolution text images are not recognized properly without proper margins formed by the background .wang and mishra et .al have used 829 images , a subset of icdar 2003 image dataset .hence , the reported result is averaged to the total number of images in the dataset ..recognition rates on word images binarized by methods reported in the literature for icdar 2003 dataset .[ cols="^,^",options="header " , ] this dataset , a subset of icdar 2003 dataset , was used in icdar 2011 robust reading challenge task 2 .it consists of 716 word images . in this dataset ,a few additions have been made and repeated words from the icdar 2003 dataset have been removed .those removed images are from the scene images and were not considered either in the testing or training of icdar 2011 competition .the recognition rates of existing methods are shown in table [ icdar11table ] .we can observe that the recognition rate has improved with respect to icdar 2003 dataset .there were three entries for word recognition competition in icdar 2011 : robust reading competition challenge 2 .so , we have included this dataset for discussion . 
here , we could not access the recognition rate of non - preprocessed binarized image .due to polarity reversal of some images by ocr itself , this resulted in erroneous text .the reason is that the bounding boxes specified are tight .word images in the test dataset do not have any additional pixels around the word boundary , as discussed in born - digital 2011 dataset .we have prepared pixel level annotation for five word image datasets .we took this huge task of annotation , in order to show that segmentation of word images is important to recognize characters / words . even though top - down analysis is useful in improving the recognition rate on specific datasets using a limited lexicon , it is not practical in real world situation . weinmann et .al showed that the recognized word rate reduces with full lexicon .+ + around 85% of word recognition is achieved with manual segmentation .thus , if we provide more importance to proper segmentation of characters countering all degradations , we can improve the recognition . here, all word images were segmented in such a way that individual components in the segmented image can be properly recognized or classified by a classifier or an ocr engine . in this paper, we infer that if we train dataset specific classifier with annotated word images , then we can use the training dataset for word recognition .skewed or curved words in the images can be classified better by a custom - built classifier than an ocr engine .we can observe that in street view dataset , the recognition rate of words is often poor due to skew or curvy nature of words .figure [ svtimages ] shows sample images from street view dataset with different degradations .hence , the trained classifier will help in improving the recognition rate .in the case of skewed or curved words , a trained classifier is less affected and with minimal processing , we can improve the recognition rate .standard ocr engines do not provide this functionality . also we use individual test characters segmented to measure stroke width of the characters , which helps in improving the segmentation as a top - down approach .we have completed the annotation of five standard databases .we have made the annotated datasets publicly available for download from our mile website .any one can download and test them using any ocr engine .the recognition rate differs across ocr engines and also with the versions . from all the tabulated results ,it is evident that we need to improve the segmentation algorithms to get better word recognition .our approach indicates the requirement for good segmentation , since it is the major part of the bottom - up approach .we can use lexicon information to improve the recognition rate reported .the validity of our good segmentation can be indirectly seen from the achieved word recognition rates .we express our heartfelt thanks to shanti devaraj , shanti s and saraswathi s , who were involved in the segmentation of word images .improvement in ui for word image annotation was possible only from their feedback .the annotation work has minimal errors .the credit goes to them for careful annotation and committed interaction with the authors .finally , without them , our dream to annotate all the images at the pixel level would not have been accomplished .s. m. lucas et .al , `` icdar 2003 robust reading competitions : entries , results , and future directions '' , _ international journal on document analysis and recognition _ , vol . 7 , no .2 , pp . 105122 , june 2005 .h. liu , x. 
ding , `` handwritten character recognition using gradient feature and quadratic classifier with multiple discrimination schemes '' , proc .8th _ int .document analysis and recognition _ , pp.19 - 25 , 2005 .s. lee , m. s. cho , k. jung and j. h. kim , `` scene text extraction with edge constraint and text collinearity , '' _ international conference on pattern recognition _ , pp . 39833986 , 2010 .d. karatzas , s. robles mestre , j. mas , f. nourbakhsh and p. pratim roy , `` icdar 2011 robust reading competition - challenge 1 : reading text in born - digital images ( web and email ) '' , proc .11th _ international conference of document analysis and recognition _ , pp . 14851490 , 2011 .http://www.cv.uab.es/icdar2011competition/ a. shahab , f. shafait and a. dengel , `` icdar 2011 robust reading competition - challenge 2 : reading text in scene images '' , proc .11th _ international conference of document analysis and recognition _ , pp . 14911496 , 2011 .t. kasar , d. kumar , m. n. anil prasad , d. girish and a. g. ramakrishnan , `` mast : multi - script annotation for scenic images toolkit '' , proc ._ joint workshop on multilingual ocr and analytics for noisy and unstructured text data _ , pp .18 , beijing , china , september 2011 .t. kasar and a. g. ramakrishnan,``multiscript and multioriented text localization from scene images '' , proc .4th _ international workshop on camera - based document analysis and recognition _ , pp .1520 , beijing , china , 2011 .
_ We have benchmarked the maximum obtainable recognition accuracy on various word image datasets using manual segmentation and a currently available commercial OCR. We have developed a MATLAB program with a graphical user interface for semi-automated pixel-level segmentation of word images, and we discuss the advantages of pixel-level annotation. We have covered five databases adding up to over 3600 word images, cropped from camera-captured scene, born-digital and street-view images. We recognize the segmented word images using the trial version of Nuance OmniPage OCR. We also discuss how degradations introduced during acquisition, and inaccuracies introduced during creation of the word images, affect recognition of the word present in the image. Word images with different kinds of degradations, and correction for the slant and curved nature of words, are also discussed. The word recognition rates obtained on the ICDAR 2003, sign evaluation, street view, born-digital and ICDAR 2011 datasets are 83.9%, 89.3%, 79.6%, 88.5% and 86.7%, respectively. _ * keywords : * word images, pixel-level segmentation, annotation, graphical user interface, word recognition, benchmarking, scenic images, born-digital images.
the investigation of ionospheric effects caused by the earth s surface sources is a traditional and effective technique for studies of neutral - ionospheric interaction .a lot of papers are devoted to theoretical and experimental aspects of the problem .one of such phenomena is generation of midscale ionospheric irregularities directly by shock wave from the supersonic seismic source propagating over the surface . to estimate the seismic disturbance efficiency to the ionosphereis the essential problem for planning and interpretation of each single experiment .the effective generation of intrinsic gravity waves ( igw ) in the vicinity of the epicenter according to gps - data requires large enough earthquake magnitude .but there are no statistical information about the boundary values for the intensity of seismic ( rayleigh ) waves that produce the vertical midscale ionospheric structures ( multicusp ) . for the study of the multicusp the vertical ionosondesare widely used .the use of chirp ionosondes for this problem significantly increases the signal - to - noise ratio and improves the temporal and spatial resolution , which are the key parameters for such studies .the efficiency of replacement of the pulse ionosondes by chirp ionosondes for diagnosis of ionospheric profile was demonstrated , for example in .baikal region is a seismically active area , so studies of seismic activity effects are constantly being conducted . a number of various ionospheric , optical , magnetic , seismic and acoustic instruments are involved into monitoring and case - study tasks .the aim of this paper is a preliminary statistical analysis of the emergence of midscale vertical ionospheric irregularities caused by large earthquakes at a large distance from the epicenter and investigating the effectiveness of this mechanism . in order to complete this, we used the 2011 - 2016 data from the irkutsk fast monostatic chirp ionosonde ( ifmci ) .we also used a variety of gps - receivers in the areas near the epicenters of the earthquakes to compare near zone and far zone ionospheric effects .ifmci ( 51.81n , 103.078e ) started its continuous operation in 2012 , 150 km to the south - west from irkutsk .for operation it uses a continuous chirp - signal with a frequency range from 1.5 mhz to 15 mhz and 10watts of transmitted power .the spacing between the receiving and transmitting antenna is approximately 150 meters .low power and minimal spacing between antennas allows transmitter and receiver to operate at the same time .the use of modern technologies of digital reception allows us to reach a high flexibility in the use of equipment and get a wide dynamic range .the fully digital structure of the ionosonde allows us to precisely control the shape of the transmitted signal as well as the impulse response and filtering characteristics of the receivers .we develop and use original techniques for filtering out the signals from neighbor public radiostations and significantly improve the quality of the ionosonde data .the typical frequency sweep speed is about 500 khz / s , providing 27 second time resolution . from the beginning of its continuous operation in 2012the ionosonde uses 1-minute time resolution typical for modern monitoring techniques .this allows us to use its data for detailed study of vertical ionospheric disturbances generated by seismic waves . 
for analysis of seismic variationswe use the data from `` talaya '' seismic station ( tly , 51.681n,103.644e , ) .the station is located near the point of ionospheric sounding , and both of them are marked at fig.[fig:1]a by single vertical cross . for analysis of the results presented in other publications , we used the data from seismic stations aru ( 56.429n , 58.561e , ) , nkc ( 50.233n , 12.448e, ) and tato ( 24.973n , 121.497e , ) , also located near the points of corresponding ionospheric observations .strong earthquakes at large distances from the epicenter sometimes produce in the ionosphere vertical midscale irregularities ( multicusp ) .their generation is associated with the passage of supersonic surface ( rayleigh ) seismic waves below the point of ionospheric sounding , and with propagation of resulting acoustic shock wave in the atmosphere and ionosphere .the amplitude of the effect is determined by the amplitude of the source seismic vibrations . so selecting the earthquakes that produce the effect over their magnitude , the distance to them , or other parameters of the earthquake related to epicenter is not optimal .to select the earthquakes participating in the study , we analyzed the amplitude of the seismic vibrations near the point of ionospheric observations .we estimated the equivalent class of the earthquake seismic vibrations by representing the observed vertical seismic amplitude in logarithmic scale .this gives us a rough estimation of the amplitude of the source of the shock wave in the ionosphere at large distances from the epicenter .the is : where is the amplitude of the vertical seismic oscillations ( in nanometers ) , calculated as the median over 4 seconds to remove noise .table[tab:1 ] shows the list of detected at tly station large seismic disturbances with , that were analyzed in the paper , and corresponding earthquakes .[ fig:1]a shows the geometry of investigated earthquakes , marked by circles with radius proportional to their , measured at tly station .as one can see , the earthquakes at larger distances appear with a smaller , than the earthquakes at smaller distances from seismic station .this illustrates the local character of the class , related with not only the earthquake magnitude , but with the relative position of the epicenter and seismic station also .investigation of the ionospheric total electron content variations ( tec ) in the vicinity of the earthquake epicenters was made on the basis of phase measurements at dual frequency gps receivers .for every given earthquake epicenter the data from nearby gps - stations from international hemodynamic network igs ( http://sopac.ucsd.edu ) were used for this study .tec variations were calculated by the standard technique used to study the earthquake effects . in accordance with generally accepted standards , we relate the measured tec with the ionospheric point , i.e. with the point of intersection of the ray `` satellite - receiver '' with equivalent ionospheric layer at the height of maximal electron density . in our researchwe used typical .we filtered the initial tec series to get the variations with periods ranging from 2 to 10 minutes , typically intensified by earthquakes in the vicinity of the epicenter and related with the midscale ionospheric disturbances caused by earthquakes . 
to make more accurate comparison of the intensity of ionospheric disturbances at different stations we transform the inclined tec variations into equivalent vertical tec variations .to differ earthquake effects from regular one we compared the tec variations at the day of the earthquake with tec variations in previous and subsequent days , following to traditional approach .one of the effects observed after a powerful earthquake is multicusp in the ionosphere , associated with the passage of powerful shock wave ( mach cone ) from supersonic surface seismic waves ( rayleigh waves ) .the propagation of shock wave in the neutral atmosphere generates similar irregularities in the ionosphere through the neutral - ion collisions ( see , for example ) .these ionospheric irregularities can be detected by vertical ionosondes , in a form of very specific , short - lived disturbances at the ionograms .this phenomenon was clearly observed after the tohoku earthquake 11/03/2011 and after chile earthquake 27/02/2010 .usually this effect is very fast : its total duration is associated with the passage of the surface seismic waves , and usually do not exceed several minutes .therefore , for its diagnosis one needs the ionospheric instruments with temporal resolution better than 1 minute - a network of gps - receivers or fast ionosondes .the weakness of the effect even after strong earthquakes makes possible their observation with gps in either case of the most powerful earthquakes ( when disturbances in the total electron content are significant ) , or with large spatial networks of gps - receivers using special processing and accumulation techniques presented , for example , in . in comparison with gps - receiver , covering huge spatial area and allowing us to investigate mostly horizontal structure of the irregularities, the ionosonde usually investigates ionosphere above its location .ionosondes are more sensitive instruments for investigating the midscale vertical irregularities , and , thus , they are more useful than gps for analysis of vertical ionospheric disturbances after weaker earthquakes or effects at larger distances from the epicenter .the only drawback of ionosondes is their poor temporal resolution of the order of 10 - 15 minutes intended mainly for the diagnosis of secondary ionospheric parameters - the behavior of critical frequencies and the height of the maximum .ionosondes with a higher temporal resolution were suggested and used for case studies a long time ago , but for regular studies they appeared quite recently .ifmci has the necessary temporal resolution and works in 1 minute mode constantly - since 2012 , and selectively - since 2011 .this allowed us to accumulate a huge statistics for monitoring of ionospheric perturbations associated with earthquakes in this period .fig.[fig:2 ] shows an example of the dynamics of vertical ionospheric disturbances ( multicusp ) at irkutsk ionosonde data during deep okhotsk sea earthquake ( 24/05/2013 ) , rarely discussed in the literature .the most significant multicusp effect is observed up to 300 km effective height during 06:02 - 06:06ut .as one can see , the effect duration is about several minutes .this makes it difficult for detection by standard 15-minute ionosondes .the effect is observed about 7 - 10 minutes after the moment of maximal seismic vertical variation .this delay is related with propagation of acoustic signal from the ground to these heights , and explains well experimental observations . 
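As a rough illustration of the TEC processing chain described above, the sketch below band-pass filters a TEC time series to the 2-10 minute period range and maps slant TEC to an equivalent vertical TEC with a single-layer (thin-shell) model. The shell height of 300 km and the filter order are assumptions made for the illustration and need not coincide with the values used in the actual processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

R_E = 6371.0        # Earth radius, km
H_SHELL = 300.0     # assumed thin-shell (ionospheric point) height, km

def bandpass_tec(tec, dt_sec, t_min=120.0, t_max=600.0, order=4):
    """Keep TEC variations with periods between 2 and 10 minutes."""
    nyq = 0.5 / dt_sec                               # Nyquist frequency, Hz
    low, high = (1.0 / t_max) / nyq, (1.0 / t_min) / nyq
    b, a = butter(order, [low, high], btype="band")
    return filtfilt(b, a, tec)                       # zero-phase filtering

def slant_to_vertical(slant_tec, elevation_deg):
    """Single-layer mapping of slant TEC to an equivalent vertical TEC."""
    z = np.radians(90.0 - elevation_deg)             # zenith angle at the receiver
    sin_zi = R_E / (R_E + H_SHELL) * np.sin(z)       # zenith angle at the shell
    return slant_tec * np.sqrt(1.0 - sin_zi ** 2)    # multiply by cos of shell zenith angle
```

For 30-second GPS data, for example, dt_sec = 30 and the 2-10 minute band corresponds to normalized cut-off frequencies of 0.1 and 0.5 of Nyquist.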
to analyze seismic effects in the ionosphere we analyzed the ionogrames in the period from the beginning(end ) of most powerful seismic disturbances during the investigated period .the ionograms related to each seismic disturbance are shown in support materials . as a result, we found that the multicusp effect of earthquakes at ifmci data was observed during the following earthquakes : 11/03/2011 ( mw9.0 tohoku , japan ) , 26/02/2012 ( mw6.6 southwestern siberia , russia ) , 11/04/2012 ( mw8.4 northen sumatra ) , 11/04/2012 ( mw8.0 northen sumatra ) , 24/05/2013 ( mw8.3 okhotsk sea ) , 26/04/2015 ( mw6.7 nepal ) , 12/05/2015 ( mw7.3 nepal ) , 17/09/2015 ( mw8.3 coquimbo , chile ) , 25/04/2015 ( mw7.8 nepal ) .fig.[fig:3 ] shows the most revealing ionograms during these earthquakes . from fig.[fig:3 ] one can see that sometimes the multicusp effect is also accompanied by distortion and bifurcation of the f - track ( fig.[fig:3]h2 ) , that can be interpreted as horizontal irregularities .more detailed ionogram dynamics for each earthquake response can also be found in the support information .as it is shown by , even for a powerful tohoku 11/03/2011 earthquake the specific periods of perturbation and their amplitude can be evaluated under assumption of the monotony of the perturbed electron density profile . therefore , to evaluate the perturbation amplitude we processed the ionograms by standard polan program ( available at ) , which is traditionally used for multicusp analysis . for automatic processing of the data we developed a simple programset used to convert ionogrames into profiles of plasma frequency .the automatic processing was easy to be done due to high quality of irkutsk ionogrames , already filtered out from neighboring public radiostations by original and very powerful technique .the algorithm for obtaining electron density profiles from ionograms consists of 3 stages . at the 1st stage ,the ionogram is filtered out from rare dot - like noise of different nature . at the 2nd stage, the track is made over the data by dividing the frequency interval of significant signals into 50 subintervals , that is necessary for stable work of polan program . in each subintervalwe find the median point over the height and frequency . at the 3rd stage the median points , collected over subintervalsforms the 50-points track , are used as a source for polan calculations of plasma frequency profile .some results of this automatic processing are shown in fig.[fig:3b ] . in the figurethe plasma frequency profiles are shown , and their differential over the frequency . 
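A minimal sketch of the track-extraction stage (stage 2) described above is given below. The ionogram is assumed to be a cleaned two-dimensional amplitude array over sounding frequency and virtual height; the noise threshold and bin handling are illustrative choices, and the resulting 50-point track would then be passed to POLAN, which is not reproduced here.

```python
import numpy as np

def extract_track(freqs, heights, amp, n_bins=50, snr_db=6.0):
    """Reduce a cleaned ionogram to a ~50-point (frequency, virtual height) track.

    freqs   : 1-D array of sounding frequencies (MHz)
    heights : 1-D array of virtual heights (km)
    amp     : 2-D amplitude array of shape (len(freqs), len(heights))
    """
    # Keep only echoes sufficiently above the noise floor (illustrative criterion).
    noise = np.median(amp)
    f_idx, h_idx = np.nonzero(amp > noise * 10.0 ** (snr_db / 20.0))
    if f_idx.size == 0:
        return np.empty((0, 2))
    f_sig, h_sig = freqs[f_idx], heights[h_idx]

    # Split the occupied frequency interval into n_bins sub-intervals and take
    # the median echo point (frequency, height) in each of them.
    edges = np.linspace(f_sig.min(), f_sig.max(), n_bins + 1)
    track = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (f_sig >= lo) & (f_sig < hi)
        if sel.any():
            track.append((np.median(f_sig[sel]), np.median(h_sig[sel])))
    return np.asarray(track)   # fed to POLAN to invert into a plasma-frequency profile
```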
from fig.[fig:3b ] one can see that the main perturbations associated with multicusp can be observed from 140 km up to f2 maximum ( for example , fig.[fig:3b]a ) , so for their observations the ionosonde must operate in the appropriate mode , covering reflection heights from 100 - 140 km to f2 maximum .as one can see , the surface seismic waves lead to formation of mid - scale short - lived plasma frequency variations that disturb initial , relatively smooth ionospheric profile .the amplitude of these effects is relatively small , and more evident at ionograms ( that non - lineary depends on the plasma frequency profile ) , than in the plasma profile itself .another effect , that accompanied some of the earthquakes , was the change of sporadic e - layer structure , manifested in the form of bifurcation after the passage of the surface wave .fig.[fig:4 ] shows an example of this effect for the earthquake 07/12/2015 ( mw7.2 tajikistan ) .fig.[fig:5]a - c , e - g , i - k show the most characteristic ionograms for the earthquakes 25/10/2013 ( mw7.1 off east coast of honshu , japan ) , 16/02/2015 ( mw6.8 near east coast of honshu , japan ) , 07/12/2015 ( mw7.2 tajikistan ) . to emphasize this effect , we integrated amplitude at ionograms over frequency for each given effective height .this technique is close to the height - time - intensity(hti ) technique .dependence of the resulting amplitude on the effective height for different earthquakes ( when we observe this effect ) is shown in fig.[fig:5]d , h , l .the figure shows that the main effect is the appearance of an additional e - sporadic layer overlying the usual e - sporadic layer on the ionogram .it arises approximately 10 - 15 minutes after the passage of seismic disturbance at the point of observation and lasts about 20 minutes . after arising, the secondary reflection gradually decreases down to the height of the main e - sporadic track . at the same time with a decrease of the reflected signal height its amplitude is increased .examples of tec variations in the vicinity of the epicenters of all considered earthquakes ( table [ tab:2 ] ) are shown in fig.[fig:6 ] .we selected the pairs `` satellite - receiver '' with rays closest to the earthquake s epicenter and with larger tec effect ( to take into account aspect dependence of tec effects ) .the tec variations at those rays are shown in fig.[fig:6 ] . to estimate the efficiency of each earthquake for generating 2 - 10min tec variations, we made a preliminary analysis of all the gps data at the nearest gps - stations .the results of the analysis are summarized in table [ tab:2 ] .the table also shows the local solar time of the main shock in the epicenter of the earthquake ( `` lst '' ) , the focal mechanism of the earthquake ( `` fault type '' ) and background geomagnetic conditions on the day of the earthquake ( `` m.disturbance '' ) .the information about focal mechanisms of the earthquake is obtained from the the global cmt project(http://www.globalcmt.org ) .the geomagnetic conditions are estimated from dst and ae indexes , obtained from the international data centre in kyoto ( http://wdc.kugi.kyoto-u.ac.jp/wdc ) . in the tablewe mark the days with disturbed geomagnetic conditions ( `` disturbed '' ) and with magnetic storms ( , `` storm '' ) , based on the behavior of these indexes . 
from the table [ tab:2 ]one can see that during most part of the earthquakes the geomagnetic conditions were quiet .so ionospheric irregularities related with geomagnetic disturbances has not prevent us to detect tec - gps response to earthquakes .we classify tec disturbances generated by the earthquakes into 3 following categories .`` strong response '' has relatively high amplitude ( tecu ) ( fig.[fig:6]a , l , m , n , p ) and recorded at many rays `` receiver - satellite '' .such responses are registered after five earthquakes : 11/03/2011 , 25/04/2015 , 26/04/2015 , 12/05/2015 , 16/09/2015 .the magnitude of these earthquakes varies from 6.7 to 9.0 , and all of them had reverse fault focal mechanism .it is important to note about very successful geometry of the measurements during all of these earthquakes : the most part of the `` receiver - satellite '' rays were close to or directly above the epicenter . in this categorythe most powerful earthquake 11/03/2011 ( tohoku , japan ) should be emphasized .it causes the most intense and prolonged ionospheric disturbances .ionospheric effects of this earthquake are investigated in a huge amount of papers ( see for example ) . recently, more attention is paid to modeling of these effects and to the study of the propagation of the disturbances at large distances .`` weak response '' is the response that has relatively low amplitude ( 0.1 - 0.15 tecu ) and recorded at several ( 2 to 5 ) rays satellite - receiver. the shape and amplitude of such disturbances is high enough to detect it from the background tec variations level . in this categorythe deep earthquake 24/05/2013(okhotsk sea earthquake , fig.[fig:6]e ) should be emphasized .it was very powerful earthquake ( mw8.3 ) with a predominance of vertical displacements in the epicenter ( normal fault ) .it attracted the attention of seismologists , because felt at unusually large ( 10,000 km ) distance from the epicenter .the earth s surface displacement in the sea of okhotsk area caused by this earthquake were also studied using gps - data . however , due to the very large depth of the epicenter , the impact of the okhotsk sea earthquake to the ionosphere looks very weak .perhaps that is why the ionospheric effects of the earthquake were not discussed too much in the literature .we could not find any work devoted to the gps - tec or ionospheric perturbations caused by this event on nearby region . during the present study we detected some tec perturbations , most likely associated with the okhotsk sea earthquake , but only at two rays ( mag0-g06 , mag0-g16 ) , closest to the epicenter of the earthquake .`` no response '' category in the table [ tab:2 ] means that we were unable to identify any significant tec disturbances related to the earthquake at any of the rays `` receiver - satellite '' ( fig.[fig:6 ] b , c ) .the effects were not observed after two earthquakes : 14/02/2013 and 16/04/2013 . in our opinionthe disturbance observed in 14/02/2013 ( fig.[fig:6]b ) is not a response to the earthquake , but should be associated with the intersection by the ray `` receiver - satellite '' by the sharp electron density gradient , perhaps the highlatitude trough .the analysis of the original ( unfiltered ) series of tec variations leads us to this conclusion .as the analysis has shown , the series have a bend ( `` hook '' ) , which is a characteristic for the cases when the beam crosses the steep gradient of the electron density , for example , the terminator or boundary of ionospheric trough . 
to `` no response '' category we also attributed the events when we detect a weak perturbations at one or two rays with the shape close to the shape associated with earthquake - generated perturbations usually ( fig.[fig:6 ] d , k ) .however , due to a number of reasons ( the absence of such disturbances on the other rays , a significant propagation velocity exceeding sound speed , etc . )there is no assurance that these perturbations can be caused by an earthquake ( for example , in case of 20/04/2013 and 20/04/2015 ) .no gps - tec response to the earthquake may have several causes .first of all , the earthquakes with small magnitude and with predominance of horizontal displacement in the focus ( strike - slip faults ) that produce very small effects with amplitudes lower than background effects . as it was shown in , after the earthquakes with magnitudes appreciable tec wave - like disturbances are not observed , and in the case of strong earthquakes ( ) responses are more pronounced after the events with substantial vertical component in the focus ( normal or reverse fault ) .another important cause is the geometry of gps - measurements . in most cases ,the lack of response at the beam path `` receiver - satellite '' may be related with a large distance from the ionospheric point to the epicenter ( the mark `` rays far away '' in table [ tab:2 ] ) . for `` strong response '' and `` weak response '' categories we estimated the propagation velocity of disturbances .the estimate is done for the first impulse response .the velocity is calculated from a simple linear propagation model , where is the line - of - sight distance between the epicenter and the ionospheric point at which the response was recorded , is the delay between the tec variation maximum moment and the moment of the earthquake ( shown in table[tab:2 ] ) .the resulting values of are summarized in table [ tab:2 ] .the calculated propagation velocity varies between 150 and 600 m / s .this indicates that these perturbations can be associated with propagation of internal atmospheric waves .as the analysis of gps - tec measurements shows above , near the epicenter the most part of earthquakes were accompanied by tec - variations ( see fig.[fig:7]a ) . from the other side , the multicusp effect at large distancesaccompanies only several earthquakes .so the question naturally arises , how can we estimate the effectiveness of a given surface seismic wave for the formation of those or other effects in the ionosphere . 
to estimate this, we introduce the class of rayleigh wave acoustical efficiency ( rwaec ) that is determined by the amplitude of the local seismic vibrations below the point of ionospheric observations .it can be shown that for distributed acoustic wave source the mach cone amplitude can be estimated by multiplying mach cone amplitude from moving point source to the acoustic radiation pattern of distributed source : where is the amplitude of acoustic signal in the shock wave , generated by dot - like ( isotropic ) acoustic source moving with supersonic speed ; is the acoustic radiation pattern of the distributed source itself as a function of zenith angle , calculated from relation between sound speed and supersonic source speed ( supposed to be 3.5km / s ) .the radiation pattern is defined by the spatial structure of the propagating seismic wave .this approximation does not take into account the phase radiation pattern .the exact spatial wave structure is unknown for us , but we can assume that the wave has some spatial shape , that moves without dispersion with the supersonic speed of seismic wave .based on this approximation we can use temporal variations of the seismic signal at a given station to estimate the spatial shape of an equivalent acoustic source . in this casewe can calculate the radiation pattern associated with the spatial distribution of seismic oscillations , and estimate the amplification of the mach cone as a function of seismic signal shape .it should be noted that since the seismic wave velocity is much higher than acoustic sound speed in the atmosphere , one needs to take into account the radiation pattern only at the angles close to the perpendicular to the earth s surface ( i.e. perpendicular to the radiation plane ) .the principle of the formation of a mach cone with taking into account the acoustic signal radiation pattern is illustrated in fig.[fig:7]b .the intensity of the acoustic signal in the far zone of the acoustic antenna ( that is approximately valid for this case ) is defined by radiation pattern . the radiation pattern of distributed sound source in its far field zoneis defined as : where is vibrational velocity of the sound source ( we suppose the movements to be strictly vertical ) . in one - dimensional , steady - state case , we can estimate the vertical velocity of the surface oscillations from the experimental data as in this case , the wave vector of the radiation is close to the perpendicular to the plane of the source and their scalar product is small . in the one - dimensional case , for plain earth approximation , along the direction of the rayleigh wave motion ( this case corresponds to a significant distance between observation point and the epicenter of the earthquake , when the seismic wave can be considered having not spherical but plane front ) , the acoustic radiation pattern can be estimated from one - dimensional spectrum : where is a wavenumber projection to the acoustic source plane .fig.[fig:7]c shows examples of radiation pattern , calculated for tohoku ( 11/03/2013 ) earthquake for two wavelengths - ( black line ) and ( red line ) , as a function of zenith angle .the dashed line marks the direction of shockwave propagation . 
as one can see , the antenna pattern is different for different wavelengths .so we can find the wavelength , for which the amplitude of acoustical signal becomes maximal in shockwave direction .the amplitude corresponds to the maximum of the spectrum ( [ eq : spectrum ] ) .so searching the wavelength most effective for generating acoustical signal in the first approximation corresponds to the search of maximum in the spectrum ( [ eq : spectrum ] ) . thus , for fixed height of the expected effect and excluding the effects of the sound propagation in the neutral atmosphere and the energy transformation from the neutral component to the charged one , the spectral energy class of the acoustic signal caused by seismic vibrations can be estimated from the maximal amplitude of radiation pattern , and consequently from the spectrum of the derivative of seismic vibrations like : where in nanometers . for calculating the spectrum in ( [ eq: kw ] ) we used 8192 and 16384 point fast fourier transform .fig.[fig:7]d shows a comparative analysis of the spectra of the derivative of the vertical vibrations during several earthquakes ( 11/03/2011 , 24/05/2013 and 12/05/2015 ) that are close in irkutsk local solar time , as well as close in season .one can see that the spectra ( and acoustic radiation patterns ) differs significantly , and this can explain the difference in the amplitude of the effect , observed in the ionosphere .the fig.[fig:7]e and table [ tab:3 ] summarize the main effects of earthquakes observed by ifmci as a function of local solar time and rayleigh wave acoustic efficiency class .one can see from the fig.[fig:7]e that at night multicusp effect is not observed .this can be due to the fact that for nighttime low electron density the standard ionograms start from 250 km altitude .for relatively weak multicusp is not observed even in the daytime .for relatively high multicusp is observed nearly regularly at daytime [ 07:00 - 17:00]lst .it should be noted that at the 16:00 and 01:00 lst multicusp is not observed , although tracks observed at ionograms allow us to detect such effect .we can make a conclusion that to observe multicusp in the f - layer is too difficult at night .this can be explained qualitatively by the need of more energy for seismic oscillations at nighttime to generate waves in high f - layer than at daytime to generate the effects in the lower f1 or e - layer .this is due to the fact that lower electron density produces less neutral - electron collisions , that are the main agents for energy transfer from neutral acoustic wave to electron density variations .to verify our conclusions about the power and daily features of multicusp observations we also analyzed the results from some of the most detailed papers .in particular , we calculated index for the observations of ionospheric multicusp reported in ( ) .the calculation of was made from the seismic vibrations observed at seismic stations nearby the locations of observed ionospheric effects ( the aru seismic station ( sverdlovsk region , russia ) , tato seismic station ( taiwan ) and nkc seismic station(czech republic ) ) .the results are also summarized in the table [ tab:3 ] and fig.[fig:7]e .these cases are marked by asterisks and confirm the obtained results .in the morning , evening and night time ( lst 15:00,lst 07:00 ) we observed cases of bifurcation effect in the sporadic e - layer .this track separation can be explained by the formation of an additional horizontal layer above the regular sporadic - e that 
causes the appearance of an additional reflection point .one can assume several variants of how to explain the observed bifurcation effect in the sporadic e - layer .we assume that we observe a downward movement of irregularities due to translucency of sporadic e - layer , by analogy with . in this case, the downward movement occurs with nearly wind speeds .the presented observations of gps - tec disturbances in the vicinity of earthquake epicenters confirm the data obtained in previous studies . with the increasing of the earthquake magnitude , in general , an increase in tec amplitude perturbationsis observed as well .the tec response is also affected by the mechanism of the earthquake source : after the earthquake , in the epicenter of which a vertical displacements are dominated , the tec variations are intensified and are detected at a large number of rays `` satellite - receiver '' .the geometry of measurements also plays an important role : the absence of rays `` satellite - receiver '' near the epicenter makes it difficult to detect responses even after earthquakes with magnitudes mw .it may also be noted that after strong earthquakes ( mw ) , in addition to the first pulse associated with the main shock , a number of secondary vibrations is observed .they are caused , in our opinion , by the generation of eigen oscillations of the atmosphere with different periods ( igws ) .the dependence of tec effects on the local time has not been identified ( see table[tab:2 ] ) .the diurnal variation of effects suggests that generation of igws more likely do not depend on the local time .the dependence of intensity of igws effects on the nature of earthquakes was discussed , for example , in .this means that the found dependence of multicusp from the local time that is not detected in the epicenter vicinity .it significantly depends on the specific mechanism of multicusp generation by the shock wave .this can be qualitatively explained by the dependence of the ion - neutral collisions mechanism on the product of the background neutral density and background electron density , that is maximal in a daytime .in the paper the statistical analysis of ionospheric effects of earthquakes that occured in 2011 - 2016 according to the irkutsk fast monostatic chirp ionosonde was made . to control the process of neutral - ionospheric interaction in the vicinity of epicenterthe data from gps - receivers were also analyzed for each of these earthquakes . to estimate the ionospheric efficiency class for seismic disturbances in the far field of epicenter , that cause propagation of the shock cone ( mach cone ) we proposed the logarithmic index ( [ eq : kw ] ) , based on finding the maximal amplitude of spectral power fluctuations . 
from a physical point of view, the index allows us to estimate the maximal amplitude of the shock wave ( mach cone ) based on spatial distribution of seismic oscillations and their vertical velocities .so the index depends on the amplitude of the acoustic effects associated with the passage of seismic surface wave .the analysis shows that the characteristic index value , from which ifmci can see multicusp effect in the ionosphere ( at daytime [ 7:00 - 17:00]lst ) .the bifurcation of sporadic e - layer can be observed at nighttime at .it is shown that a multicusp effect according ifmci data has a rather pronounced daily dependence , intensified in local daytime hours , with absence of characteristic dependence on fof2 .this allows us to suggest that the possibility of observing the effect is likely due not to the intensity of the primary f2 layer , but due to other mechanisms at lower heights .it is shown that when ionograms start above 250 km ( in the morning , evening and night ) the multicusp effect in irkutsk is not observed , this corresponds well with results of .it is shown that this effect is not associated with daily dependence of the generation of ionospheric disturbances in the vicinity of the earthquake epicenter , estimated by gps data .this suggests that the efficiency of generation of irregularities on the mach shock wave and in the vicinity of earthquake epicenter are apparently different . on the example of deep earthquake in the okhotsk sea ( 25/10/2013 ,depth about 600 km ) it can be assumed that the effects of rayleigh wave in the case of deep earthquakes may be more noticeable than igw effects in the vicinity of earthquake epicenter .thus , in spite of the daily dependency effects , the effects from rayleigh waves are an additional way to study the ionospheric response to earthquakes , because they can sometimes produce more strong effects than igws generated in the epicenter . it is shown that after passing the rayleigh wave sometimes can be observed a bifurcation in sporadic e - layer , observed as secondary e - layer arised and moving downward to basic sporadic e. this can be associated with formation of vertical irregularities and their dynamics under the influence of the dynamics of the neutral atmosphere . as a result of the work it is shown that fast ifmci is a very sensitive instrument for investigating of rapid ionospheric effects related to the earthquakes with during the day , which roughly corresponds to the characteristic magnitudes of distant earthquakes above .this makes the ionosonde a convenient tool for the diagnosis of various processes that occur during seismo- ionospheric interaction .the obtained results allow us to suggest and local solar time as effective parameters for searching the multicusp effects in the ionosphere related with surface seismic waves .we are greateful to istp sb ras stuff : to dr.lebedev v.p . and dr.tashilin a.v . for fruitful discussion , to ivanov d.v . and salimov b.g. for preparing ionograms for analysis .the work was done under financial support of the project # 0344 - 2015 - 0019 `` study of the lithosphere - atmosphere - ionosphere system in extreme conditions '' of the program of presidium of ras , grant nsh-6894.2016.5 of the president of state support for leading scientific schools . 
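As a closing numerical illustration of the index discussed above: since the exact formula ([eq:kw]) is not reproduced in this text, the sketch below only captures its general structure, namely differentiating the vertical seismic displacement to obtain the vertical velocity of the surface, taking an 8192- or 16384-point FFT, and taking the decimal logarithm of the maximal spectral amplitude. The normalization is an assumption and may differ from the paper's definition.

```python
import numpy as np

def kw_index(displacement_nm, dt, n_fft=8192):
    """Rough K_w-style estimate from vertical ground displacement (nanometres).

    The normalization below is an assumption; the paper's exact definition
    of the index (eq. [eq:kw]) is not reproduced here.
    """
    # Vertical velocity of the surface, i.e. the strength of the acoustic source.
    velocity = np.gradient(displacement_nm, dt)
    # Amplitude spectrum of the velocity record (zero-padded/truncated to n_fft).
    spectrum = np.abs(np.fft.rfft(velocity, n=n_fft)) / n_fft
    # Logarithmic class: the spectral maximum dominates the radiation pattern
    # in the Mach-cone direction.
    return np.log10(spectrum[1:].max())   # skip the DC bin
```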
.cross corresponds to measurement location ( ifmci and tly seismic station ) .b ) geometry of gps measurements in 2013 - 2015 .circles correspond to earthquakes epicenters , crosses and diagonal crosses correspond to the gps stations .the same colors of circle and crosses correspond to the same event . ].list of seismic disturbances , participated in the analysis , and the corresponding earthquake ( according to the european - mediterranean seismological centre http://www.emsc-csem.org ) . [ cols="^,^,^,^,^,^",options="header " , ] 58 afraimovich e.l .( 2000 ) 35(6):14171424 afraimovich , e. l. , perevalova , n. p. , plotnikov , a. v. , and uralov , a. m. ( 2001a ) ., 19(4):395409 .afraimovich , e.l . ,kosogorov , e.a . ,lesyuta , o.s . ,yakovets , a.f . , ushakov , i.i.(2001b ) 19(7):723731 akchurin a.d . ,bochkarev v.v ., ildiryakov v.r ., usupov k.m .( 2011 ) gp1.23 .artru , j. , ducic , v. , kanamori , h. , lognonn , p. , and murakami , m. ( 2005 ) . ., 160:840848 .artru , j. , farges , t. , and lognonn , p. ( 2004 ) .. , 158:10671077 .artru , j. , lognonn , p. , and blanc , e. ( 2001 ) . ., 28:697700 .astafyeva , e. , heki , k. , kiryushkin , v. , afraimovich , e. , and shalimov , s. ( 2009 ) .. , 114(a13):10307 .astafyeva , e. , lognonn , p. , and rolland , l. ( 2011).first ionospheric images of the seismic fault slip on the example of the tohoku - oki earthquake . , 38(22):l22104 astafyeva , e. , rolland , l. , and sladen , a. ( 2014 ) .strike - slip earthquakes can also be detected in the ionosphere ., 405(0):180 193 .astafyeva , e. , shalimov , s. , olshanskaya , e. , and lognonn , p. ( 2013 ) . ., 40:16751681 .berngardt , o. , kotovich , g. v. , mikhailov , s. y. , and podlesnyi , a. v. ( 2015 ) .dynamics of vertical ionospheric inhomogeneities over irkutsk during 06:00 - 06:20ut 11/03/2011 caused by tohoku earthquake . , 132:106-115 .blanc , e. ( 1985 ) ., 3:673687 .calais , e. and minster , j. b. ( 1995 ) .gps detection of ionospheric perturbations following the january 17 , 1994 , northridge earthquake ., 22(9):10451048 .chum , j. , hruka , f. , zednk , j. , and latovika , j. ( 2012 ) .ionospheric disturbances ( infrasound waves ) over the czech republic excited by the 2011 tohoku earthquake . , 117(a8):a08319 .chum , j. , liu , j .- y . ,latovika , j. , fier , j. , mona , z. , bae , j. , and sun , y .- y .ionospheric signatures of the april 25 , 2015 nepal earthquake and the relative role of compression and advection for doppler sounding of infrasound in the ionosphere . , 68(1):112 .haldoupis , c. ( 2012 ) . ., 168:441461 .haldoupis , c. , meek , c. , christakis , n. , pancheva , d. , and bourdillon , a. ( 2006 ) . ., 68:539557 .harris , t. j. , cervera , m. a. , meehan , d. h. ( 2012 ) , , 117:a06321 harris , t. j. , quinn , a. d. , and pederick , l. h. ( 2016 ) .the dst group ionospheric sounder replacement for jorn ., 51:563572 .jin , s. , jin , r. , and li , j. h. ( 2014 ) . pattern and evolution of seismo - ionospheric disturbances following the 2011 tohoku earthquakes from gps observations ., 119(9):79147927 . 2014ja019825 .jin , s. , occhipinti , g. , and jin , r. ( 2015 ) .\{gnss } ionospheric seismology : recent observation evidences and characteristics . , 147(0):54 64 .kakinami , y. , kamogawa , m. , watanabe , s. , odaka , m. , mogi , t. , liu , j. y. , sun , y. y. , and yamada , t. ( 2013 ) .ionospheric ripples excited by superimposed wave fronts associated with rayleigh waves in the thermosphere . , 118:905911 .kherani , e. , lognonn , p. , hebert , h. 
, rolland , l. , astafyeva , e. , occhipinti , g. , cosson , p. , walwer , d. , and de paula , e. ( 2012).modelling of the total electronic content and magnetic field anomalies generated by the 2011 tohoku - oki tsunami and associated acoustic - gravity waves . , 191(3):10491066 .kherani , e. , rolland , l. , lognonn , p. , sladen , a. , klausner , v. , and de paula , e. ( 2016).traveling ionospheric disturbances propagating ahead of the tohoku - oki tsunami : a case study . , 204(2):11481158 . , v. v. , afraimovich , e. , and astafyeva , e. ( 2011 ) . . , 49:227239 .kozlovsky , a. , turunen , t . ,ulich , t. ( 2013 ) , 118 , 5265 - 5276 kurkin , v.i ., laryunin , o.a . ,podlesny , a.v . ,pezhemskaya , m.d . ,chistyakova , l.v . , 2014 ,kuznetsov , v. v. , plotkin , v. v. , and khomutov , s. y. ( 1999).acoustic , electromagnetic and ionospheric disturbances during the vibroseismic sounding . , 26(13):20172020 .latovika , j. ( 2006).forcing of the ionosphere by waves from below . , 68(35):479 497 .latovika , j. , bae , j. , hruka , f. , chum , j. , indelov , t. , horlek , j. , zednk , j. , and krasnov , v. ( 2010 ) .. , 72:12311240 .liu , j .- y . , chen , c .- h ., lin , c .- h . , tsai , h .- f . ,chen , c .- h . , and kamogawa , m. ( 2011).ionospheric disturbances triggered by the 11 march 2011 m9.0 tohoku earthquake . , 116(a6):a06319 .liu , j. y. , chen , c. y. , and sun , y. y. ( 2013) .. in _ egu general assembly conference abstracts _ ,volume 15 of _ egu general assembly conference abstracts _ , 6097 .liu , j. y. and chen , c. h. and sun , y. y. and tsai , h. f. and yen , h. y. and chum , j. and latovika , j. and yang , q. s. and chen , w. s. and wen , s.(2016).43(4 ) , 17591765 lognonn , p. , clevede , e. , and kanamori , h. ( 1998 ) .. , 135:388406 .maruyama , t. and shinagawa , h. ( 2013) .. in _ egu general assembly conference abstracts _ , volume 15 of _ egu general assembly conference abstracts _maruyama , t. and shinagawa , h. ( 2014).infrasonic sounds excited by seismic waves of the 2011 tohoku - oki earthquake as visualized in ionograms ., 119(5):40944108 .maruyama , t. , tsugawa , t. , kato , h. , saito , a. , otsuka , y. , and nishioka , m. ( 2011).ionospheric multiple stratifications and irregularities induced by the 2011 off the pacific coast of tohoku earthquake ., 63:869873 .maruyama , t. , yusupov , k. , and akchurin , a. ( 2016a).interpretation of deformed ionograms induced by vertical ground motion of seismic rayleigh waves and infrasound in the thermosphere ., 34(2):271278 .maruyama , t. , yusupov , k. , and akchurin , a. ( 2016b).ionosonde tracking of infrasound wavefronts in the thermosphere launched by seismic waves after the 2010 m8.8 chile earthquake . , 121(3):26832692 .matsumura , m. , saito , a. , iyemori , t. , shinagawa , h. , tsugawa , t. , otsuka , y. , nishioka , m. , and chen , c. h. ( 2011 ) .. , 63:885889 .munro , g.h ., heisler , l.h .( 1956).,9(3 ) : 343358 nishitani , n. , ogawa , t. , otsuka , y. , hosokawa , k. , and hori , t. ( 2011 ) .. , 63:891896 .occhipinti , g. , kherani , e. , and lognonn , p. ( 2008 ) .. , 173:753765 .perevalova , n. , sankov , v. , astafyeva , e. , and zhupityaeva , a. ( 2014).threshold magnitude for ionospheric \{tec } response to earthquakes . , 108:77 90 .perevalova , n. , shestakov , n. , voeykov , s. v. , takahashi , h. , and guojie , m. ( 2015 ) .ionospheric disturbances in the vicinity of the chelyabinsk meteoroid explosive disruption as inferred from dense gps observations ., 42(16):65356543 .podlesny , a.v . 
,kurkin , v.i ., medvedev , a.v . , ratovsky , k.g .( 2011 ) p.hp1-2 .( http://www.ursi.org/proceedings/procga11/ursi/hp1-2.pdf )podlesnyi , a.v . ,brynko , i.g . ,kurkin , v.i . ,berezovsky , v.a . ,kiselyov , a.m. , petuchov , e.v ., , 4 , 24 - 31 , ( http://vestnik.geospace.ru/php/download.php?id=uplf41dc6e89d6cba19d27b90397187187ac.pdf ) podlesnyi , a.v . , lebedev , v.p . ,ilyin , n.v . ,khakhinov , v.v.(2014a ) , 19(1):063070 podlesny , a.v . ,kurkin , v.i . ,laryunin , o.a . ,pezhemskaya , m.d . ,chistyakova , l.v .( 2014b ) , gp2.27 ( ) .pokhotelov , o.a . , parrot , m. , fedorov , e.n . ,pilipenko , v.a . ,surkov , v.v . ,gladychev , v. a. ( 1995 ) .response of the ionosphere to natural and man - made acoustic sources , 13(11):11971210 ponyatov , a. a. , uryadov , v. p. , ivanov , v. a. , ivanov , d. v. , chernov , a. g. , shumaev , v. v. , cherkashin , yu . n.(1999).,42(4):269277 reinisch , b. w. , galkin , i.a ., khmyrov , g. m. , kozlov , a. v. , bibl , k. , lisysyan , i. a. , cheney , g. p. , huang , x. , kitrosser , d. f. , paznukhov , v. v. , luo , y.,jones , w.,stelmash , s. , hamel , r.,grochmal , j. ( 2009 ) , 44(1):rs0a24 .rolland , l. ( 2011 ) .three - dimensional numerical modeling of tsunami - related internal gravity waves in the hawaiian atmosphere ., 63:847851 .shinagawa , h. , iyemori , t. , saito , s. , and maruyama , t. ( 2007 ) . a numerical simulation of ionospheric and atmospheric variations associated with the sumatra earthquake on december 26 , 2004 ., 59(9):10151026 .shinagawa , h. , tsugawa , t. , matsumura , m. , iyemori , t. , saito , a. , maruyama , t. , jin , h. , nishioka , m. , and otsuka , y. ( 2013 ) .two - dimensional simulation of ionospheric variations in the vicinity of the epicenter of the tohoku - oki earthquake on 11 march 2011 ., 40(19):50095013 .smaryshev , m.d .( 1973).directivity of hydroacoustical antennas(in russian ) .steblov , g. m. , ekstrm , g. , kogan , m. g. , freymueller , j. t. , titkov , n. n. , vasilenko , n. f. , nettles , m. , gabsatarov , y. v. , prytkov , a. s. , frolov , d. i. , and kondratyev , m. n. ( 2014).first geodetic observations of a deep earthquake : the 2013 sea of okhotsk mw 8.3 , 611 km - deep , event . ,41(11):38263832 .. tang , l. , zhang , x. , and li , z. ( 2015 ) .observation of ionospheric disturbances induced by the 2011 tohoku tsunami using far - field gps data in hawaii .ye , l. , lay , t. , kanamori , h. , and koper , k. d. ( 2013).energy release of the 2013 mw 8.3 sea of okhotsk earthquake and deep slab stress heterogeneity ., 341(6152):13801384 .zhan , z. , kanamori , h. , tsai , v. c. , helmberger , d. v. , and wei , s. ( 2014).rupture complexity of the 1994 bolivia and 2013 sea of okhotsk deep earthquakes . , 385:8996 .zherebtsov , g.a .iris / ida seismic network , http://dx.doi.org/doi:10.7914/sn/ii global seismograph network ( gsn - iris / usgs ) ( gsn ) , http://dx.doi.org/doi:10.7914/sn/iu synthetic seismograms network , http://www.fdsn.org/networks/detail/sy/ polan program , http://www.ursi.org/files/commissionwebsites/inag/uag_93/uag_93.html
based on data from the irkutsk fast monostatic chirp ionosonde, we made a statistical analysis of the ionospheric effects of 28 earthquakes that occurred in 2011-2016. these effects are related to surface (rayleigh) seismic waves far from the epicenter. the analysis has shown that nine of these earthquakes were accompanied by vertical midscale ionospheric irregularities (multicusp). to estimate the ionospheric efficiency of the seismic waves we proposed a new index . the index estimates the maximal amplitude of the acoustic shock wave generated by a given spatial distribution of seismic vibrations and is related to the maximal spectral power of the seismic oscillations. based on the analysis of the experimental data, we have shown that earthquake-related multicusp is observed mostly at daytime, [07:00-17:00] lst, for . observations of internal gravity waves by the gps technique in the epicenter vicinity do not show such a daytime dependence. using the 24/05/2013 okhotsk sea earthquake as an example, we demonstrated that deep-focus earthquakes can produce strong multicusp far from the epicenter, although they do not produce a significant gps ionospheric response in the epicenter vicinity. three cases of sporadic e bifurcation in the far epicentral zone were also detected and discussed.
current-day genomes are the result of generations of evolution. one of the marks of evolution is the existence of protein families. these families comprise groups of proteins that share sequence similarity and perform similar biological functions. the most likely explanation for the similarity in sequence and function is that all the proteins in a family evolved from a single common ancestor. the size of a family, defined here as the number of proteins in a family for a particular species, evolves over time through processes that increase the physical size of an organism's genome. genomes in many major lineages are thought to have undergone ancient doublings one or more times. it is thought that genome doubling can provide an evolutionary advantage by permitting redundant genes to evolve rapidly and perform different biological roles, potentially allowing entire pathways to acquire more specific function. at finer scales, chromosomal regions or individual genes may be duplicated or lost through evolution. even without physical loss, protein coding regions may suffer loss of function and cease to be expressed, leading to the existence of pseudogenes. previous studies have detected patterns supporting growth and loss of genetic information. evolutionary processes consisting of duplication and mutation can introduce long-range, power-law correlations in the sequences of individual genes; reports of such correlations in intron-rich regions sparked considerable interest. in contrast to studies of individual gene sequences, we developed a model to explain the evolution of the physical size of a genome. in our model, a speciation rate allowed genome size to increase or decrease, and an extinction rate removed individual species. the ratio of the speciation and extinction rates yielded scaling laws for the distribution of genome sizes: exponential scaling when the amount of genetic material lost or gained was constant, and power-law scaling leading to a self-similar distribution when the change in genetic material was proportional to the existing size. closed-form approximations agreed with simulation results and explained observations reported by others. here we use related models to explore the size of gene families. processes that add and remove genetic material are presented in sec. ii. in the first model, we assume that duplication occurs on the level of individual genes. in the second model, we assume that these events duplicate an entire genome. closed-form solutions are provided for the size distributions of gene families. next, in sec. iii, we present results from analysis of gene families in sequenced genomes. these results rely heavily on the clusters of orthologous groups (cogs) database, which identifies gene families that span eight individual unicellular species including eubacteria, archaebacteria, cyanobacteria, and eukaryotes. we discuss which evolutionary model is most consistent with our observations in sec. iv. for a single organism, let $N_n$ be the number of gene families that contain $n$ genes. the total number of families is $\sum_n N_n$. we describe two models for the increase or decrease of the number of genes in the family. in model i, we assume that each gene in the family evolves independently. each gene duplicates with rate $p$ and each gene is lost with rate $q$.
with each generation, the change in the number of families of size $n$ is \[ \frac{dN_n}{dt} = p\,(n-1)N_{n-1} + q\,(n+1)N_{n+1} - (p+q)\,n N_n . \] after sufficient time, the distribution reaches equilibrium values. detailed balance indicates that the number of families increasing from size $n$ to $n+1$ should equal the number of families decreasing from size $n+1$ to size $n$, \[ p\,n N_n = q\,(n+1) N_{n+1} . \] the resulting expression for the populations is \[ N_n \propto \frac{\alpha^{n}}{n} , \] where we have defined $\alpha$ as $p/q$. alternatively, normalizing by the families with a single member, we have \[ \frac{N_n}{N_1} = \frac{\alpha^{n-1}}{n} . \] in addition to describing dynamics when each gene is duplicated individually, this model can also represent a system in which large genomic regions are duplicated or lost, provided that only one member of the family is present in the duplicated region. if, for example, a single chromosome is duplicated, this model could apply. the populations predicted by model i are shown as black lines in fig. [f:model] for three choices of the parameter $\alpha$: 0.1 (thin black line), 0.3 (medium black line), and 0.9 (thick black line). as the value of $\alpha$ increases, the distribution of families shifts to larger sizes. the shape of the distribution changes from a straight line on the log-log plot at small $n$, characteristic of a power-law distribution, to a curved line at larger $n$, characteristic of the faster decay of an exponential distribution. in model ii, we assume that genome duplication dominates the evolutionary process. each genome can double in size with probability $p$ or be reduced by half with probability $q$. writing the size of a family after $t$ doublings as $n = 2^{t}$, the evolution of $N_t$ at each generation is \[ \frac{dN_t}{dt} = p N_{t-1} + q N_{t+1} - (p+q) N_t . \] again relying on detailed balance, we find that $N_{t+1} = \alpha N_t$, with $\alpha = p/q$ as before. for normalization, we assume that $\sum_{t=0}^{\infty} N_t = 1$, yielding \[ N_t = (1-\alpha)\,\alpha^{t} . \] to change variables from $t$ to $n$, we make an approximation that the discrete values of $t$ and $n$ may be replaced by a continuous distribution. the distribution for $n$ is then $\rho(n) = N(t)\,|dt/dn|$, where $t = \ln n / \ln 2$, giving \[ \rho(n) = \frac{1-\alpha}{\ln 2}\, n^{(\ln \alpha / \ln 2 ) - 1} . \] because we used a continuous distribution to derive this result, the normalization is not exact. the power-law form of the distribution, however, is accurate, and simple summation may be used to define the normalization constant. alternatively, the distribution may be defined relative to the number of families of size 1, or \[ \frac{\rho(n)}{\rho(1)} = n^{(\ln \alpha / \ln 2) - 1} . \] results for model ii are shown as grey lines in fig. [f:model] for three values of $\alpha$: 0.1 (thin grey line), 0.3 (medium grey line), and 0.9 (thick grey line). as these are power-law distributions, they are straight on a log-log plot. the distribution favors larger family sizes as $\alpha$ increases.
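as a quick numerical check of the two limiting behaviours, the sketch below (a minimal python/numpy illustration written for this text, not taken from the original analysis) evaluates the normalized family-size distributions just derived, $N_n/N_1 = \alpha^{n-1}/n$ for model i and $\rho(n)/\rho(1) = n^{(\ln\alpha/\ln 2)-1}$ for model ii, for a few values of $\alpha$; on a log-log plot the model i curves bend over at large $n$ while the model ii curves keep a constant slope.
\begin{verbatim}
import numpy as np

def model_i(n, alpha):
    # individual gene duplication/loss: N_n / N_1 = alpha^(n-1) / n
    return alpha ** (n - 1) / n

def model_ii(n, alpha):
    # whole-genome duplication/halving: rho(n) / rho(1) = n^(ln(alpha)/ln(2) - 1)
    return n ** (np.log(alpha) / np.log(2) - 1.0)

n = np.arange(1, 101)
for alpha in (0.1, 0.3, 0.9):
    f1, f2 = model_i(n, alpha), model_ii(n, alpha)
    # log-log slope between n=1 and n=10: model ii keeps a constant slope,
    # model i decays faster once alpha^(n-1) dominates the 1/n prefactor
    slope_i = (np.log(f1[9]) - np.log(f1[0])) / np.log(10)
    slope_ii = (np.log(f2[9]) - np.log(f2[0])) / np.log(10)
    print(f"alpha={alpha:3.1f}  slope(model i, n<=10)={slope_i:6.2f}  "
          f"slope(model ii)={slope_ii:6.2f}")
\end{verbatim}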
as seen in fig .[ f : all ] , all the species show power - law behavior for as a function of for families of size 10 or smaller .the linear trend indicates that model ii , duplication of the entire genome , is more likely than model i , in which individual genes are duplicated .we explore the linear trend more quantitatively by performing a least - squares fit of the data for each model .the quantity we minimize is the rms error for the log - transformed data , ^ 2},\ ] ] with from eq .[ e : f1 ] or eq .[ e : f2 ] .as noted in the summation , we considered only family sizes with or more ; the total number of family sizes used is .the results of the fit are detailed in table [ t : alpha ] , along with the number of family sizes that contributed to the fit .the model with the smaller rms for the fit is also indicated .as seen in table [ t : alpha ] , model ii ( complete genome duplication ) provides a consistently better fit to the data than does model i ( individual gene duplication ) .in particular , when all of the protein families for a given organism are considered , each of the eight organisms shows a better fit with model ii than with model i. in table [ t : alpha ] the fit values for are also shown for the functional classes defined in the cog database : information storage and processing , cellular processes , metabolism , and poorly characterized .these individual classes are also fit better by model ii than by model i. in e. coli , h. influenzae , h. pylori , m. pnuemoniae , and synechocystis , at least three of the four classes are fit better by model ii ; in m. genitalium , there are not enough protein families for adequate predictions of . only in s. cerevisiaedoes model i appear to provide a slightly better fit to the distribution of family sizes for two classes , information storage and processing and cellular processes . one possible explanation for the better performance of model i for s. cerevisiae is that gene families grow through the duplication of chromosomes , rather than the duplication of individual genes or entire genomes .the distinction between the genome and individual chromosomes is not applicable to the other organisms , which have a single chromosome .a trend evident in table [ t : alpha ] is that for cellular processes ( molecular chaperones , outer membrane , cell wall biogenesis , secretion , motility , inorganic ion transport and metabolism ) is typically larger than for information storage and processing ( translation , ribosomal structure and biogenesis , transcription , replication , repair , recombination ) and for metabolism ( energy production and conversion , carbohydrate metabolism and transport , amino acid metabolism and transport , coenzyme metabolism , lipid metabolism ) .protein families for cellular processes are therefore biased towards larger sizes , while families for information storage and processing and metabolism are biased toward smaller family sizes .this would imply that , in either model , a duplication of cellular process proteins is more likely to be retained than duplications of other functions .this suggests that cells can tolerate changes to cellular process pathways more readily than to other pathways .the relative performance of model i and model ii according to protein family functional class is summarized in table [ t : summary ] . 
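before turning to the class-by-class comparison, a minimal sketch of the least-squares procedure described above is given below; it assumes the observed family counts are available as a dictionary mapping family size to count, anchors each model at the observed number of single-member families (an assumption made for illustration), and fits each model's single parameter $\alpha$ by minimizing the rms error of the log-transformed counts. it is meant only as an illustration of the fitting scheme, not as the original analysis code.
\begin{verbatim}
import numpy as np

def model_i(n, alpha):   # N_n / N_1 for individual gene duplication
    return alpha ** (n - 1) / n

def model_ii(n, alpha):  # N_n / N_1 for whole-genome duplication
    return n ** (np.log(alpha) / np.log(2) - 1.0)

def fit_alpha(counts, model, alphas=np.linspace(0.01, 0.99, 99)):
    """counts: dict {family size n: number of families N_n}, with counts[1] > 0.
    returns (best alpha, rms of log-transformed residuals)."""
    sizes = np.array(sorted(counts))
    obs = np.array([counts[n] for n in sizes], dtype=float)
    best = (None, np.inf)
    for a in alphas:
        pred = counts[1] * model(sizes, a)  # anchor the model at N_1 (assumption)
        rms = np.sqrt(np.mean((np.log(obs) - np.log(pred)) ** 2))
        if rms < best[1]:
            best = (a, rms)
    return best

# toy example with made-up counts, for illustration only
toy = {1: 300, 2: 90, 3: 40, 4: 22, 5: 15, 6: 9, 8: 6, 10: 4}
for name, model in (("model i", model_i), ("model ii", model_ii)):
    a, rms = fit_alpha(toy, model)
    print(f"{name}: alpha={a:.2f}, rms={rms:.3f}")
\end{verbatim}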
when all classes are considered , model ii clearly provides a better explanation of the observed family sizes .when classes are considered separately , model ii provides a better explanation for three classes ( information storage and processing , metabolism , and poorly characterized functions ) , while model i provides a better explanation only for cellular processes .the fits provided by model i and model ii are shown in fig .[ f : fitorg ] for e. coli and s. cerevisiae .the observed family size distributions are shown as points and the best fits as lines , grey for model i and black for model ii .the top pair of panels shows the results when all protein families are considered . for families up to size 10 ,the distributions from both organisms clearly follow the power - law prediction of model ii . for the separate protein classes , the e. coli family sizes continue to follow the power - law prediction of model ii .as mentioned previously for s. cerevisiae , however , the fit to model ii is not good for the storage and processing and cellular processes classes .the size distribution decays much more rapidly than model ii predicts .we have investigated the size distribution of protein families . for a selection of single - celled organisms with sequenced genomes , we find that the number of families with members follows a power - law distribution as a function of .this behavior suggests that evolution increases protein diversity through duplication of entire genomes , balanced occasionally by the loss of large amounts of genetic information .it is less likely that protein diversity is increased through the duplication of individual genes , since this process would not lead to a power - law distribution .the power - law we find is that , where is the number of families of size .the exponent varies from to depending on species . in our theory, this exponent measures the ratio of the rate of genome duplication to the rate of gene loss .the behavior we obtain for all species indicates that the rate of genome duplication , relative to the rate of gene loss , is approximately the same for each species .this points to the ancient origin of the cellular machinery responsible for the duplication of dna .different classes of genes evolve at slightly different rates .families that perform cellular processes tend to be larger than average .supplementing these functions might provide a disproportionate selective advantage .also , the remaining functions ( information storage and processing and metabolism ) could represent core cellular machinery that is relatively standard and requires less variability .it would be interesting to verify whether the same protein family size distributions are observed in multicellular plants and animals .one might expect that genome duplication would be supplanted by chromosome duplication , which would shift the family size distribution from a power law to a steeper , almost exponential decay .some evidence in this direction is already provided with the s. cerevisiae data presented in sec .[ s : results ] . with the c. elegans sequence reported , the d. melanogaster sequence promised within a year , and a rough draft of the h. sapiens genome imminent , this question might soon be answered .m. a. huynen and p. bork , measuring genome evolution , proc .usa 95 : 5849 - 5856 ( 1998 ) .m. pellegrini , e. m. marcotte , m. j. thompson , d. eisenberg , and t. o. 
yeates , assigning protein functions by comparative genome analysis : protein phylogenetic profiles , proc .usa 96 : 42858 ( 1999 ) .wolfe , k. h. , and shields , d. c. 1997 .molecular evidence for an ancient duplication of the entire yeast genome .nature 387 , 108 - 713 .paterson , a. h. , et al .1996 . toward a unified genetic map of higher plants , transcending the monocot - dicot divergence .nature genetics 14 , 380 - 382 .ahn , s. and tanksley , s. d. 1993 .comparative linkage maps of rice and maize genomes .usa 90 , 7980 - 7984 .gaut , b. s. and doebley , j. f. 1997 .dna sequence evidence for the segmental allotetraploid origin of maize .usa 94 , 6809 - 6814 .moore , g. et al . , 1995 .grasses , line up and form a circle .current biology 5 , 737 - 739 .atkin , n. b. and ohno , s. 1967 .dna values of four primitive chordates .chromosoma 23 , 10 - 13 .hinegardner , r. 1968 .evolution of cellular dna content in teleost fishes .american naturalist 102 , 517 - 523 .ohno , s. , wolf , u. , and atkin , n. b. 1968 .evolution from fish to mammals by gene duplication .hereditas 59 , 169 - 187 .t. r. gregory and p. d. herbert , the modulation of dna content : proximate causes and ultimate consequences , genome res . 9 : 31724 ( 1999 ) .t. galitski , a. j. saldanha , c. a. styles , e. s. lander , and g. r. fink , ploidy regulation of gene expression , science 285 : 2514 ( 1999 ) .p. hieter and t. griffiths , polyploidy more is more or less , science 285 : 210211 ( 1999 ) .w. li , spatial spectra in open dynamical systems , europhys .10 : 395400 ( 1989 ) .w. li and k. kaneko , long - range correlation and partial spectrum in a noncoding dna sequence , europhys .17 : 655660 ( 1992 ) .peng , s. v. buldyrev , a. l. goldberger , s. havlin , f. sciortino , m. simons , and h. e. stanley , long - range correlations in nucleotide sequences , nature 356 : 168170 ( 1992 ) .r. f. voss , evolution of long - range fractal correlations and 1/f noise in dna base sequences , phys .68 : 38053808 ( 1992 ) .s. nee , uncorrelated dna walks , nature 357 : 450 ( 1992 ) .v. v. prabhu and j .- m .claverie , correlations in intronless dna , nature 359 : 782 ( 1992 ) .w. li and k. kaneko , dna correlations , nature 360 : 635636 ( 1992 ) . c. a. chatzidimitriou - dreismann and d. 
larhammar , long - range correlations in dna , nature 361 : 2123 ( 1993 ) .cddddrc + & & & & better model + & & rms & & rms & & + + & 0.84 & 0.39 & 0.50 & 0.16 & 17 & ii + information & 0.77 & 0.55 & 0.47 & 0.39 & 6 & ii + cellular processes & 0.84 & 0.20 & 0.66 & 0.12 & 7 & ii + metabolism & 0.81 & 0.32 & 0.53 & 0.17 & 10 & ii + poorly characterized & 0.89 & 0.39 & 0.64 & 0.25 & 8 & ii + + & 0.56 & 0.22 & 0.31 & 0.09 & 8 & ii + information & 0.36 & 0.10 & 0.25 & 0.12 & 4 & i + cellular processes & 0.73 & 0.14 & 0.54 & 0.03 & 4 & ii + metabolism & 0.53 & 0.16 & 0.34 & 0.04 & 5 & ii + poorly characterized & 0.56 & 0.10 & 0.41 & 0.05 & 5 & ii + + & 0.54 & 0.32 & 0.30 & 0.13 & 7 & ii + information & 0.47 & 0.25 & 0.30 & 0.14 & 5 & ii + cellular processes & 0.48 & 0.01 & 0.38 & 0.09 & 4 & i + metabolism & 0.33 & 0.15 & 0.26 & 0.08 & 3 & ii + poorly characterized & 0.73 & 0.39 & 0.49 & 0.25 & 5 & ii + + & 0.21 & 0.14 & 0.15 & 0.05 & 3 & ii + information & 0.11 & 0.00 & 0.11 & 0.00 & 2 & tie + cellular processes & 3.12 & 0.00 & 3.12 & 0.00 & 1 & tie + metabolism & 0.12 & 0.00 & 0.12 & 0.00 & 2 & tie + poorly characterized & 0.39 & 0.08 & 0.32 & 0.03 & 3 & ii + + & 0.75 & 0.57 & 0.41 & 0.31 & 7 & ii + information & 0.54 & 0.18 & 0.42 & 0.13 & 4 & ii + cellular processes & 0.70 & 0.03 & 0.62 & 0.07 & 4 & i + metabolism & 0.53 & 0.21 & 0.34 & 0.08 & 5 & ii + poorly characterized & 0.64 & 0.07 & 0.56 & 0.11 & 4 & i + + & 0.26 & 0.18 & 0.19 & 0.10 & 3 & ii + information & 0.20 & 0.09 & 0.15 & 0.00 & 3 & ii + cellular processes & 0.42 & 0.00 & 0.33 & 0.00 & 2 & tie + metabolism & 0.23 & 0.22 & 0.17 & 0.14 & 3 & ii + poorly characterized & 0.39 & 0.08 & 0.32 & 0.03 & 3 & ii + + & 0.73 & 0.37 & 0.42 & 0.14 & 10 & ii + information & 0.49 & 0.18 & 0.33 & 0.09 & 5 & ii + cellular processes & 0.83 & 0.08 & 0.70 & 0.12 & 6 & i + metabolism & 0.54 & 0.23 & 0.32 & 0.08 & 6 & ii + poorly characterized & 0.77 & 0.27 & 0.55 & 0.15 & 8 & ii + + & 0.82 & 0.25 & 0.57 & 0.17 & 12 & ii + information & 0.84 & 0.20 & 0.76 & 0.22 & 6 & i + cellular processes & 0.95 & 0.16 & 0.92 & 0.16 & 5 & i + metabolism & 0.72 & 0.14 & 0.50 & 0.10 & 9 & ii + poorly characterized & 0.95 & 0.13 & 0.85 & 0.09 & 8 & ii + +
current - day genomes bear the mark of the evolutionary processes . one of the strongest indications is the sequence homology among families of proteins that perform similar biological functions in different species . the number of proteins in a family can grow over time as genetic information is duplicated through evolution . we explore how evolution directs the size distribution of these families . theoretical predictions for family sizes are obtained from two models , one in which individual genes duplicate and a second in which the entire genome duplicates . predictions from these models are compared with the family size distributions for several organisms whose complete genome sequence is known . we find that protein family size distributions in nature follow a power - law distribution . comparing these results to the model systems , we conclude that genome duplication is the dominant mechanism leading to increased genetic material in the species considered . 8.5 in corresponding author joel s. bader , curagen , 555 long wharf drive , new haven , ct , 06511 . tel . ( 203)401 - 3330x236 ; fax ( 203)401 - 3351 ; email jsbader.com
reinforcement learning ( rl ) provides a neuropsychological and cognitive science perspective to animal behavior and sequential decision making .recent studies in cognitive science have also demonstrated analogies between the dopaminergic neurons in brains and temporal difference ( td ) reinforcement learning algorithms .other than the nature derived inspiration , several successful implementations of reinforcement learning ( rl ) in controlling dynamic robotic systems for manipulation , locomotion and autonomous driving , , have proven the previously theoretical concept to be applicable in real time control of physical systems . many of these methods use specialized policy structures to represent policies in order to put a cap on the number of iterations that are needed for optimizing the behaviour . though efficient there is a loss of generality in adopting such an approach as it constricts the policy space to some specific trajectories .thus , non - linear function approximators like neural networks are used to parametrize the policy .this removes the requirement of using hand engineered policy representations and human supplied demonstrations to initialized them .moreover , the use of higher number of parameters also theoretically ensures learning of complex behaviours that would nt have been possible with linear man made policies .+ another important development in the field of rl has been indirectly borrowed from enormous successes of deep convolutional neural networks(cnn ) in image feature extraction .a direct implication of cnns in reinforcement learning was the use of image pixels as states instead of joint parameters , which was widely in practice in rl landscape .use of such an expressive parametrization also enabled learning of value function and policies that were previously deemed complicated .the paper by riedmiller demonstrated that neural networks can effectively be used as q - function approximators using neural fitted q - iteration algorithm .later introduction of convolutional networks by mnih et al . 
turned neural networks based q learning as a base for drl .some of the ideas that were introduced like mini batch training and concept of target networks were pivotal to the success of non - linear rl methods .but , the initial algorithms were used to play classic atari 2600 games with pixels as inputs and discrete actions as policy .the result were extraordinary with the artificial agent getting scores that were higher than human level performance and other model based learning methods .attempts have been made to use deep q - learning ( dqn ) for high dimensional robotics tasks but with a very little success .this is essentially because of the fact that most of the physical control tasks have high dimensional action spaces with continuous real valued action values .this posed a problem for introducing dqns in manipulation tasks as they act as value function approximators and an additional iterative optimization process is necessary to use them for continuous spaces .the algorithms falling under this class are categorized into a group called discrete action space algorithms(das ) as they are efficient only in discrete action domains .+ another approach to parametrization of rl policies is to encode the policy directly and search for optimal solutions in the policy space .these methods known as policy search methods are popular as it gives an end - to - end connection between the states and feasible actions in the agent environment .the parameters can then be perturbed in order to optimize the performance output .the advantage of this process over the earlier value approximation methods is that the policies are integrated over both action and state space , thus the search is more comprehensive than q - learning . andit also solves the discrete action problem as the output policy , is a stochastic distribution over the action given a particular state .thus , the policy representation provides probabilities over over action in a continuous space .this class of continuous action algorithms are grouped into continuous action space algorithms(cas ) .they include policy gradient and policy iteration algorithms that encode the policy directly and search over entire policy space . initially developed and experimented on low dimensional state spaces , cas algorithms have been integrated into cnn architecture in algorithms like deep deterministic policy gradients ( ddpg ) .+ the cas rl algorithms can further be divided into two subcategories , stochastic continuous action space(scas ) and deterministic continuous action space(dcas ) algorithms .the main difference between both of the methods is basically the sample complexity .even though stochastic policy gradient methods provide a better coverage of the policy search space , they require a large number of training samples in order to learn the policy effectively .this is quite infeasible in robotic applications as exploration and policy evaluation comes at a price in such domains .several methods like natural policy gradients and trust region policy gradients were developed in order to make policy search effective by adding additional constraints on the search process to restrict the agent to explore only promising regions .but , the discovery of deterministic policy gradients has led to an easier method whose performance surpasses stochastic policy algorithms as proven empirically by silver et al . 
+ the most important contribution of this paper is the organisation of the assortment of drl algorithms on the basis of their treatment of action spaces and policy representations. the drl methodologies currently in the literature are classified into the groups das, cas, scas and dcas, whose details have been described above. the following sections include a background on the evolution of reinforcement learning, preliminaries laying a foundation for understanding the algorithms, and a description of some of the basic algorithms encompassing drl. experiments and real-time implementations associated with these methods are also described to give an insight into the practical complexity of implementing these algorithms on physical robots and in simulation. all of the reinforcement learning methods studied in this paper are basically control problems in which an agent has to act in a stochastic environment by choosing an action in a sequential manner over several time steps, with the intention of maximising the cumulative reward. the problem is modelled as a _ markov decision process _ (mdp) which comprises a state space $\mathcal{S}$, an action space $\mathcal{A}$, an initial state distribution with density $p_1(s_1)$, a stationary transition dynamics model with density $p(s_{t+1}|s_t,a_t)$ that satisfies the markov property $p(s_{t+1}|s_1,a_1,\ldots,s_t,a_t)=p(s_{t+1}|s_t,a_t)$ for any trajectory in the state-action space, and a reward function $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$. a policy can be defined as the mapping of states to action distributions, and the objective of an mdp is to find the optimal policy. generally a policy is stochastic and is denoted by $\pi_{\theta}(a_t|s_t)$, where $\pi_{\theta}(a_t|s_t)$ is the probability distribution over the action given the state and $\theta \in \mathbb{R}^{n}$ is a vector of parameters that define the policy. a deterministic policy, on the other hand, is denoted by $\mu_{\theta}(s_t)$ and is a direct mapping $\mu_{\theta}:\mathcal{S}\rightarrow\mathcal{A}$. + an agent uses the policy to explore the environment and generate trajectories of states, rewards and actions, $h_{1:T}=(s_1,a_1,r_1,\ldots,s_T,a_T,r_T)$. the total return or performance is determined by calculating the total discounted reward from time step $t$ onwards, $r_t^{\gamma}=\sum_{k=t}^{\infty}\gamma^{k-t} r(s_k,a_k)$ with $0<\gamma<1$. the value function of a particular state is defined as the expected total discounted reward if an agent were to start from that particular state and generate trajectories thereafter, \[ V^{\pi}(s)=\mathbb{E}\left[ r_1^{\gamma} \mid S_1=s ; \pi \right]. \] the action-value function, on the other hand, is defined as the expected discounted reward if the agent takes an action $a$ from a state $s$ and follows the policy distribution thereafter, \[ Q^{\pi}(s,a)=\mathbb{E}\left[ r_1^{\gamma} \mid S_1=s , A_1=a ; \pi \right]. \] the agent's overall goal is to obtain a policy that results in maximisation of the cumulative discounted reward from the start state. this is denoted by finding the appropriate $\theta$ for the performance objective $J(\pi_{\theta})=\mathbb{E}\left[ r_1^{\gamma} \mid \pi \right]$. + the density of the state $s'$ after transitioning for $t$ time steps from an initial state $s$ is given by $p(s \rightarrow s', t, \pi)$. the discounted state distribution is then given by $\rho^{\pi}(s') := \int_{\mathcal{S}} \sum_{t=1}^{\infty} \gamma^{t-1} p_1(s)\, p(s \rightarrow s', t, \pi)\, ds$. the performance objective can then be represented as a unified expectation, \[ J(\pi_{\theta})=\mathbb{E}_{s\sim\rho^{\pi},\, a\sim\pi_{\theta}}\left[ r(s,a) \right]. \]
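to make the definitions above concrete, the following small python sketch (an illustration written for this survey-style text, not code from any of the cited papers) computes the discounted return $r_t^{\gamma}$ of a sampled trajectory and a monte carlo estimate of $V^{\pi}(s_1)$ by averaging returns over several rollouts; the environment interface `env.reset()` / `env.step(action)` and the `policy(state)` callable are hypothetical placeholders assumed for the example.
\begin{verbatim}
import numpy as np

def discounted_return(rewards, gamma=0.99, t=0):
    # r_t^gamma = sum_{k >= t} gamma^(k - t) * r_k
    return sum(gamma ** (k - t) * r for k, r in enumerate(rewards) if k >= t)

def rollout(env, policy, max_steps=200):
    # generate one trajectory (s_1, a_1, r_1, ...) under the given policy
    state, rewards = env.reset(), []
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)  # assumed (state, reward, done) interface
        rewards.append(reward)
        if done:
            break
    return rewards

def monte_carlo_value(env, policy, episodes=100, gamma=0.99):
    # V^pi(s_1): expected discounted return from the start state
    returns = [discounted_return(rollout(env, policy), gamma) for _ in range(episodes)]
    return np.mean(returns)
\end{verbatim}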
early reinforcement learning (rl) algorithms for prediction and control were focused on refining optimal policy evaluation techniques and reducing the computational complexity of the approaches. this led to the emergence of exploration vs. exploitation techniques, on-policy and off-policy approaches, model-free and model-based methods, and various pac (probably approximately correct) methods. although the algorithms were computationally feasible and showed convergence to optimal policies in polynomial time, they posed a major hindrance when applied to generate policies for high dimensional control scenarios like robotic manipulation. two techniques stand out from the newly developed rl methodologies, namely function approximation and policy search. the philosophy of these two approaches is to parameterize the action-value function and the policy function, respectively. further, the gradient of the policy value is taken to search for the optimal policy that results in a global maximum of expected rewards. moreover, due to the hyper-dimensional state space and continuous action space the robot operates in, policy search methods are the most viable and possibly the only methods considered suitable for robotics control. applications of rl in robotics have included locomotion, manipulation and autonomous vehicle control. most real-world tasks are considered episodic, and it is also hard to specify a concise reward function for a robotic task. this problem is tackled by the use of a technique called learning by demonstration, or apprenticeship learning. one of the methods to solve the uncertain reward problem is inverse reinforcement learning, where the reward function is updated continuously and an appropriate policy is generated in the end. another effective method to model the policies is the use of motor policies to represent a stochastic policy, which is inspired by the works of kober and peters. they devised an expectation maximization (em) based algorithm called policy learning by weighing exploration with the returns (power). when learning motor primitives, they turn the deterministic mean policy into a stochastic policy using additive exploration in order to make model-free reinforcement learning possible.
here, the motor primitives are derived from the concept of dynamic motor primitives ( dmps ) that describe movement as a set of differential equations such that any external perturbation is accommodated without losing the motion pattern .certain other approaches like guided policy search also introduced more versatile policy representations like differential dynamic programming ( ddp ) .these policy have been used for generating guiding samples to speed up the learning process in non linear policies .this gives more flexibility and generalization than earlier structured policy approaches .but , even though such hybrid model based and specialized policy methods work well in robots , there has always been an interest towards learning policies end - to - end from visual stimulus .thus , convolutional architectures have been introduced into the domain of rl and motor control , known as visuo - motor control policy networks .many of the rl methods demonstrated on physical robotic systems have used relatively low - dimensional policy representations , typically with under one hundred parameters , due to the difficulty of efficiently optimizing high - dimensional policy parameter vectors .but the paper by mnih et al . introduced an effective approach to combine larger policy parameterizations by combining deep learning and reinforcement learning .this concept of generating efficient non - linear representations is transferred into robotic tasks of grasping and continuous servoing in some recent research carried out by levine et al. and kober et al .end - to - end learning of visuo - motor policies is made possible with such an approach which in turn learns the features form the observations that are relevant for the specific task .one of the problems that was encountered with neural network learning of policies was the convergence of some weights to infinity when trained with similar instances of input observations .solving of this difficulty using experience replay methods constituting randomization of the episodes gave the necessary boost to rl in real life control problems .the current state of the art in deep - reinforcement learning includes the algorithms employed by google deepmind research namely dqn ( deep q network ) for discrete actions and deep deterministic policy gradients ( ddpg ) for continuous action spaces .dqn is a simple value approximation method while ddpg uses a underlying actor - critic framework for policy evaluation .efficacy of both of these methods have been demonstrated empirically for performing complex robotic manipulation tasks like door opening and ball catching .the deep reinforcement learning algorithms prevailing currently are structured according to the topology in fig .the initial problem of planning in continuous and high dimensional state spaces can be considered solved due to the extensive use of neural networks with large number of parameters for function / policy modelling .but , the problem at hand now is the mapping of these continuous states to high - dimensional continuous action spaces .the present concentration in the drl community based on this issue and hence , it seems quite apt to organise the various learning approaches based on this ground .moreover it also demonstrates the capabilities and limitations of the prevalent algorithms quite clearly .+ the methods are divided into two sections namely , discrete action space(das ) approaches and continuous action space(cas ) approaches .further , cas methods are divided into stochastic continuous 
action space (scas) and deterministic continuous action space (dcas) methods. the various algorithms that come under the purview of das are deep q-networks, duelling networks, the normalized advantage function, and related function approximation approaches to decision making. cas mostly includes policy search approaches that parametrize the policy directly and optimize it using evaluation- and gradient-based approaches. cas is further branched into scas methods, where cnns are used to estimate a stochastic policy, and dcas methods, which predict deterministic policies. even though this demarcation provides a near comprehensive description of the drl methods, it misses out on several other rl approaches like likelihood ratio maximisation, black box methods and model-based methods which are not directly related to drl. the dqn architecture was the first successful integration of deep learning with the q-learning framework. q-learning forms the base of most of the model-free rl algorithms. it involves exploration of the environment using a behaviour policy and learning the q-function for the possible state-action pairs using the experience gathered from that exploration. the following equation describes q-learning, \[ Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \left[ r_t + \gamma \max_{a} Q(s_{t+1},a) - Q(s_t,a_t) \right], \] where $\alpha$ is the learning rate and the observations obtained after exploration are $(s_t, a_t, r_t, s_{t+1})$, with $a_t$ the action taken, $r_t$ the reward received and $s_{t+1}$ the next observed state. the only difference between naive q-learning and dqn is the use of cnns as function approximators instead of linear approximators. the use of hierarchical networks enables the use of continuous high dimensional images as states, from which the optimal action-value function is estimated. rl was considered to be unstable when using non-linear approximators such as a neural network, because of the correlations present in the sequence of observations and the correlations between the action-values and the target values. in order to solve this, mnih et al. devised a method of training the q-network on stored, uniformly re-sampled transitions, called experience replay. here, the experience is stored in a pool and mini-batches of experiences are sampled uniformly during training. this is then used to optimize the loss function \[ L(\theta) = \mathbb{E}_{(s,a,r,s') \sim U(D)}\left[ \left( r + \gamma \max_{a'} Q(s',a';\theta^{-}) - Q(s,a;\theta) \right)^{2} \right]. \] fig. 2 describes the architecture of the q-network, which consists of 3 convolutional layers with 32 filters of size 8x8 (stride 4), 64 filters of size 4x4 (stride 2) and 64 filters of size 3x3 (stride 1). the final layers are fully connected: a hidden layer with 512 neurons followed by a discrete output with one unit per action considered. the activations chosen are rectified linear units. the second important contribution of dqn, other than the replay buffer, was the use of a target network for generating the target values in the network's loss function. this helps to reduce oscillations during training and leads to easier convergence. the target network is updated with the online q-network after a specific number of time steps. + execution of this method is limited to agents requiring a discrete action space, but some early works have embedded the dqn technique to learn optimal actions from visual stimulus. zhang et al. have utilized the exact same architecture to learn optimal control policies for a baxter robot arm. instead of controlling the entire 7 dof of the robot arm, only the 4 dof shown in the fig. 3(a) simulation are controlled. the actions are discretized into nine distinct outputs, which include going up, going down or staying put in increments of 0.02 rad. after training, the network was used to control a real robotic arm with marginal success, as it was prone to discrepancies in the input image. moreover, training in simulation and transferring the control system to real-time robots proved to be detrimental for safety and performance.
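the replay buffer and target-network machinery described above can be summarised in a few lines; the sketch below is a minimal pytorch-style illustration written for this text (not the original dqn code), assuming `q_net` and `target_net` are any `torch.nn.Module` mapping a batch of states to per-action q-values and that stored transitions hold tensor states, integer actions, scalar rewards and done flags.
\begin{verbatim}
import random
from collections import deque
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """uniformly sampled pool of (s, a, r, s', done) transitions."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        s, a, r, s_next, done = zip(*random.sample(self.buffer, batch_size))
        return (torch.stack(s), torch.tensor(a), torch.tensor(r, dtype=torch.float32),
                torch.stack(s_next), torch.tensor(done, dtype=torch.float32))

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch
    # Q(s, a; theta) for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # bootstrapped target uses the slowly-updated target network theta^-
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)

# periodically copy the online weights into the target network:
# target_net.load_state_dict(q_net.state_dict())
\end{verbatim}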
+ double deep q-networks (ddqn) are an improved version of dqn first introduced by van hasselt et al. in q-learning and dqn, the max operator uses the same values both to select and to evaluate an action, which in turn gives overestimated value estimates. in order to mitigate this, ddqn uses the target \[ y_t = r_t + \gamma\, Q\!\left(s_{t+1}, \arg\max_{a} Q(s_{t+1},a;\theta_t);\, \theta_t^{-}\right). \] the duelling architecture is a model-free algorithm developed by wang et al. that draws its inspiration from residual rl and the concept of advantage learning and updating by baird. in advantage learning, instead of estimating the action-value function, an advantage function is calculated, defined as the rate of increase of reinforcement when a particular action is taken. the prime importance of advantage learning is that the advantage values have a higher variance, which leads to easier convergence. moreover, the policy does not change discontinuously with changing values. the duelling architecture maintains both a state-value stream $V(s)$ and an advantage stream $A(s,a)$ within a single deep model, and a simple output operation combines both outputs to recover the value $Q(s,a)$. as the output is the same as in ddqn and dqn, this network can be trained with any value iteration method. + consider the duelling network described in fig. 4, where one stream outputs $V(s;\theta,\beta)$ and the other $A(s,a;\theta,\alpha)$; here $\theta$ denotes the convolutional network parameters and $\alpha$, $\beta$ the parameters of the two streams. the last module is implemented using the forward mapping \[ Q(s,a;\theta,\alpha,\beta) = V(s;\theta,\beta) + \left( A(s,a;\theta,\alpha) - \frac{1}{|\mathcal{A}|}\sum_{a'} A(s,a';\theta,\alpha) \right). \] the architecture was used to train an artificial agent to learn the 57 games in the atari arcade learning environment from raw pixel observations. the final acquired rewards were compared with human performance and with dqn networks. duelling networks performed 75% better than the naive q-networks, as reported in the paper. for discrete-action robotic tasks the duelling architecture can be used for better performance, though a concrete application is missing from the literature.
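the aggregation step above is only a few lines in practice; the following sketch (a hedged illustration written for this text, not the authors' implementation) shows a duelling head in pytorch that combines a value stream and an advantage stream into q-values, assuming some upstream convolutional feature extractor has already produced a flat feature vector.
\begin{verbatim}
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """combines V(s) and A(s, a) streams into Q(s, a)."""
    def __init__(self, feature_dim, num_actions, hidden=512):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_actions))

    def forward(self, features):
        v = self.value(features)                 # shape [batch, 1]
        a = self.advantage(features)             # shape [batch, num_actions]
        # subtract the mean advantage so V and A remain identifiable
        return v + a - a.mean(dim=1, keepdim=True)

# example: q_values = DuelingHead(feature_dim=3136, num_actions=6)(conv_features)
\end{verbatim}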
gu et al. proposed a model-free approach that uses q-learning to plan in continuous action spaces with deep neural networks, which they refer to as normalized advantage functions (naf). the idea behind naf is to describe the q-function in a way such that its maximum over actions can be obtained easily and analytically during the q-learning update. the inherent processes are equivalent to those of duelling networks, as a separate value function and advantage term are estimated. the difference is that the advantage in this case is parametrized as a quadratic function of non-linear features of the state, \[ A(s,a;\theta^{A}) = -\tfrac{1}{2}\,\big(a - \mu(s;\theta^{\mu})\big)^{\top} P(s;\theta^{P})\, \big(a - \mu(s;\theta^{\mu})\big), \] where $P(s;\theta^{P}) = L(s;\theta^{P})\,L(s;\theta^{P})^{\top}$ is a state-dependent, positive-definite square matrix parametrized by $\theta^{P}$, and $L(s;\theta^{P})$ is a lower-triangular matrix whose entries come from the linear activations of the neural network. the rest of the network architecture is similar to that of the dqn by mnih et al. the paper also explored the use of a hybrid model-based method by generating imagination rollouts from a fitted dynamics model. this incorporated synthetic experience data from the fitted local linear feedback controllers into the replay buffer of the q-learning exploration. + the algorithm was tested in several robotic environments, as shown in fig. 5. the environments include the mujoco simulator tasks from todorov et al., among them 3-dof robotic manipulation tasks where an arm is rewarded based on the distance between the end effector and the object to be grasped, a six-joint 2d swimmer, and a four-legged ant. policies learnt with this method showed more precise completion of tasks compared to deep policy gradient methods. stochastic policy gradient methods parametrize the policy directly rather than trying to optimize the value functions. these are one of the most popular classes of continuous action rl algorithms. the central idea behind these algorithms is to adjust the parameters of the policy in the direction of the gradient of the performance, i.e. $\theta \leftarrow \theta + \alpha \nabla_{\theta} J(\pi_{\theta})$. the fundamental theorem underpinning these algorithms is the stochastic policy gradient theorem, \[ \nabla_{\theta} J(\pi_{\theta}) = \mathbb{E}_{s \sim \rho^{\pi},\, a \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a|s)\, Q^{\pi}(s,a) \right]. \] the interesting aspect of this theorem is that even though the state distribution depends on the policy parameters, the policy gradient does not depend on the gradient of the state distribution. one of the issues these algorithms have to address, however, is the estimation of the function $Q^{\pi}(s,a)$ appearing in the above equation. even though policy gradient algorithms provide an end-to-end method for policy search, they are rarely used in robot policy optimization tasks because of their high sample complexity. policy gradients use an on-policy exploration policy, and as a result they need a large amount of training data, which is infeasible for robots. the figure above depicts a stochastic policy gradient network used by levine et al. for autonomous grasping of objects in cluttered environments. the input is a monocular image showing the objects and the robot end effector, with the robot actions fed into the 7th layer of the deep network. the output is the probability distribution over the action given the particular state. the network takes 800,000 labelled images to train, which gives a clear indication of the sample complexity of scas methods. actor-critic methods are widely used architectures that are again based on the policy gradient theorem. as seen from the policy gradient equation, the term $Q^{\pi}(s,a)$ is not directly available and needs to be estimated; hence, the critic network estimates this value in order to compute the derivatives needed to update the actor network.
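as an illustration of the score-function form of the gradient above, the sketch below (written for this text, not taken from the cited papers) computes a monte carlo policy gradient loss for a diagonal gaussian policy in pytorch, using a critic's estimate of $Q^{\pi}(s,a)$; the `critic` module and the batch of states are assumed to come from elsewhere and are hypothetical placeholders.
\begin{verbatim}
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """diagonal gaussian policy pi_theta(a|s) for continuous actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def dist(self, states):
        return torch.distributions.Normal(self.mean(states), self.log_std.exp())

def policy_gradient_loss(policy, critic, states):
    # sample a ~ pi_theta(.|s); the gradient of this loss is the negative
    # of the score-function policy gradient E[grad log pi * Q]
    dist = policy.dist(states)
    actions = dist.sample()                             # no gradient through sampling
    log_prob = dist.log_prob(actions).sum(dim=-1)       # log pi_theta(a|s)
    with torch.no_grad():
        q_values = critic(states, actions).squeeze(-1)  # critic's Q estimate
    return -(log_prob * q_values).mean()
\end{verbatim}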
trpo is a policy optimization algorithm that restricts the search space of the policy by applying constraints on the output policy distributions. this is done by penalizing the parameter update with a kl-divergence constraint between the old and new policies, $\mathbb{E}_{s}\!\left[ D_{\mathrm{KL}}\!\big(\pi_{\theta_{\mathrm{old}}}(\cdot|s) \,\|\, \pi_{\theta}(\cdot|s)\big) \right] \le \delta$. intuitively this constraint does not let large-scale changes occur in the policy distribution and hence helps in early convergence of the network. the above figure depicts the networks that were used to control the swimmer and hopper tasks in mujoco environments. the input state space consisted of joint angles and robot kinematics, and the rewards were linear functions. the deterministic policy gradient algorithm (dpg) is derived from its stochastic policy gradient counterpart and depends on a similar deterministic policy gradient theorem. in continuous action spaces, greedy policy improvement becomes problematic, as it needs a global optimization over actions at every policy improvement step. as a result it is more computationally tractable to update the policy parameters in the direction of the gradient of the q-function, \[ \theta^{k+1} = \theta^{k} + \alpha\, \mathbb{E}_{s \sim \rho^{\mu^{k}}}\!\left[ \nabla_{\theta} Q^{\mu^{k}}\!\big(s, \mu_{\theta}(s)\big) \right], \] where $\mu_{\theta}$ is the deterministic policy, $\alpha$ is the learning rate and $\theta$ are the policy parameters. the chain rule can be applied to the above equation in order to get the deterministic policy gradient \[ \nabla_{\theta} J(\mu_{\theta}) = \mathbb{E}_{s \sim \rho^{\mu}}\!\left[ \nabla_{\theta}\, \mu_{\theta}(s)\, \nabla_{a} Q^{\mu}(s,a)\big|_{a=\mu_{\theta}(s)} \right]. \] the above update rule can be incorporated into a deep neural network architecture where the policy parameters are updated using stochastic gradient ascent. to realise this, an actor-critic method is necessary. the critic estimates the action-value function while the actor derives its gradients from the critic to update its parameters: the gradient with respect to the policy parameters is the product of the gradient of the q-value with respect to the action and the gradient of the action with respect to the policy parameters. fig. 8 shows the deterministic actor-critic network. this is also the basis of ddpg (the deep deterministic policy gradient algorithm), which performs better than the other continuous action algorithms. methods such as naf and ddpg have been used for learning complex robotic manipulation tasks in real time. the authors trained a 7-dof jaco arm for reaching and door opening tasks without any policy initializations or demonstrations. they used deep network architectures with a 20-dimensional state space consisting of the joint angles, velocities and the end effector pose. the reward function for the reaching task was the distance between the end effector and the object, whereas for door opening the reward was the sum of the distance to the door knob and the degrees of rotation of the knob. another significant contribution of this paper was the use of asynchronous learning by parallelizing the data collection process. it was shown that using multiple robots reduces the training time by a factor of roughly the number of robots.
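the deterministic actor update above amounts to ascending the critic's value of the actor's own action; the sketch below is a minimal, hedged pytorch illustration of that step (not the original ddpg code), assuming `actor` maps states to actions and `critic` maps state-action pairs to scalar q-values.
\begin{verbatim}
import torch

def ddpg_actor_update(actor, critic, actor_optimizer, states):
    """one deterministic policy gradient step: maximize Q(s, mu_theta(s))."""
    actions = actor(states)                  # a = mu_theta(s), differentiable
    # minimizing -Q pushes theta along grad_a Q * grad_theta mu (chain rule)
    actor_loss = -critic(states, actions).mean()
    actor_optimizer.zero_grad()
    actor_loss.backward()                    # gradients flow through the critic into the actor
    actor_optimizer.step()
    return actor_loss.item()

def soft_update(target_net, online_net, tau=0.005):
    """polyak averaging of target network parameters, as used in ddpg."""
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), online_net.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)
\end{verbatim}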
algorithmic ideas, theories and implementation details of several deep reinforcement learning algorithms have been delineated in detail. it can be concluded that for the purpose of robotic manipulation, continuous action domain algorithms are the most fruitful and applicable. further, it can be observed that there is a trend towards exploration of sample-efficient and time-efficient algorithms, now that both continuous state and action space problems have largely been solved. breakthroughs in these domains will have significant impact in the field of robot learning. + also, as demonstrated by the current state of the art in drl, the approaches fail to handle complex policies. a reason could be that complicated policies require more samples to learn and an even more sophisticated reward function. this observation highlights a void in rl in robotics: there is a need to learn highly complicated reward functions and methods to represent highly skilled behaviours. this area of inverse reinforcement learning needs to be paid more attention while learning policies using drl; after all, the complexity of the reward function is proportional to the policy complexity. + reinforcement learning is an evolved form of the cognitive architecture soar. there is a need to reconnect the new drl approaches to their roots in cognitive science. the problems in drl might find extremely useful insights from theories and empirical evidence in cognitive psychology. + one of the important drawbacks of drl algorithms and visuo-motor architectures is the lack of capability for transfer learning. it is difficult to transfer skills and use the knowledge of already learnt policies to learn even more complicated policies. a mechanism needs to be developed so that policies do not have to be learnt from scratch, but can be inherited. + many problems with temporally spread out rewards lead to the credit assignment problem in rl. thus, the reward structure too needs to be redesigned. there have been several works on incorporating intrinsic motivation in reinforcement learning as a method to induce temporal abstractions in agents. these setups, known as semi-markov decision processes, can be used to learn hierarchical planning actions by learning step by step about the task at hand, just as a human does. + another important aspect of drl that has not been touched upon in the main body is an approach known as guided policy search (gps). this is because of its incipient stage in drl currently, but the approach holds significant potential for learning robotic tasks with minimal trials. the central idea behind the algorithm is to mix model-based and model-free algorithms and use linear models to generate samples in order to guide the learning process. this seems like a valid assumption, as humans and animals do not always learn actions from scratch, but take advantage of already well-developed models of their body and of physics. mnih, v., kavukcuoglu, k., silver, d., rusu, a.a., veness, j., bellemare, m.g., graves, a., riedmiller, m., fidjeland, a.k., ostrovski, g. and petersen, s., 2015. human-level control through deep reinforcement learning. nature, 518(7540), pp. 529-533. riedmiller, m., 2005, october. neural fitted q iteration first experiences with a data efficient neural reinforcement learning method. in european conference on machine learning (pp. 317-328). springer berlin heidelberg.
the focus of this work is to enumerate the various approaches and algorithms that center around the application of reinforcement learning to robotic manipulation tasks. earlier methods utilized specialized policy representations and human demonstrations to constrict the policy. such methods worked well with the continuous state and policy space of robots but failed to come up with generalized policies. subsequently, high dimensional non-linear function approximators like neural networks have been used to learn policies from scratch. several novel and recent approaches have also embedded the control policy with efficient perceptual representations using deep learning. this has led to the emergence of a new branch of dynamic robot control systems called deep reinforcement learning (drl). this work embodies a survey of the most recent algorithms, architectures and their implementations in simulations and on real world robotic platforms. the gamut of drl architectures is partitioned into two different branches, namely discrete action space algorithms (das) and continuous action space algorithms (cas). further, the cas algorithms are divided into stochastic continuous action space (scas) and deterministic continuous action space (dcas) algorithms. along with elucidating an organisation of the drl algorithms, this work also presents some of the state of the art applications of these approaches to robotic manipulation tasks.
forecasting from time series data necessarily involves an attempt to understand uncertainty; volatility, or the standard deviation, is a key measure of this uncertainty and is found to be time-varying in most financial time series. the seminal work of engle, which first treated volatility as a process rather than just a number to estimate, led to tremendous efforts in devising dynamical volatility models in the last two decades. these are of great importance in a variety of financial transactions including option pricing, portfolio and risk management. excess volatility (well beyond what can be described by a simple gaussian process) and the associated phenomenon of clustering are believed to be the key factors underlying many empirical statistical properties of asset prices, characterized by a few key "stylized facts" described later. a good measure of volatility clustering (roughly speaking, large and small changes in asset prices are often followed by large and small changes respectively) is thus important for understanding financial time series and for constructing and validating a good volatility model. the most popular characterization of volatility clustering is the correlation function of the instantaneous volatilities evaluated at two different times, which shows persistence up to a time scale of more than a month. it has also been established that there is a link between asset price volatility clustering and persistence in trading activity (for an extended empirical study on this, see ref. ). however, the underlying market mechanism for volatility clustering is not clear. the aim of our paper is not to elucidate the mechanism for volatility clustering, but to introduce a more direct measure of it. specifically, we propose that the _ conditional _ probability distribution of asset returns over a period (given the return, , in the previous time period) can be fruitfully used to characterize clustering. this is a direct measure based on the return over a time lag instead of the instantaneous volatility, and we believe it is more relevant to volatility forecasting. we analyze stock market data using this measure, and we have found that the conditional probability can be well described by a scaling relation: . this scaling relation characterizes both the fat tails and the volatility clustering exhibited by financial time series. the fat tails are described by a universal scaling function. the functional form of the scaling factor, on the other hand, contains the essential information about volatility clustering on the time scale under consideration. the scaling factors we obtain from the stock market data allow us to identify regimes of high and low volatility clustering. we also present a simple phenomenological model which captures some of the key empirical features. the key "stylized facts" about asset returns include the following: the unconditional distribution of returns shows a scaling form (fat tail). the distribution of returns in a given time interval (defined as the change in the logarithm of the price normalized by a time-averaged volatility) is found to be a power law with the exponent for u.s. stock markets, well outside the lévy stable range of 0 to 2. this functional form holds for a range of intervals from minutes to several days, while for larger times the distribution of the returns is consistent with a slow crossover to a normal distribution. another key fact is the existence of volatility clustering in financial time series that is by now well established;
it can be seen, for example, in the absolute value of the return, which shows positive serial correlation over long lags (the taylor effect). this long memory in the autocorrelation of absolute returns, on a time scale of more than a month, stands in contrast to the short-time correlations of the asset returns themselves. fat tails have been the subject of intense investigation, theoretically from mandelbrot's pioneering early work using stable distributions to the agent-based models of bak _ et al. _ and lux (see ref. for a survey of research on agent-based models used in finance). the key problem is to elucidate the nature of the underlying stochastic process that gives rise to both the volatility clustering and the power-law (fat) tails in the distribution of asset returns. in an effort to seek a direct quantitative characterization of clustering we consider , the probability of the return in a time interval of duration , conditional on the absolute value of the return in the previous interval of the same duration. (we emphasize that the probability is not conditioned on the value of the return at an instant.) by varying , we can check volatility clustering on different time scales. there is a growing literature on conditional measures of distribution for analyzing financial time series (for a review, see ref. and references therein). for example, the conditional probability of return intervals has been used recently to study scaling and memory effects in stock and currency data. we have analyzed both the high frequency data and daily closing data of stock indices and individual stock prices using the conditional probability as a probe. here we only present results of our analysis of the high frequency data of qqq (a stock which tracks the nasdaq 100 index) from 1999 to 2004 and the daily closing data of the dow jones industrial average from 1900 to 2004. we emphasize that the properties of the financial time series we present are rather general: we have checked that the same properties are also exhibited by other stock indices and futures data (for example, the hang seng index, the russell 2000 index, and german government bond futures) as well as individual stocks. we have checked, as was found in previous studies, that the probability distribution of the returns in the time intervals days for djia exhibits a fat power-law tail with an exponent close to ; this appears to be true for most stock indices and individual stock data.

[ figure 1 caption: (a) conditional probability distributions of the return of qqq for minutes, for 10 different absolute values of the return in the previous interval (groups of bins centered at values ranging from to ); the larger the value of , the larger the width of the distribution. (b) the same conditional distributions, when scaled by a scale factor , collapse to a universal curve; is the absolute value of the return in the previous interval, and the tail can be described by a power law with an exponent approximately equal to . ]

we calculate , by grouping the data into different bins according to the value of .
in figure 1(a) we display for minutes for different values of . it is clear from the figure that there is a positive correlation between the width of and . what is more interesting is that, when is scaled by the width of the distribution (the standard deviation of the conditional return), , the different curves of conditional probability collapse to a universal curve: . evidence for this is displayed in fig. . note that on the time scales we have analyzed, the probability distribution is symmetric with respect to . consequently, in fig. 1(b) we have only displayed the absolute value of the return. the data collapse is good for a wide range of , and the curves display a power-law tail with a well-defined exponent of approximately . we examine next the dependence of the scale factor on . fig. 2 shows a plot of the scale factor vs. for different values of . it can be seen from the figure that there is a crossover value : for , is almost constant, while for , increases with . the degree of the dependence of on can be taken as an indication of the strength of volatility clustering. if there is no volatility clustering, will not depend on . note that there is strong clustering at small . as increases, the strength of clustering gradually decreases, indicating a crossover to the non-clustering regime. as increases beyond the time scale of volatility clustering, the clustering disappears. this crossover cannot be seen in the qqq data as the time scales involved are small. our analysis of djia data shows an indication of such a crossover at the time scale of a few months. in this paper, we do not separate the cases of positive and negative returns in the previous time interval. thus we do not show explicitly the well-known leverage effect, first expounded by black. we have checked that the scaling and data collapse we obtained are equally valid when we separate out the cases of positive and negative returns in the previous interval. the leverage effect is reflected in the scaling factor , which shows for in the real data.

[ figure 2 caption: vs (the absolute value of the return in the previous interval) for different values of , arising from the analysis of qqq data; the dependence is seen to be almost linear for sufficiently large . ]

figure 3 shows that the same scaling form is also exhibited by the djia data. we have checked that the data collapse extends also to data for different values of in addition to the different values of displayed here.

[ figure 3 caption: conditional probability distributions for and days in djia data (the data corresponding to have been shifted to the right for easy viewing); different curves correspond to 8 different absolute values of the return in the previous interval, the inset shows the dependence of the width on , and the tail of the probability distribution can be described by a power law with an exponent approximately equal to . ]

the data collapse we have displayed for different and different , the power-law behavior including the value of the exponent, and the behavior of the scale factor which encapsulates features of volatility clustering are the same across data from several other stock indices listed earlier and individual stocks. this empirical universality can be stated as here is a universal function describing the universal fat tail in the distribution. satisfies constant as , as , and . the dependence of on , on the other hand, describes the volatility clustering at the time scale .
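the conditioning procedure described above is straightforward to reproduce. the sketch below bins returns by the absolute return of the previous interval, estimates the scale factor as the conditional standard deviation, and rescales each bin to test for data collapse; the synthetic garch-like series is only a stand-in for the qqq and djia data, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for a return series with volatility clustering
# (a simple garch(1,1)-like recursion; all parameters are illustrative)
n = 200_000
r, var = np.empty(n), 1e-4
for t in range(n):
    r[t] = np.sqrt(var) * rng.standard_normal()
    var = 1e-6 + 0.08 * r[t] ** 2 + 0.90 * var

r_prev, r_next = np.abs(r[:-1]), r[1:]

# group the data into bins according to the absolute return of the previous interval
edges = np.quantile(r_prev, np.linspace(0.0, 1.0, 11))   # 10 bins
edges[-1] += 1e-12                                       # include the maximum value
scale, collapsed = np.empty(10), []
for b in range(10):
    sel = (r_prev >= edges[b]) & (r_prev < edges[b + 1])
    scale[b] = r_next[sel].std()                         # scale factor (conditional width)
    collapsed.append(r_next[sel] / scale[b])             # rescaled conditional returns

# under the scaling ansatz the rescaled samples share one distribution;
# compare, for example, a tail quantile across low, middle and high bins
for b in (0, 4, 9):
    print(b, round(scale[b], 5), round(np.quantile(np.abs(collapsed[b]), 0.99), 2))
```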
if is a constant ( independent of ) , then does not depend on , and there is no volatility correlation or clustering .the conditional probability distribution contains information about the conditional average of the moments of the distribution as well as various volatility correlation functions such as .given the scaling form we can evaluate these averages and correlation functions in terms of , which is itself given by .in particular , we have the moments of the conditional probability distribution given by ( is a universal constant ) and , where is the unconditional probability distribution of the return .we believe that this scaling form provides a new and rather complete measure of volatility clustering .in the following we will provide the outline of a model that captures the key features exhibited in the conditional probability distribution of stock market data . in a stochastic volatility model ,the one - step asset return at time is written as , where is a gaussian random variable with zero mean and unit variance and is magnitude of the price change .for the relatively short time scales we are interested in we have set the intrinsic growth rate to zero .the distribution of depends on the dynamics of : slow changes in lead to volatility clustering .there exist a few classes of volatility models that have been used to describe the dynamics of .these include the widely used models based on garch - like processes , and more recently , the models based on a multifractal random walk ( mrw ) that will be discussed later . in our model ,the dynamics of is specified via the random variable , with . in order to describe both the behavior of probability distributions and temporal correlationswe have devised the following model for the evolution of .the time evolution of the variable is assumed to be independent of the change in and executes a random walk with reflecting boundaries : we enforce the condition ; thus is the minimum value of . an upper bound in , , can also be incorporated without affecting the scaling behavior of the model .we typically choose .the change in , is given by \eta_{t - i } + k(1)\eta_t \nonumber \\ & & -k(n_c+1)\eta_{t - n_c } \}\,-\,\beta\overline{\eta}\,.\end{aligned}\ ] ] in the preceding are independent random variables that assume the value with probability and with probability .this asymmetry builds in the tendency to decrease the volatility .the mean value of , is denoted by .we comment on the implications of the different terms next .we focus on the limit and first since it is amenable to analytic investigation ; this model is related to a model discussed in ref .note that this limit already builds in volatility clustering as it takes many steps to change significantly .it is easy to show that , the steady - state probability distribution of is given by , where .the distribution of is then given by a power - law , .this mechanism for generating a power - law distribution was first noted by herbert simon in 1955 . 
we have studied this limiting case of the model numerically and find that many features of the conditional probability distribution exhibited by the real data including the power law and scaling behaviors are reproduced .we can show analytically that the conditional probability distribution exhibits scaling collapse , and that scale - invariant behavior with a power law tail ( with the exponent if we choose ) exists for , where .the numerical data in fact show a somewhat larger range of power - law behavior .the re - scaling factor required for data collapse is simply proportional to from our analysis , as we have observed from the real data and from numerical simulations of model when is not too small .the simple limit captures important features of volatility clustering reflected in conditional probability distributions . the second term in eq .( [ dn ] ) is based on the multifractal random walk model that builds in long - time correlations via a logarithmic decay of the log volatility correlation .this term allows us to reproduce the more subtle temporal autocorrelation behavior observed in the data and follows the implementation in ref .the long - term memory effects are incorporated by making the change in depend on the steps at earlier times with a kernel given by ( this corresponds to the mrw part of the model given by ) and allowing memory up to time steps , chosen to be in our simulations .the final term allows us to control the rate of drift to lower values of .we have simulated this model with ( ) and for and displayed the results for in figure 4 .the model with the stated parameters reproduces the fat tail in the unconditional probability distribution for observed in the data .the non - universal scale factor is similar to those found from our empirical analysis .we have also checked that this model retains the same temporal behavior in the log - volatility correlation exhibited by the pure mrw model .thus the model we have investigated is capable of reproducing both probability distributions ( conditional and unconditional ) and temporal autocorrelations .we note in passing that the model as it stands can not be used to study the leverage effect ; however , it can be modified to do so . , , and .the time lag is . the curves , corresponding to different absolute values of the return in the previous interval collapse on to a universal curve when scaled by a scale factor .the tail of the probability distribution is again described by a power law with the exponent equal to .the inset shows the dependence of on ., width=321 ] in summary , we have proposed a direct measure of volatility clustering in financial time series based on the _ conditional _ probability distribution of asset returns over a time period given the return over the previous time period .we discovered that the conditional probability of stock market data can be well described by a scaling relation , which reflects both fat tails and volatility clustering of the financial time series .in particular , the strength of volatility clustering is reflected in the functional form of the scaling factor . 
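the limiting case of the model (no long-memory kernel) can be simulated in a few lines. in the sketch below the mapping from the reflected random walk to the volatility, sigma_t = sigma_min * exp(c * n_t), is our own assumption, chosen so that an exponential steady state in n produces a power law in sigma as described above; the bias p_up, the boundary n_min and the remaining parameter values are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_min = 200_000, 1
p_up = 0.48                    # asymmetry: a drift toward low volatility
c, sigma_min = 0.05, 1e-3      # assumed mapping parameters (see lead-in)

# biased random walk for n_t with a reflecting boundary at n_min
n = np.empty(T, dtype=int)
n[0] = n_min
for t in range(1, T):
    step = 1 if rng.random() < p_up else -1
    n[t] = max(n_min, n[t - 1] + step)

sigma = sigma_min * np.exp(c * n)          # assumed volatility mapping
r = sigma * rng.standard_normal(T)         # one-step returns

# slow changes in n produce volatility clustering; check the
# autocorrelation of the absolute returns at increasing lags
def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

print([round(autocorr(np.abs(r), lag), 3) for lag in (1, 10, 100, 1000)])
```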
by extracting from market data ,we are able to estimate the future volatility over a time period , given the return in the previous period .this may be useful in modelling financial transactions including option pricing , portfolio and risk management ; all these depend crucially on volatility estimation .the clustering of activities and fat tails in the associated distribution are very common in the dynamics of many social and natural phenomena ( e.g. earthquake clustering ) .the conditional probability measure we have presented in this paper may serve as a useful tool for characterizing other clustering phenomena .r. f. engle , econometrica * 50 * , 987 ( 1982 ) .b. mandelbrot , j. business * 36 * , 394 ( 1963 ) . e. f. fama , j. business * 38 * , 34 ( 1965 ) .r. mantegna and h. e. stanley _ an introduction to econophysics _( cambridge university press , cambridge , 1999 ) .bouchaud and m. potters _ theory of financial risks : from statistical physics to risk management _ ( cambridge university press , cambridge , 2000 ) .r. cont _ quantitative finance _ * 1 * , 223 ( 2001 ) .r. f. engle and a. j. patton quantitative finance * 1 * 237 ( 2001 ) .v. plerou , p. gopikrishnan , l.a.n .amaral , x. gabaix , and h.e .stanley , phys .e * 62 * , r3023 ( 2000 ) .p. gopikrishnan , m. meyer , l. a. n. amaral , and h. e. stanley , euro . phys . j. b * 3 * , 139 ( 1998 ) .x. gabaix , p. gopikrishnan , v. plerou , and h. e. stanley , nature * 423 * , 267 ( 2003 ) .t. bollerslev , journal of econometrics * 31 * , 307 ( 1986 ) .s. taylor , _ modelling the financial time series _ ( john wiley , new york 1986 ) .p. bak , m. paczuski , and m. shubik , physica a * 246 * , 430 ( 1997 ) . t. lux and m. marchesi , nature * 397 * , 498 ( 1999 ) .t. lux , journal of economic behavior and organization * 33 * , 143 ( 1998 ) .h. simon , biometrika * 42 * , 425 ( 1955 ) .b. lebaron , in _ handbook of computational economics , vol .2 : agent - based computational economics _ , eds . l. tesfatsion & k.l .judd ( north - holland , amsterdam ) , chapter 9 ( 2006 ) .y. malevergne and d. sornette _ extreme financial risks ( from dependence to risk management ) _( springer , heidelberg 2005 ) .f. black , proc .am . statist .assoc . , 177 ( 1976 ) .e. bacry , j. delour , and j. f. muzy , phys .e * 64 * , 026103 ( 2001 ) .d. sornette , y. malevergne , j. f. muzy , risk magazine * 16*(2 ) , 67 ( 2003 ) .k. yamasaki , l. muchnik , s. havlin , a. bunde , and h. e. stanley , proc .usa * 102 * , 9424 ( 2005 ) .k. chen and c. jayaprakash , physica a * 324 * , 258 ( 2003 ) .barabasi , nature * 435 * , 207 ( 2005 ) .y. y. kagan and d. d. jackson , geophys .* 104 * , 117 ( 1991 ) .
in the past few decades considerable effort has been expended in characterizing and modeling financial time series . a number of stylized facts have been identified , and volatility clustering or the tendency toward persistence has emerged as the central feature . in this paper we propose an appropriately defined conditional probability as a new measure of volatility clustering . we test this measure by applying it to different stock market data , and we uncover a rich temporal structure in volatility fluctuations described very well by a scaling relation . the scale factor used in the scaling provides a direct measure of volatility clustering ; such a measure may be used for developing techniques for option pricing , risk management , and economic forecasting . in addition , we present a stochastic volatility model that can display many of the salient features exhibited by volatilities of empirical financial time series , including the behavior of conditional probabilities that we have deduced . pacs numbers : 89.65.gh , 89.75.da , 02.50.ey
diffusion weighted magnetic resonance imaging ( dw - mri ) is a non - invasive technique for the characterization of biological tissue microstructure . in brain white matter ,water molecules diffuse predominantly along axonal fibers .this results in an observable macroscopic orientation dependence in the dw signal , that is measured by scanning the tissue in multiple orientations and gradient strengths . to model the angular anistropy of the diffusion profile , diffusion tensor imaging ( dti ) is widely used , but this has the limitation that only a single fiber direction can be estimated per voxel .it is estimated in that more complex fiber configurations occur in approximately 90% of the white matter voxels . to overcome this ,high angular resolution diffusion imaging ( hardi ) techniques are used , that can describe more complex ( crossing ) fiber configurations .an overview of hardi techniques can be found in .here we use the method of constrained spherical deconvolution ( csd ) , that from the initial diffusion data constructs a fiber orientation distribution ( fod ) , which models the distribution of fibers along different directions .tractography methods are often used in the dw - mri pipeline to provide insight in the structural connectivity of the white matter bundles . independently of the model used for interpreting the dw - mri data , noise originating from the scanner , acquisition artifacts andpartial volume effects are likely to result in spurious ( aberrant ) fibers in the tractography output . to improve the data on which the tractography is performed, different regularization methods can be used .methods exist that apply filtering for the reduction of noise directly on the dw - mri data , other methods aim to regularize the dti tensor fields . on hardi data the regularization can be performed on individual voxels or in combination with the local spatial information .we introduce two new strategies based on the same underlying principle to improve fiber alignment in tractography results , in order to have more reliable information on the structural connectivity of brain .first we perform contextual regularization to the fod obtained with csd , see fig .[ fig : csdenhpipeline]a , and secondly we introduce a fiber to bundle coherence ( fbc ) measure that can be applied to any fiber bundle to classify and remove spurious fibers , see fig . [fig : csdenhpipeline]b .both approaches are based on a partial differential equation ( pde ) framework introduced in , where the fokker - planck equation of a stochastic process for enhancement of elongated structures is considered .these type of pde - based enhancement methods have been widely used for the processing of 2d - images . in this framework ,images are represented in the extended space of positions and orientations via a stable invertible orientation score , that associates to every location an orientation distribution of the local image features ( lines and contours ) . then , the stochastic processes for contour completion and contour enhancement ( see fig . [fig : kernelvis]a ) on this extended space induce crossing preserving completion and enhancement of lines ., projected on the xy - plane . *b. * the contour enhancement kernel arises from the accumulation of infinitely many sample paths .the gray - scale contours indicate the marginal of the kernel , obtained by integration over , the red glyphs are polar graphs representing the kernel at each grid point . * c. 
* the contour enhancement kernel oriented in the positive -direction in can be visualized on a grid with glyphs that in this case are spherical graphs.,scaledwidth=95.0% ] the dw - mri data that we use is naturally defined on the coupled space of 3d positions and orientations . as in the 2d case ,crossing preserving enhancement of line structures is required , for which we use the 3d extension of the 2d stochastic process for contour enhancement , introduced in .the linear pde corresponding to this stochastic process can be solved by convolution of the initial condition with the kernel of the pde .this kernel is also a function on the position - orientation space and can be seen as a transition distribution from the origin ( in position and orientation ) to neighboring elements . from the stochastic point of view , the kernels arise as limits of the accumulation of infinitely many sample paths drawn from the stochastic process , illustrated in fig .[ fig : kernelvis]a . for mathematical details of the underlying stochastic processes of the pdes , see .the general idea needed for this article is sketched in fig .[ fig : kernelvis ] . in figs .[ fig : kernelvis]b and [ fig : kernelvis]c we show the contour enhancement kernel using glyph visualization on a grid , each glyph being a polar ( red , 2d ) or spherical ( blue , 3d ) graph plot where in every orientation the ( spherical ) radius is proportional to the value of the kernel .this type of visualization is used throughout the paper for functions defined on the space of positions and orientations .recently , many authors demonstrated the advantages of contextual processing of dw - mri data .the general rationale behind contextual processing is to include alignment of local orientations and their surroundings ( i.e. the context ) on the coupled space of positions and orientations . for this alignment of local orientations , roto - translationsare needed , which imposes a non - euclidean structure in the pde - based processing as we explain in section [ se : enhancement ] .more details on the embedding of in the roto - translation group can be found in .this demonstrates how either the completion or enhancement pdes can be used to extrapolate dti information to increase the angular resolution and resolve some fiber crossings .this idea was shown to be promising in clinical experiments , but in some cases extreme parameters had to be set to obtain clear maxima at crossings ( where dti data is inadequate ) . therefore in this paperwe introduce and test the combination of csd with contextual enhancements .the method proposed in uses an advection - diffusion equation ( that we called contour completion above ) to improve hardi data to obtain connectivity measures . in our workwe rely on a purely diffusive process , contour enhancement , which in contrast to contour completion does not suffer from singularities and is less sensitive to small perturbations of the initial conditions .this property makes the enhancement process more suited to be combined with the sharp angular distributions produced by csd . as the methods mentioned above still result in broad angular distributions , they need to be combined with some sharpening method . 
to this end , a geometric morphological sharpening based on erosions was presented in .another related method presented in is the so - called fiber continuity model in which purely spatial regularization is considered in combination with spherical deconvolution as alternative to the non - negativity constraint in the classical csd . in section [se : enhancement ] we demonstrate the importance of including also an angular regularization term . the first contribution of this article is to study the combination of the widely used csd method with a regularization induced by the enhancement pde acting on the fod . since the fod obtained with csd consists of sharp angular profiles , it is well - suited as an initial condition for the enhancement pde , that typically has a smoothing effect on the orientation distributions .the contextual regularization method reduces non - aligned crossings in the fod , allowing for a better alignment of fibers when tracking is applied on the enhanced fod .we show that this method is therefore useful to reduce the number of false positive fibers , but mainly to find more true positives in the tractography output .although in this paper we compare to the classical csd method , the pde enhancements can also be applied to extensions of this method .the second contribution of this article is to introduce the fiber to bundle coherence ( fbc ) measure .the motivation for this measure is that , especially probabilistic , tracking methods typically produce spurious fibers that should be removed from the tractography .in contrast to the first approach , this method serves as a post - processing tool . for the computation of the fbcwe regard the fiber bundle as a set of oriented points , by considering for every fiber point also the local tangent to the fiber .we construct a density using the enhancement pde with an initial condition that is a sum of superposed -distributions at every oriented point in the bundle .the construction of such a density from tracks relates to track density imaging and track orientation density imaging , though here the use of the contour enhancement kernels , fig . [ fig : kernelvis ] , allows to use a sparse set of fiber tracks .the fbc , a measure for spuriousness of fibers , is computed by efficient integration of this fiber - based density .fibers that are most spurious according to the fbc can be removed from the tractography , resulting in a better aligned fiber bundle .complementary to the first method , this fbc measure has the purpose to remove false positives in a tractography .section [ se : methods ] covers theory of the individual parts of the pipeline as outlined in fig .[ fig : csdenhpipeline ] , consisting of csd , pde enhancements , tractography and coherence quantification in sections [ se : csd]-[se : coherence ] , respectively . in section[ se : results ] we provide extensive validation of the combination of csd and pde enhancements and the fbc , using three experiments : 1 .first we use the tractometer evaluation system on the isbi 2013 hardi reconstruction challenge dataset , a digital phantom with known ground truth , to demonstrate how contour enhancement improves both the local fod reconstruction and the global connectivity of fiber bundles compared to csd , see section [ se : hardirecon ] .2 . 
in section [se : evaluationdwmri ] we show on a human dw - mri dataset , containing different crossing bundles , that csd combined with enhancements yields an fod that is more robust with respect to the -value and the number of gradient directions used in the acquisition .furthermore , we make a comparison with earlier work involving erosions and nonlinear diffusion of fods directly applied to a dti - model , that was based on the same data . we show that with our method the glyphs are sharper at the locations where bundles cross .3 . finally in section [ se : orexperiment ] , we show an experiment with clinical data in which we reconstruct the optic radiation ( or ) to determine the position of the tip of the meyer s loop , that is of interest in epilepsy surgery planning .accurate estimation of this position is difficult due to the presence of spurious fibers in the reconstruction of the or .we show that both the fod enhancement and the fbc measure , see fig .[ fig : csdenhpipeline ] , and in particular the combination of the two allow for a more stable determination of the tip of the meyer s loop . here ` more stable ' means less variation with respect to stochastic realizations in the probabilistic tractography results .conclusions and a discussion can be found in section [ se : conclusions ] .in this paper it is assumed that we have hardi data as input , from which we derive an fod that models the orientation of fibers in each voxel , i.e. .for this we use csd , concisely described in section [ se : csd ] , as it gives sharp angular profiles and is able to distinguish multiple fiber directions within a voxel .then we use the enhancement pde for diffusion of the fod , coupling spatial and angular information .the combination of csd and such enhancement is a powerful method to obtain an enhanced fod in which the coherence inherent in the data is included , providing a more coherent input for the tractography .the enhancement technique is explained in section [ se : enhancement ] .we use the mrtrix algorithm for both deterministic and probabilistic tractography to estimate the structural connectivity in the brain . in the deterministic tractography ,fiber tracks are obtained by integrating a directional field , given an initial position and direction .the directional field is given by the locally maximal orientations in the glyphs .in contrast to deterministic tractography , the probabilistic tractography method of mrtrix samples the orientations from the entire fod and does not use just the maxima .more difficult paths can be reconstructed than with deterministic tracking , but typically also many spurious fibers are produced due to the probabilistic sampling .both the deterministic and the probabilistic method are explained in more detail in section [ se : tractography ] . 
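as an illustration of the deterministic tracking rule described above, the sketch below integrates a streamline by repeatedly stepping along the fod peak that is most aligned with the incoming direction and stopping when the peak amplitude falls below a cutoff. the `fod_peaks` callback, its return format and the parameter defaults are assumptions made for the sake of a self-contained example; this is not the mrtrix implementation.

```python
import numpy as np

def track_deterministic(seed, seed_dir, fod_peaks, step=0.2, cutoff=0.1, max_steps=2000):
    """minimal deterministic streamline integration (not the mrtrix code).

    fod_peaks(pos) is assumed to return a list of (direction, amplitude) pairs
    with unit-norm directions; both the function and its output format are assumptions.
    """
    pos = np.asarray(seed, float)
    d = np.asarray(seed_dir, float)
    d = d / np.linalg.norm(d)
    fiber = [pos.copy()]
    for _ in range(max_steps):
        peaks = fod_peaks(pos)
        if not peaks:
            break
        # pick the peak most aligned with the incoming direction (sign-invariant)
        best_dir, best_amp = max(peaks, key=lambda p: abs(np.dot(p[0], d)))
        if best_amp < cutoff:
            break                          # threshold stopping criterion
        if np.dot(best_dir, d) < 0:
            best_dir = -best_dir           # keep walking forward
        d = best_dir
        pos = pos + step * d
        fiber.append(pos.copy())
    return np.array(fiber)
```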
in section [ se : coherence ]we introduce our new technique to quantify the coherence of fibers with respect to all the fibers in a bundle , based on the same pde theory as employed for the contextual enhancement in section [ se : enhancement ] .we explain how the kernel of the enhancement pde is used to construct a tractography - based density , how the fbc is computed and how this measure is able to classify spurious fibers in a tractography .in csd it is assumed that at each voxel position the measured signal can be represented by a spherical convolution of the fod with a response function , that is estimated from the data .since the spherical deconvolution to determine the fod is ill - posed , a non - negativity constraint is included as in .then , given the signal for a sample of orientations , the solution of csd is found by iteratively solving the minimization problem : for , with the maximum number of iterations . here is aligned with and symmetric around the -axis , the convolution is the usual spherical convolution , is the jacobian of the surface measure in orientation and is a parameter to influence the trade - off between the data driven term and regularization term .the linear operator in the regularization term gives the non - negativity constraint and is defined by : where is the heaviside function and is a threshold equal to a fixed factor times the mean of . the initial function for the iterationis computed by taking only the data driven term of eq .( [ eq : contcsd ] ) .the iteration stops when successive iterations yield the same result , typically after 5 to 10 iterations . throughout the paper ,we call the fod obtained by in practice csd is performed using spherical harmonics with a maximal spherical harmonic order of ( ) as discussed in .improvements to the original csd exist to modify and improve the response function , either by recursive calibration or auto - calibration , by using multiple acquisition shells or by including anatomical data .the latter two methods aim to reduce the partial volume effects , where csd is likely to produce spurious fiber orientations .these partial volume effects can occur when in a voxel multiple tissues or multiple bundles with different orientation are present .here we use the classical csd as it is the basic technique available in several neuroimaging packages .however , we stress that our method is not restricted to this type of csd . in any case , our method aims to reduce non - aligned crossings in the fod , also the ones induced by partial volume effects , as we will show in several experiments in this paper .further improvement of the methodology can be expected when including recently extended and more elaborate csd techniques , but this is left for future work . to improve alignment of neighboring glyphs of the fod , recall the glyph field visualization in figs .[ fig : csdenhpipeline ] and [ fig : kernelvis]c , we apply contextual enhancements . before we specify the pde we consider for this enhancement , we first need to express the notion of alignment in mathematical terms . to this end , let us consider fig .[ fig : coupling ] , where it is shown that the notion of alignment can not be supported by a decoupled , flat cartesian product with the combined euclidean distance .it is clear that the green bar at is better aligned with the gray bar at than the orange bar at , even though the distances in the space are equal , i.e. 
.this means that in order to appropriately describe the concept of alignment , we must consider more than just the amount of spatial displacement and the amount of change in orientation .coupling these two types of motion ( via rigid body motions ) is a solution to this problem .the coupling follows very naturally by expressing the motion of an oriented particle in terms of a moving frame of reference determined by its orientation .that is , spatial movement along the orientation should be much cheaper than spatial movement in the plane orthogonal to .this creates a natural anisotropy for spatial movement . for angular motionwe need isotropy .this extra structure can be obtained by embedding the space of positions and orientations in the rigid body motion group .this means that an element is identified with the rigid body motion , where is _ any _ rotation matrix such that , with pointing to the north pole .we denote this space of coupled positions and orientations by , so we have the group is equipped with the following ( non - commutative ) group product : this product moves oriented elements in a shift - twist fashion , rather than by a rotation followed by an independent translation . due to this shift - twist group product in eq .( [ eq : groupproduct ] ) , we automatically express motion of oriented particles in terms of a moving frame in , which makes this space well - suited for the application of our contextual enhancements .nevertheless , in the remainder of this article this space can still be regarded as the cartesian 5d space , where we secured the coupling of positions and orientations via our specific choice of differential operators and diffusions that are applied .is better aligned with than with , even though spatial and angular distances are equal .formally we can say that the sub - riemannian distance on is smaller between and .,scaledwidth=60.0% ] to improve alignment of fod glyphs , we use a particular diffusion process called contour enhancement that uses both spatial and angular diffusion in the extended space of positions and orientations . given a structure ( think of a fiber bundle ) in this space , see fig .[ fig : r3s2diff ] , we apply spatial diffusion only in the direction of the structure , not in the spatial plane perpendicular to it .angular diffusion is applied in the plane tangent to at the point .this diffusion process enhances elongated structures , while preserving crossing structures , and is given by a fokker - planck type of system , a linear diffusion equation on . 
for , , this systemcan be expressed as : here is a scale space representation in .the symbol denotes the gradient with respect to the spatial variables and is the laplace - beltrami operator on the sphere .parameters and are related to the amount of spatial and angular diffusion , respectivly .parameter is the diffusion time of the contour enhancement process .it can be seen as a brownian motion process , recall fig .[ fig : kernelvis]a , where particles are allowed to spatially move back and forth in the direction they are heading , or change their direction , but are not allowed to step aside ( comparable to the movement of a car ) .we refer to the solution of eq .( [ eq : contenh ] ) as the enhanced fod .it can be obtained via a finite difference scheme , or via a convolution with a kernel : a basic approximation to the exact green s function of the contour enhancement pde is known and can be written as the product of green s functions in the following way : with , , .the kernels are given by with to avoid numerical errors , we use the estimate for .this approximation is easy to use and allows for efficient implementation . from the approximation kernel in eq .( [ eq : kernr3s2 ] ) it can be seen that problems could occur when either or . to this end , a necessary and sufficient condition for the existence of a smooth solution kernel for the evolution process in eq .( [ eq : contenh ] ) is given by the hrmander requirement .this condition applies to more general situations than the one here , see e.g. , but for the specific case of contour enhancement the requirement is satisfied iff . setting would result in a singular non - smooth kernel , which has numerical disadvantages .more importantly , apart from this theoretical issue the need for both spatial and angular diffusion can also be argued from a practical point of view , as is illustrated in fig .[ fig : motivationd44 ] .we use an artificial example in which a curved fiber bundle is present , shown in the left figure . when the input is diffused with as in the middle of fig .[ fig : motivationd44 ] , the peaks stay distinct and point in the wrong direction . on the other hand , when as in the right figure , due to the angular diffusion the peak is redirected and the glyphs lie better aligned with the fiber bundle .hence is needed to ensure the crucial interaction between different orientations .finally we recall the relation between tikhonov regularization and diffusion , see e.g. ( * ? ? ?* thm 2 ) , which allows us to connect diffusion with with the fiber continuity model in .this model does not suffer from the inconvenience of considering only spatial regularization , as they represent the fod in a truncated spherical harmonic basis . when the enhancements are used in combination with probabilistic tractography, we first apply a standard sharpening deconvolution transform to the fod as described in , to maintain the sharpness of the fod . , .( right ) contour enhancement with .fiber propagation with leads to crossing artefacts rather than smooth fiber enhancement . ] as the next step in the pipeline we use the mrtrix tractography algorithm , as implemented in http://www.brain.org.au/software/index.html#mrtrix , version 0.2.12 .it allows us to perform deterministic and probabilistic fiber tracking on spherical harmonic representations of the ( enhanced ) fod . 
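before turning to the tracking parameters, a grid-based picture of the enhancement may be helpful. the sketch below performs one explicit euler step of the evolution in eq. ([eq:contenh]) on an fod sampled at a finite set of orientations: the spatial directional second derivative is built from finite differences and the spherical laplacian is replaced by a crude graph laplacian over neighbouring orientations. this is only a rough illustration of the pde, not the kernel-based or finite-difference schemes used by the authors, and all names and parameter values are assumptions.

```python
import numpy as np

def enhancement_step(W, dirs, nbrs, D33=1.0, D44=0.04, dt=0.01, dx=1.0):
    """one explicit euler step of dW/dt = D33 (n.grad)^2 W + D44 Lap_S2 W (a sketch).

    W    : array (Nx, Ny, Nz, No), fod sampled on No orientations per voxel
    dirs : array (No, 3), unit orientation vectors
    nbrs : list of neighbour-index arrays on the sphere sampling; the angular
           laplacian is approximated by a graph laplacian (an assumption)
    """
    No = dirs.shape[0]
    out = W.copy()
    for k in range(No):
        n = dirs[k]
        f = W[..., k]
        # spatial directional second derivative: (n.grad)^2 f = sum_ij n_i n_j d_i d_j f
        g = np.gradient(f, dx, axis=(0, 1, 2))
        dir_second = np.zeros_like(f)
        for i in range(3):
            gi = np.gradient(g[i], dx, axis=(0, 1, 2))
            for j in range(3):
                dir_second += n[i] * n[j] * gi[j]
        # angular diffusion: mean over neighbouring orientations minus the centre value
        ang = W[..., nbrs[k]].mean(axis=-1) - f
        out[..., k] = f + dt * (D33 * dir_second + D44 * ang)
    return out
```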
to have a fair comparison between trackings on the fod and the enhanced fod, we use the parameter settings as explained next .* in the deterministic tracking of mrtrix , seed points are randomly selected from a seed region .the initial direction is sampled randomly and every next step follows the direction of the most aligned fod maximum .if this maximum is below a threshold value , the fiber terminates . this threshold ( cutoff )is set to 10% of the maximal angular response of the fod .there is no constraint on the maximal curvature of the fibers . to prevent that fibers have an initial direction that is not aligned with the fiber bundle, we force the initial direction to be approximately in the direction of the maximal fod peak , by setting the initial cutoff to 0.9 .the step size is set to of the voxel size as is suggested in .tracks proceed in both directions from the seed point and terminate either when they hit the boundary of the volume or mask ( if applicable ) , or due to the threshold stopping criterion . * in the probabilistic case , starting from the seed region , every next step follows a direction randomly sampled from the fod . herewe set the minimal radius of curvature to mm , the default value in the mrtrix algorithm .optionally , a target region of interest is used to select only those fibers that cross this region .we base our choice for deterministic tractography or probabilistic tractography on the application .if only a seed region is specified , as in sections [ se : hardirecon ] and [ se : evaluationdwmri ] , we use deterministic tractography . in this casethere is too much freedom in the probabilistic algorithm andthe streamlines show a lot of spurious behavior . here, a probabilistic approach could make sense if extreme amounts of tracks are used for track density methods . as we do not pursue these methods here, we prefer to use deterministic tractography .if both a seed region and an end region are specified , as in the optic radiation application in section [ se : orexperiment ] , we prefer to use probabilistic tractography .it is known that deterministic tractography in this case provides only a few of the possible pathways from the seed to the end region , whereas reconstructions with probabilistic tractography are much fuller .probabilistic tractography results typically contain many false positive fibers .streamlines that are anatomically implausible can be removed with scoring methods or by imposing anatomical constraints .even when using these methods , the filtered tractography output can still contain fibers that deviate from the fiber bundle and are likely to be spurious . in the next section ,we propose a coherence measure for fibers in a fiber bundle in order to classify these spurious fibers . in this sectionwe introduce our second contribution of the paper , a _ fiber to bundle coherence _ ( fbc ) measure to quantify the coherence of each fiber with respect to all other fibers in the bundle , recall fig .[ fig : csdenhpipeline]b . a spurious fiber , as schematically shown in fig . [ fig : spurfibschematic ] ,is isolated from or poorly aligned with the bulk of the tracks and is therefore unlikely to represent the underlying brain structure .fibers with low coherence , i.e. a low fbc , can then be classified as spurious . 
to classify a fiber as spurious, we first construct a density by regarding each fiber as a superposition of -distributions in and convolving this distribution with the kernel in eq .( [ eq : kernr3s2 ] ) .this density is independent of the underlying data and is based purely on the collection of fibers .integration of this density along a part of length of a fiber gives a local measure for the coherence of that part .next we explain the mathematical techniques that support the idea in fig .[ fig : spurfibschematic ] .we denote the fibers from a tractography output by , , , with the arc length parameter , the total length of fiber and the number of fibers .now let be the tangent of the fiber , so that forms a curve ( fiber ) in . by construction , points in the forward direction of the fiber .since in dw - mri data antipodal orientations are identified , we also consider .the complete fiber bundle is defined as .a discrete formulation of a fiber with points is given by : this way there are elements in .now we regard every point as a -distribution in centered around . a density for the entire bundleis then constructed as follows : with index running over points within a fiber , running over all fibers and taking care of including forward and backward orientations .we use the same evolution process as in eq .( [ eq : contenh ] ) in which now serves as initial condition , to create a diffused density : we solve the system in ( [ eq : contenhf ] ) by convolution with the corresponding kernel , recall fig . [ fig : kernelvis ] , and call this the _ local _ fbc ( lfbc ) : with the shift - twist convolution as given in eq . ([ eq : shifttwistconv ] ) .this is illustrated in fig .[ fig : spurfibschematic ] in the 2d case .we can now define the fbc for fiber with respect to the bundle as the integral of this density along the fiber : this results in a global property of the fiber , but spurious fibers often only locally deviate from the bundle as in fig .[ fig : spurfibschematic ] . to this end, we compute for each fiber the minimum of such integrals along the fiber over intervals of length : } \frac{1}{\alpha } \int_a^{a+\alpha } { \text{lfbc}}(\gamma_i(s),\gamma ) \ ; { \mathrm{d}}s.\ ] ] the parameter defines the scale over which spuriousness of fibers can be detected and is much smaller than the average fiber length .our primary interest is not the value itself , but rather how it compares to the average coherence of fibers in the bundle , so finally we define the _ relative _ fiber to bundle coherence ( rfbc ) as : here is the _ average _ fiber to bundle coherence indicating the overall coherence of the fibers in the bundle , defined as to summarize , the of a fiber in a bundle is a measure for how well aligned the least aligned part of is , compared to the average coherence of the total bundle . 
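the lfbc and rfbc can be sketched directly from the definitions above. in the snippet below the exact contour-enhancement kernel is replaced by a separable gaussian in spatial distance and in the angle between tangents, and the average fiber-to-bundle coherence is taken as the mean of the per-fiber means; both simplifications, as well as the parameter values, are assumptions made to keep the example short, and the brute-force pairwise sum is only meant for small bundles.

```python
import numpy as np

def lfbc_rfbc(fibers, sigma_x=1.0, sigma_a=0.4, window=7):
    """sketch of the fiber-to-bundle coherence measures (not the authors' c++ code).

    fibers : list of (N_i, 3) arrays of points along each fiber.
    """
    pts, tans, fid = [], [], []
    for i, f in enumerate(fibers):
        t = np.gradient(f, axis=0)                       # local tangents
        t = t / np.linalg.norm(t, axis=1, keepdims=True)
        pts.append(f)
        tans.append(t)
        fid.append(np.full(len(f), i))
    P, T, F = np.vstack(pts), np.vstack(tans), np.concatenate(fid)

    # local fbc: kernel-weighted density of all oriented fiber points
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    cos_a = np.abs(T @ T.T).clip(0.0, 1.0)               # antipodally symmetric
    ang2 = np.arccos(cos_a) ** 2
    K = np.exp(-d2 / (2 * sigma_x**2)) * np.exp(-ang2 / (2 * sigma_a**2))
    lfbc = K.sum(axis=1)

    # relative fbc: weakest window along each fiber, relative to the bundle average
    afbc = np.mean([lfbc[F == i].mean() for i in range(len(fibers))])  # assumed form
    rfbc = []
    for i in range(len(fibers)):
        li = lfbc[F == i]
        w = min(window, len(li))
        weakest = min(li[a:a + w].mean() for a in range(len(li) - w + 1))
        rfbc.append(weakest / afbc)
    return lfbc, np.array(rfbc)
```

fibers with the lowest rfbc values would then be the candidates for removal as spurious.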
in practice , we evaluate the convolution in eq .( [ eq : ptconvf ] ) only in the fiber points .we compute the , the diffused density in the oriented point , recall the notation in ( [ eq : notationgamma ] ) , as follows : where is any rotation matrix such that , index sums the contributions along a fiber , index runs over all the fibers and as before .the can then be computed as follows : } \frac{1}{\alpha } \sum_{k = a+1}^{a + \alpha } { \text{lfbc}}(\gamma_i^k,\gamma),\ ] ] where in this discrete case , so the lfbc is summed along short intervals of the fiber .likewise , the afbc can be computed as we apply this method in section [ se : orexperiment ] for quantifying the coherence of tractography results of the optic radiation and classifying the spurious fibers .in this section we extensively test the performance of our csd enhancement method ( a ) and the fbc method ( b ) , recall fig . [fig : csdenhpipeline ] and sections [ se : enhancement ] and [ se : coherence ] , in three different experiments : * we use the hardi reconstruction challenge dataset , which is artificial data with known ground truth , to quantitatively evaluate the csd enhancement method ( a ) on deterministic tractography in section [ se : hardirecon ] . * in section [se : evaluationdwmri ] we show on dw - mri human brain data that the enhancement ( a ) have a positive effect on deterministic tractography , for different acquisition protocols of the data .furthermore , on this dw - mri dataset and on the phantom dataset we compare our method to previous work , where a dti - based fod is used in combination with nonlinear pde flow .* in the third and last experiment , we reconstruct the optic radiation in human clinical data , see section [ se : orexperiment ] .we include an extensive evaluation of our methods , the enhancement of the fod ( a ) and the use of the fbc to classify and remove spurious fibers ( b ) , and the combination of both methods .we show that the reproducibility of the probabilistic tractography has increased , resulting in a more stable localization of the tip of the meyer s loop . for all datasets mathematica used to perform the contour enhancement algorithm and the csd , which in practice produces the same results as the mrtrix csd implementation when the same deconvolution kernel is used .mrtrix software was used to perform fiber tractography .the coherence quantification was implemented in c++ . 
in section [se : hardirecon ] we make use of the tractometer ( http://www.tractometer.org/ ) to evaluate tractography results .visualization was done in either the fibernavigator ( https://github.com/scilus/fibernavigator, ) , mathematica , or the open source vist / e tool ( eindhoven university of technology , imaging science & technology group , http://bmia.bmt.tue .nl / software / viste/[http://bmia.bmt.tue .nl / software / viste/ ] ) .+ the following experiment is performed on a digital phantom dataset that was designed for the isbi 2013 reconstruction challenge .it is used in combination with the tractometer , as a benchmark to compare different reconstruction and tracking methods .the phantom is inspired by the numerical fiber generator and the code to reproduce it is freely available as part of the python package phantomas ( http://www.emmanuelcaruyer.com/phantomas.php ) .this synthetic dataset is of size voxels with a resolution of mm .it consists of simulated white matter bundles , designed to resemble challenging branching , kissing and crossing structures at angles between and degrees , with various curvature and bundle diameters ranging from to .an image indicating the ground truth fiber configuration is shown in the centre of fig .[ fig : isbifull ] .the idea behind the signal simulation is that every voxel is subdivided into multiple sub - voxels , each one with its own attenuation profile .the final signal arrives from integrating the contribution of all the sub - voxels .then , it is possible to combine multiple compartment types in every voxel with added rician noise .this allows for modelling complex configurations as well as taking into account partial volume effects .while the numerical fiber generator uses a tensor - like model to simulate the signal in the sub - voxels , phantomas uses a charmed - based model .the charmed model based on the sderman - jnsson cylinder model captures well the non - gaussian behaviour of the diffusion signal for large b - values .the main reason why we selected the isbi phantom is that it is linked with the tractometer that allows for performing quantitative evaluations of the tractography results , using global metrics as demonstrated in the subsequent experiments . for the experiments presented in this section we used 64 uniformly distributed gradient directions using a -value of with different signal to noise ratios ( snrs ) .we use spherical harmonics in csd with maximal order 8 , resulting in estimated coefficients on each position .we then enhance the resulting fod functions using our contour enhancement algorithm with varying parameters . from the evolutions described in eq .( [ eq : contenh ] ) we see by a basic rescaling argument that it is sufficient to vary and the ratio . the larger this ratio , the more preference the spatial diffusion gets over the angular diffusion , resulting in elongated kernels ( visualized by thin glyphs ) .a smaller ratio is better suited in regions where the curvature of bundles is higher ( visualized by thicker glyphs ) .the higher the diffusion time , the more context is taken into account .when is too large , fiber bundles with high curvature can be damaged or false positives could be created . 
taking this into consideration ,we choose our parameters as follows : we fix spatial diffusivity parameter , we take the angular diffusivity parameter and diffusion times $ ] .( top ) and .the colors correspond to the direction of the fibers .the dataset consists of crossing , branching and kissing fiber bundles .the tractography on enhanced csd results in better aligned fibers and a fuller reconstruction of the bundles .the ground truth configuration of the bundles is depicted in the center . ]tractography results for the entire dataset are shown in fig .[ fig : isbifull ] .we can recognize the positive effect of the enhancements on deterministic tractography : we see less dropouts , better aligned fibers and better continuation of fibers at crossings .an extensive quantification of the performance of our method is done at the voxel level using the fods and at the macroscopic level using tractography in sections [ se : localmetrics ] and [ se : globalmetrics ] , respectively .both sections support the results summarized in fig .[ fig : isbiresults ] .we compare reconstructed fods locally with the ground truth using only the orientation of the peaks .let be the set of voxels in the white matter mask , then we denote the ground truth number of peaks in a voxel by and the orientations corresponding to the peaks by , .maxima of the constructed fod are found by evaluating the fods on a order icosahedron tessellation with antipodally symmetric points , giving an angular resolution of less than degree .maxima are taken into account only if it exceeds a threshold of , the same value we use as threshold in the tractography .let be the set of peak orientations in voxel estimated from the fod .the average angular error in degrees can then be computed by : in the top row of fig .[ fig : isbiresults ] we show the effects contour enhancement for different ratios of and upon variation of the diffusion time .the results are given for substantially low snr levels , and and .these snrs are computed w.r.t . the non - dw image .specifically , if the b=0 intensity is 1 then the standard deviation of the rician noise distribution is 1/snr . in all cases a clear improvementis found compared to csd without enhancements and the more noise , the more the angular error is decreased .higher diffusion times give better results and around the angular error is almost stable .it can also be seen that the combination of csd with enhancements at lower snrs gives lower angular errors than just csd for the higher snrs .there is no significant difference in the fods between the different values . even though it is visible that more angular diffusion leads to fatter glyphs , for the orientation of the peaks the precise value of is not of great importance : the angular errors for are slightly smaller , but there is not much difference with the higher values of . + and four different snrs as we increase the diffusion time .the top row shows the average angular error of the fod peaks , the rows below show the average bundle coverage ( abc ) , connection to seed ratio ( csr ) and the valid connection to connection ratio ( vccr ) , computed from the tractography results . 
] at the macroscopic level we are interested in the impact of the enhanced local reconstruction on the quality of the global connectivity .the deterministic mrtrix tractography is used as described in section [ se : tractography ] , with seeds randomly selected in the white matter mask .the tracks have a minimum length of mm and new seed points are chosen until 10000 streamlines are selected . for every fod, the tractography is repeated five times with the same settings , to average out the variability in the tracking algorithm output .we then use the tractometer to perform a fiber tracking analysis based on the ground truth and the five results are averaged .the tractometer outputs values for various metrics , from which we use the valid connections ( vc ) , invalid connections ( ic ) and no connections ( nc ) .they indicate the percentage of tracks that correctly connect , incorrectly connect or do not connect gray matter areas in the dataset , respectively .we also use the average bundle coverage ( abc ) , the percentage of voxels in a bundle that is crossed by a valid streamline , averaged over all bundles .we combine the ( vc ) , ( ic ) and ( nc ) in two metrics introduced in : * connection to seed ratio ( csr ) , which represents the probability that a generated fiber actually connects two gray matter areas , computed as . * valid connection to connection ratio ( vccr ) , the probability that a connecting fiber is correct , computed as vc/(vc+ic ) .the results for the abc , csr and vccr with the same enhancement parameters and snrs as for the local metric are given in fig .[ fig : isbiresults ] .similar remarks hold for the global metrics as for the angular error .for all three metrics and all snrs the enhancements lead to an improvement compared to csd , the only exception being the abc for and . furthermore , as the snr decreases , the larger diffusion times are beneficial and the more significant the improvement is .the best results are obtained for .we expect that truncation of the spherical harmonics already introduces some angular smoothing of the fods on this artificial dataset , explaining the small effect of in the experiments . furthermore , we see that the diffusion time truly acts as a regularization parameter , resulting in a robustness for the metrics with respect to the snrs : the higher the diffusion time , the smaller the differences in the metrics between the different snrs . seeding from the white matter voxels can lead to an over - representation of the number of fibers in longer fiber bundles with respect to the shorter bundles .the longer bundles thereby have a larger contribution to the global metrics than the shorter bundles , which could lead to an overestimation of the fiber bundles .as proposed in , we compared the global metrics when seeding from the gray / white matter interface for csd and one specific set of enhancement parameters .the global metrics for that seeding strategy were slightly lower for csd and comparable when including enhancements .for the sake of comparing our enhancement method with csd , we therefore believe it is fair to use seeding from the white matter mask . the convincing improvement in the global metrics is supported by fig . [ fig : prtscrns ] , that shows a selection of the fiber bundles in the dataset . 
it can be seen that after enhancements , there are more valid connections in the green bundle and less wrong exits in the red bundle , leading to a higher ( vccr ) and a better bundle coverage .the glyphs in the top row show that the enhancements improve alignment of glyphs , especially at the boundary of the fiber bundles , where the original csd result tends to suffer from partial volume effects . and the parameters used for the enhancements are , , . the ground truth image with the same viewpoint as the bottom figures is depicted on the left .] in this experiment we consider a dw - mri dataset of a part of a human brain , previously used in .the study was approved by the local ethical commitee of maastricht university , and informed written consent was obtained from the subject .although the dataset consists of only 10 axial slices , the corpus callosum , corona radiata and superior longitudinal fasciculus are ( partly ) present in the data . we show that the combination of csd and enhancement is well - suited for different combinations of the -value and the number of gradient directions used in the acquisition .furthermore , we make a qualitative comparison with the dti - based method of on this dataset and conclude with a brief quantitative comparison with this method on the dataset of [ se : hardirecon ] .the acquisition was performed on a 3 t siemens allegra scanner , with fov 208x208 mm and voxel size 2x2x2 mm . during the data acquisition ,a brain region consisting of 10 axial slices was scanned with the following combinations of -values and , the number of orientations : s / mm with , s / mm with and s / mm with .we use again csd with spherical harmonics up to order 8 .the higher -value is obtained by using a stronger gradient pulse , making the acquisition more sensitive to detail in the tissue structure , but also inducing a lower snr .increasing the number of gradient directions gives a better angular resolution .we use deterministic tractography , with three seed regions manually selected in the middle of the corpus callosum , corona radiata and superior longitudinal fasciculus . in the right column of fig .[ fig : trackingcomparison ] we show that after enhancements , the fod allows for a more coherent reconstruction of the three bundles . especially in the region where the three bundles come together, it can be seen that the fibers have a better propagation through the crossings . moreover , the fods after enhancements are very similar to each other , visible in the glyph visualization , leading to three tractography results supporting similar fiber bundles .this is an improvement with respect to csd without enhancement , shown in the left column of fig .[ fig : trackingcomparison ] .there we find more noisy fods with more variation between the different protocols .this is also reflected in the tractography results , that contain more spurious fibers than after the enhancements .we conclude , just like in the first experiment on the phantom data , that applying enhancements induces more robust tractography also on real dw - mri data , in this case in the sense that it is less sensitive to the acquisition parameters and . , , .all three bundles are more apparent after enhancements and more fibers pass the crossings . 
] in the next experiment we compare the performance of our combination of csd with enhancements with the method in which proposed to combine dti with _ non - linear _ pde - based enhancement obtained from successively applying erosions and diffusions .let us briefly describe this method , for details we refer to , and an implementation of the pde enhancements can be found in the hardi package for mathematica available at ( http://bmia.bmt.tue.nl/people/rduits/hardialgorithms.zip ) .first an fod on positions and orientations that we call was constructed via a transformation of the tensor field fitted to the data , according to the following definition : this fod is then sharpened with pde erosions , a type of morphological enhancement adapted from , on and regularized with nonlinear diffusions to find crossing structures from dti . previously in , the same dataset as in fig .[ fig : trackingcomparison ] for acquisition parameters s / mm and was processed .here we compare the fod obtained with csd , that we call here , with in the top and bottom figures , respectively , of fig .[ fig : glyphcomparison ] . unlike dti , which is limited by the gaussian assumption of the diffusion profile, csd can estimate multiple fiber orientations within a voxel .furthermore , we see that the large glyphs in the centrum semiovale in the bottom figure are not apparent in . applying ( linear )enhancements , as explained in section [ se : enhancement ] , to gives the second figure , and the approach in using erosions/(nonlinear ) enhancements applied to gives the second figure from below .it can be seen that also the enhanced dti glyphs supports multiple fiber directions within voxels via extrapolation , but at the cost of high regularization .another noticeable difference is the fact that the glyphs in the csd case are slimmer and crossings are more clearly defined . whether two separate maxima are visible at a crossing is less dependent on the diffusion parameters in the pde diffusion . .erosions and nonlinear diffusions for the dti - based method are done with parameters as in .the tractographies corresponding to the two methods are shown in the middle .outliers such as the red fiber , indicated by the arrow , occur due to the use of high regularization coefficients . ] besides the visual comparison of the fod glyphs , we provide deterministic tractography results for both procedures in the middle of fig .[ fig : glyphcomparison ] . it can be observed that both methods produce reasonable results , although the one obtained from the enhanced dti dataset seems oversmoothed and outliers ( indicated with the yellow arrow ) can occur .this is due to the extreme diffusion parameters needed to perform the fod extrapolation .we find that visually the combination of csd and linear enhancements yields better tractography than dti combined with erosions and nonlinear enhancements . to provide a more quantitative and complete comparison of dti , dti and nonlinear enhancements , csd and csd with linear enhancements , we also include results of the experiment in section [ se : hardirecon ] for the dti methods , see table [ tab : isbiresults ] .we heuristically determined good parameter settings for the nonlinear enhancement of dti : erosions ( * ? ? ? * eq .( 59 ) ) with , , and diffusion ( * ? ? ?* eq . 
( 55 ) ) with , , and .in table [ tab : isbiresults ] is shown that applying enhancements for contextual regularization of the fod is beneficial for both dti and csd .the lower the snr , the more evident the improvements become .furthermore , we see that in terms of the local metric , the angular error of the peak orientations , the dti methods can compete with the csd based methods. however , the global metrics are significantly higher for csd based methods .the quantitative results on the phantom data in table [ tab : isbiresults ] are in line with the qualitative comparison on real data in fig .[ fig : glyphcomparison ] ..for two snr values , the results are shown for the dti method described in section [ se : evaluationdwmri ] , with or without nonlinear enhancements .we compare with csd and a specific instance of enhanced csd with parameters , .for local metric lower is better , for the other metrics higher is better . in boldfaceare the best results for the dti and csd methods . + [ cols="<,<,<,<,<",options="header " , ] [ tab : isbiresults ] the optic radiation ( or ) is a white matter fiber bundle connecting the primary visual cortex and the lateral geniculate nucleus ( lgn ) , see fig .[ fig : prtscrn_or ] .the most anterior part of the or is called the meyer s loop ( ml ) , of which the exact location is of interest for treatment of temporal lobe epilepsy . during neurosurgery ,a part of the temporal lobe is resected . to ensure that the or remains intact to prevent visual field defect , it is crucial to know the distance from the tip of the meyer s loop to the temporal pole ( ml - tp ) , which shows large interpatient variability .we use dw - mri scans of four subjects , performed on a 3.0 t philips achieva mr scanner , with s / mm , and a spatial resolution of 2x2x2 mm .all subjects gave written informed consent ; the study was approved by the medical ethics committee of maastricht university medical center ( n 43386.068 ) .the data is acquired from healthy volunteers , and ground - truth ml - tp distance is not known . therefore accuracy of this measure of our methods can not be checked , instead we focus on consistency and reproducibility .we apply csd to the data to construct the fod , with spherical harmonics up to order 6 requiring the estimation of 28 coefficients ( as 32 directions are insufficient to estimate the 45 coefficients when a spherical harmonic order 8 is used , when not using super - resolution as in ) .we seed from the lgn and include all fibers that reach the primary visual cortex .both regions of interest are selected manually on a t1-weighted image .we use probabilistic fiber tracking as described in section [ se : tractography ] .we demonstrate the effect of the enhancement of csd and the use of the fbc measure in sections [ se : methoda ] and [ se : methodb ] , respectively , in this relevant clinical setting . a quantitative comparison of the four methods csd ( o ) , csd + enhancement ( a ) , csd + fbc ( b ) and csd + enhancement + fbc ( a+b ) is provided in section [ se :quantitativeor ] .we show that the enhancement and/or the removal of spurious fibers , but in particular the combination of both methods , allows for a more stable computation of the ml - tp distance than the original tractography result . in this section ,we apply the pde enhancement ( step a ) to the csd fod as before , with parameter settings , and . 
after the enhancement we apply the sharpening deconvolution transform and probabilistic tractography with 10000 streamlines .we compare the results of the tractography on the subjects both before and after the enhancement in fig .[ fig : or4subjects ] .we see that the tracking on enhanced data generally shows less spurious fibers , and has a better pronounced tip of the meyer s loop . however , the optic radiation is a highly curved structure , where the advantage of the enhancement of elongated structures can not be fully exploited . to further reduce the spurious fibers ,we explore our other approach in the next section . in this section, we apply probabilistic tractography on subject 1 , with 20000 streamlines and including state of the art data scoring as in ( only relying on the data term , i.e. in ( * ? ? ?* eq.(11 ) ) ) , see fig .[ fig : orcomparison ] . the kernel parameters for the coherence quantification ( step b )are set to , and for the convolution .let be the set of the 1000 most anterior fibers in a tractography of the or , that roughly form the meyer s loop .we compute the lfbc and subsequently the rfbc for all the fibers in .then we take , the rfbc corresponding to the `` central '' fiber , in the sense that it is most coherent with the fiber bundle .we define the filtered set as this means the parameter acts as a threshold parameter and can be set such that fibers with a high spuriousness are removed .the fiber point in that is closest to the temporal pole defines the ml - tp distance .we repeat the probabilistic tractography five times with the same settings on the same data , to qualitatively compare different stochastic realizations of the tractography method .the original or reconstructions are shown in the top row of fig .[ fig : orcomparison ] .we observe that due to the presence of spurious fibers , the tip of the meyer s loop ( indicated by the orange spheres ) is estimated at different locations .when we set the threshold , removing in these cases between % and % of the most spurious fibers , we obtain the results as shown in the bottom row of fig .[ fig : orcomparison ] .it can be seen that the resulting fiber bundles are very similar to each other , demonstrating less variation in the localization of the tip . to support our claims of the two previous sections , we test the effect of our methods on the stability of the ml - tp distance under different stochastic realizations .here we perform probabilistic tractography with 10000 fibers ten times with the same settings , for each of the four subjects and each of the four methods ( csd , csd + enh , csd + fbc and csd + enh + fbc ) .the fbc measure is computed from the 1000 most anterior fibers as in the previous experiment and the threshold is set to .we compare the mean ml - tp distance and sample standard deviation determined from the tracking results of each of the methods .the results are summarized in the boxplots in fig .[ fig : mltp_distances ] .the figure strongly supports the application of the enhancements methods . 
for subjects 1 - 3the ml - tp distance shows much less variation when including the fbc .for all subjects also ( csd + enh ) gives more stable results than just csd .moreover , in all cases the combination ( csd + enh + fbc ) outperforms csd and for all but subject 1 the combined method ( csd + enh + fbc ) also gives better results than the enhancement or fbc individually .it should be remarked that higher up the graph indicates a larger resection if used for pre - surgical evaluation , which is not necessarily positive .however , we prefer to have a stable and reproducible method that can be used with a safety margin , then a method that is more conservative , but shows large variations .we have proposed two new tools to improve alignment of fibers in tractography results : ( a ) the combination of csd with contextual pde enhancements and ( b ) a fiber to bundle coherence measure to classify spurious fibers .both approaches rely on the same contextual processing via pdes on the space of coupled positions and orientations .we validate our methodology with a variety of experiments on synthetic and human data . in the first experiment we consider a digital phantom that simulates dw - mri data of a challenging configuration of multiple neural - like fiber bundles for different noise levels , see fig .[ fig : isbifull ] .the combination of csd with enhancements and subsequent deterministic tracking was extensively tested for varying enhancement parameters , see fig .[ fig : isbiresults ] .the enhanced fod peaks were compared with the ground truth fiber orientations , showing for all snrs that the maxima of the enhanced fod coincide better with the ground truth peaks than without application of enhancement . also , this improvement is particularly high for very low snr values . to quantitatively evaluate the impact of the enhancement on the tractographies we used the tractometer evaluation system .the results , shown in fig . 
[ fig : isbiresults ] confirm the benefit , for all the metrics considered , of including the enhancement .also an improved stability of the metrics with respect to different enhancement parameters is observed .furthermore , we found that data with a lower snr requires more regularization , obtained by choosing a higher diffusion time in the enhancement .these quantitative evaluations of local and global metrics are supported by the qualitative results in figs .[ fig : isbifull ] and [ fig : prtscrns ] , where we saw that after enhancement fibers are better aligned and propagate better through crossings .the second experiment is performed on human data of a representative area of the brain with crossing fiber bundles .we evaluate our combination of csd and enhancement for three different ( single - shell ) acquisition protocols , corresponding to different -values and number of gradient directions .we observed , see fig .[ fig : trackingcomparison ] , that whereas tractography on csd without enhancement showed notable differences between the three acquisition protocols , tractography after our enhancement lead to a qualitatively similar reconstruction in all cases .this implies that the application of enhancement in the processing pipeline makes the tractography results less dependent on the scanning protocol used .we use the same dataset and the phantom dataset to compare our method qualitatively and quantitatively with previous work in which sharpening methods and nonlinear enhancement pdes are applied to dti .we observed qualitatively on real data in fig .[ fig : glyphcomparison ] and quantitatively in table [ tab : isbiresults ] the advantage of csd , that allows to use linear enhancements with less extreme regularization parameters than with the dti based method , resulting in a more reliable tractography . for our second approach to improve fiber alignment , we introduced a fiber to bundle coherence measure that can be used for detecting and filtering spurious fibers .the fiber to bundle coherence ( fbc ) is computed from a tractography based density that we constructed using the same pde foundation as in the first method . as an application we considered the reconstruction of the optic radiation , a fiber bundle of which the position of the anterior extent ( the meyer s loop ) is of interest for temporal lobe resection surgery .accurate and stable localization of the tip of the meyer s loop is difficult due to the presence of spurious fibers , as shown in fig .[ fig : prtscrn_or ] .we demonstrated in figs .[ fig : or4subjects ] , [ fig : orcomparison ] and [ fig : mltp_distances ] that either by enhancement of the csd fod , or by removing the most spurious fibers using the fbc measure leads to a robust probabilistic tractography .in particular , the combination of both methods in one pipeline allows for a more stable localization of the tip of the meyer s loop and a more stable determination of the meyer s loop to temporal pole distance .our experiments show that our pde enhancement methods for contextual processing are an effective and widely applicable tool to both enhance csd data and to remove spurious fibers from tractographies .while we used csd to construct an fod , the pde enhancement can be applied to an fod obtained with any other method .we have seen that both our methods improve fiber alignment in tractography results and hence provide information on structural connectivity of the brain white matter more robustly . 
in the future , we aim to improve this framework by using data - adaptive smoothing , for example using local gauge frames .we would like to thank dr .a. roebroeck from the faculty of psychology & neuroscience , maastricht university for providing us with the human data used in section [ se : evaluationdwmri ] .the study was approved by the local ethical commitee of maastricht university .we gratefully acknowledge academic center for epileptology kempenhaeghe & maastricht umc+ for providing us with the healthy volunteer data used in section [ se : orexperiment ] .the study was approved by the medical ethics committee of maastricht university medical center ( n 43386.068 ) .informed written consent was obtained from all subjects .the research leading to the results of this paper has received funding from the european research council under the european community s 7th framework programme ( fp7/20072014)/erc grant agreement no . 335555 .le bihan d , breton e , lallemand d , grenier p , cabanis e , et al .( 1986 ) mr imaging of intravoxel incoherent motions : application to diffusion and perfusion in neurologic disorders .radiology 161 : 401407 .basser pj , mattiello j , le bihan d ( 1994 ) mr diffusion tensor spectroscopy and imaging .biophysical journal 66 : 259267 .tournier jd , mori s , leemans a ( 2011 ) diffusion tensor imaging and beyond .magnetic resonance in medicine 65 : 15321556 .jeurissen b , leemans a , tournier jd , jones dk , sijbers j ( 2013 ) investigating the prevalence of complex fiber configurations in white matter tissue with diffusion magnetic resonance imaging .human brain mapping 34 : 27472766 .descoteaux m , poupon c ( 2012 ) diffusion - weighted mri . in : comprehensive biomedical physics .tournier jd , calamante f , connelly a ( 2012 ) mrtrix : diffusion tractography in crossing fiber regions . international journal of imaging systems and technology 22 : 53 - 66 .jones dk , cercignani m ( 2010 ) twenty - five pitfalls in the analysis of diffusion mri data .nmr in biomedicine 23 : 803820 .wiest - daessl n , prima s , coup p , morrissey sp , barillot c ( 2007 ) non - local means variants for denoising of diffusion - weighted and diffusion tensor mri . in : miccai ( 2 ) .springer , volume 4792 of _ lecture notes in computer science _ ,344 - 351 .coup p , yger p , prima s , hellier p , kervrann c , et al .( 2008 ) an optimized blockwise nonlocal means denoising filter for 3-d magnetic resonance images .ieee transactions in medical imaging 27 : 425 - 441 .descoteaux m , wiest - daessl n , prima s , barillot c , deriche r ( 2008 ) impact of rician adapted non - local means filtering on hardi . in : miccaispringer , volume 5242 of _ lecture notes in computer science _ ,122 - 130 .poupon c , clark ca , frouin v , rgis j , bloch i , et al .( 2000 ) regularization of diffusion - based direction maps for the tracking of brain white matter fascicles . 12 : 184195 .coulon o , alexander dc , arridge sr ( 2001 ) a regularization scheme for diffusion tensor magnetic resonance images . in : ipmi .springer , volume 2082 of _ lecture notes in computer science _ ,tschumperl d , deriche r ( 2001 ) diffusion tensor regularization with constraints preservation . in : computer vision and pattern recognition , 2001 .cvpr 2001 .proceedings of the 2001 ieee computer society conference on .ieee , volume 1 , pp .burgeth b , didas s , weickert j ( 2009 ) a general structure tensor concept and coherence - enhancing diffusion filtering for matrix fields . 
in : visualization and processing of tensor fields , springer berlin heidelberg , mathematics and visualization .305 - 323 .burgeth b , breu m , pizarro l , weickert j ( 2009 ) pde - driven adaptive morphology for matrix fields . in : ssvm .springer , volume 5567 of _ lecture notes in computer science _ ,247 - 258 .tuch ds ( 2004 ) q - ball imaging . magnetic resonance in medicine 52: 13581372 .descoteaux m , angelino e , fitzgibbons s , deriche r ( 2007 ) regularized , fast , and robust analytical q - ball imaging .magnetic resonance in medicine 58 : 497510 .descoteaux m , deriche r , knosche t , anwander a ( 2009 ) deterministic and probabilistic tractography based on complex fibre orientation distributions .medical imaging , ieee transactions on 28 : 269286 .barmpoutis a , vemuri bc , howland d , forder jr ( 2008 ) extracting tractosemas from a displacement probability field for tractography in dw - mri . in : miccai ( 1 ) .springer , volume 5241 of _ lecture notes in computer science _ ,pp . 9 - 16 .goh a , lenglet c , thompson pm , vidal r ( 2009 ) estimating orientation distribution functions with probability density constraints and spatial regularity . in : miccaispringer , volume 5761 of _ lecture notes in computer science _877 - 885 .schultz t ( 2012 ) towards resolving fiber crossings with higher order tensor inpainting . in : new developments in the visualization and processing of tensor fields ,. 253265 .reisert m , kiselev vg ( 2011 ) fiber continuity : an anisotropic prior for odf estimation .ieee transactions on medical imaging 30 : 1274 - 1283 .tax c , duits r , vilanova a , ter haar romeny b , hofman p , et al . (2014 ) evaluating contextual processing in diffusion mri : application to optic radiation reconstruction for epilepsy surgery .franken e ( 2008 ) enhancement of crossing elongated structures in images .thesis , eindhoven university of technology , department of biomedical engineering , the netherlands .duits r , franken e ( 2011 ) left - invariant diffusions on the space of positions and orientations and their application to crossing - preserving smoothing of hardi images .international journal of computer vision 92 : 231 - 264 .creusen ej , duits r , dela haije tcj ( 2011 ) numerical schemes for linear and non - linear enhancement of dw - mri . in : ssvm .springer , volume 6667 of _ lecture notes in computer science _ ,duits r , dela haije tcj , creusen ej , ghosh a ( 2013 ) morphological and linear scale spaces for fiber enhancement in dw - mri .journal of mathematical imaging and vision 46 : 326 - 368 .franken e , duits r ( 2009 ) crossing - preserving coherence - enhancing diffusion on invertible orientation scores .international journal of computer vision 85 : 253278 .mumford d ( 1994 ) elastica and computer vision . in : algebraic geometry and its applications , springer new york .491 - 506 .zweck jw , williams lr ( 2000 ) euclidean group invariant computation of stochastic completion fields using shiftable - twistable functions . in : journal of mathematical imaging and vision .. 
100116 .sanguinetti g , citti g , sarti a ( 2010 ) a model of natural image edge co - occurrence in the rototranslation group .journal of vision 10 .august j , zucker sw ( 2003 ) sketches with curvature : the curve indicator random field and markov processes .ieee transactions on pattern analysis and machine intelligence 25 : 387 - 400 .r , van almsick m ( 2008 ) the explicit solutions of linear left - invariant second order stochastic evolution equations on the 2d euclidean motion group . quarterly of applied mathematics 66 : 2767 .momayyezsiahkal p , siddiqi k ( 2013 ) 3d stochastic completion fields for mapping connectivity in diffusion mri .ieee transactions on pattern analysis and machine intelligence 35 : 983 - 995 .citti g , sarti a ( 2006 ) a cortical based model of perceptual completion in the roto - translation space .journal of mathematical imaging and vision 24 : 307 - 326 .duits r , franken e ( 2010 ) left - invariant parabolic evolutions on se(2 ) and contour enhancement via invertible orientation scores part i : linear left - invariant diffusion equations on se(2 ) .quarterly of applied mathematics 68 : 255292 .agrachev a , boscain u , gauthier jp , rossi f ( 2009 ) the intrinsic hypoelliptic laplacian and its heat kernel on unimodular lie groups . journal of functional analysis 256 : 2621 - 2655 . prkovska v , rodrigues p , duits r , haar romenij bt , vilanova a ( 2010 ) extrapolating fiber crossings from dti data : can we infer similar fiber crossings as in hardi ? in : miccai .workshop on computational diffusion mri .prkovska v , andorr m , villoslada p , martinez - heras e , duits r , et al .( 2015 ) contextual diffusion image post - processing aids clinical applications . in : visualization and processing of tensors and higher order descriptors for multi - valued data .reisert m , skibbe h ( 2012 ) left - invariant diffusion on the motion group in terms of the irreducible representations of so(3 ) .computing research repository abs/1202.5414 .reisert m , skibbe h ( 2013 ) fiber continuity based spherical deconvolution in spherical harmonic domain . in : mori k , sakuma i , sato y , barillot c , navab n , editors , miccai ( 3 ) .springer , volume 8151 of _ lecture notes in computer science _ ,493 - 500 .dela haije t , duits r , tax c ( 2014 ) sharpening fibers in diffusion weighted mri via erosion . in : westin c , vilanova a , burgeth b , editors , visualization and processing of tensors and higher order descriptors for multi - valued data , mathematics and visualization .tournier jd , yeh ch , calamante f , cho kh , connelly a , et al .( 2008 ) resolving crossing fibres using constrained spherical deconvolution : validation using diffusion - weighted imaging phantom data .neuroimage 42 : 617 - 625 .tax c , jeurissen b , vos s , viergever m , leemans a ( 2014 ) recursive calibration of the fiber response function for spherical deconvolution of diffusion mri data .neuroimage 86 : 6780 .schultz t , groeschel s ( 2013 ) auto - calibrating spherical deconvolution based on odf sparsity . in : miccai 2013 , springer .. 
663670 .jeurissen b , tournier jd , dhollander t , connelly a , sijbers j ( 2014 ) multi - tissue constrained spherical deconvolution for improved analysis of multi - shell diffusion mri data .neuroimage 103 : 411 - 426 .roine t , jeurissen b , perrone d , aelterman j , leemans a , et al .( 2014 ) isotropic non - white matter partial volume effects in constrained spherical deconvolution .frontiers in neuroinformatics 8 .roine t , jeurissen b , perrone d , aelterman j , philips w , et al .( 2015 ) informed constrained spherical deconvolution ( icsd ) .medical image analysis : in press .calamante f , tournier jd , jackson gd , connelly a ( 2010 ) track - density imaging ( tdi ) : super - resolution white matter imaging using whole - brain track - density mapping .neuroimage 53 : 12331243 .dhollander t , emsell l , van hecke w , maes f , sunaert s , et al .( 2014 ) track orientation density imaging ( todi ) and track orientation distribution ( tod ) based tractography .neuroimage 94 : 312336 .ct ma , bor a , girard g , houde jc , descoteaux m ( 2012 ) tractometer : online evaluation system for tractography . in : medical image computing and computer - assisted intervention miccai 2012 , springer .. 699706 .ct ma , girard g , bor a , garyfallidis e , houde jc , et al .( 2013 ) tractometer : towards validation of tractography pipelines .medical image analysis 17 : 844 - 857 .daducci a , canales - rodrguez ej , descoteaux m , garyfallidis e , gur y , et al .( 2014 ) quantitative comparison of reconstruction methods for intra - voxel fiber recovery from diffusion mri .ransactions on medical imaging 33 : 384399 .falconer ma , serafetinides ea ( 1963 ) a follow - up study of surgery in temporal lobe epilepsy .journal of neurology , neurosurgery , and psychiatry 26 : 154 .powell h , parker g , alexander d , symms m , boulby p , et al .( 2005 ) mr tractography predicts visual field defects following temporal lobe resection .neurology 65 : 596599 .sherbondy aj , dougherty rf , ben - shachar m , napel s , wandell ba ( 2008 ) contrack : finding the most likely pathways between brain regions using diffusion tractography .journal of vision 8 : 15.116 .meesters s ( 2013 ) diffusion weighted tractography to reconstruct the optic radiation in support of temporal lobe epilepsy surgery .master s thesis , eindhoven university of technology .tournier jd , calamante f , gadian dg , connelly a ( 2004 ) direct estimation of the fiber orientation density function from diffusion - weighted mri data using spherical deconvolution .neuroimage 23 : 1176 - 1185 .tournier jd , calamante f , connelly a ( 2007 ) robust determination of the fibre orientation distribution in diffusion mri : non - negativity constrained super - resolved spherical deconvolution .neuroimage 35 : 1459 - 1472 .driscoll j , healy d ( 1994 ) computing fourier transforms and convolutions on the 2-sphere .advances in applied mathematics 15 : 202 - 250 .tournier jd , calamante f , connelly a ( 2013 ) determination of the appropriate b value and number of gradient directions for high - angular - resolution diffusion - weighted imaging .nmr in biomedicine .duits r , burgeth b ( 2007 ) scale spaces on lie groups . in : ssvm .springer , volume 4485 of _ lecture notes in computer science _ ,pp . 
300 - 312 .creusen e , duits r , vilanova a , florack l ( 2013 ) numerical schemes for linear and non - linear enhancement of dw - mri .numerical mathematics : theory , methods and applications 6 : 326 - 368 .rodrigues p , duits r , ter haar romeny bm , vilanova a ( 2010 ) accelerated diffusion operators for enhancing dw - mri . in : proc . of the 2nd eg conference on vcbm .eurographics association , pp .hrmander l ( 1967 ) hypoelliptic second order differential equations .acta mathematica 119 : 147 - 171 .daducci a , caruyer e , descoteaux m , thiran jp ( 2013 ) . reconstruction challenge .ieee international symposium on biomedical imaging .http://hardi.epfl.ch / statis / events/2013_isbi/. ( 2014 ) mathematica .wolfram research inc ., 10.0 edition .chamberland m , whittingstall k , fortin d , mathieu d , descoteaux m ( 2014 ) real - time multi - peak tractography for instantaneous connectivity display .frontiers in neuroinformatics 8 .caruyer e , daducci a , descoteaux m , houde jc , thiran jp , et al .( 2014 ) phantomas : a flexible software library to simulate diffusion mr phantoms .international society for magnetic resonance in medicine .close tg , tournier jd , calamante f , johnston la , mareels i , et al .( 2009 ) a software tool to generate simulated white matter structures for the assessment of fibre - tracking algorithms .neuroimage 47 : 1288 - 1300 .assaf y , basser pj ( 2005 ) composite hindered and restricted model of diffusion ( charmed ) mr imaging of the human brain .neuroimage 27 : 4858 .sderman o , jnsson b ( 1995 ) restricted diffusion in cylindrical geometry .journal of magnetic resonance , series a 117 : 94 - 97 .girard g , whittingstall k , deriche r , descoteaux m ( 2014 ) towards quantitative connectivity analysis : reducing tractography biases .neuroimage 98 : 266 - 278 .smith re , tournier jd , calamante f , connelly a ( 2013 ) sift : spherical - deconvolution informed filtering of tractograms .neuroimage 67 : 298312 .smith re , tournier jd , calamante f , connelly a ( 2012 ) anatomically - constrained tractography : improved diffusion mri streamlines tractography through effective use of anatomical information .neuroimage 62 : 19241938 .aganj i , lenglet c , jahanshad n , yacoub e , harel n , et al .( 2011 ) a hough transform global probabilistic approach to multiple - subject diffusion mri tractography .medical image analysis 15 : 414425 .nilsson d , starck g , ljungberg m , ribbelin s , jnsson l , et al .( 2007 ) intersubject variability in the anterior extent of the optic radiation assessed by tractography .epilepsy research 77 : 1116 .duits r , janssen m , hannink j , sanguinetti g ( 2015 ) locally adaptive frames in the roto - translation group and their applications in medical imaging .
we propose two strategies to improve the quality of tractography results computed from diffusion weighted magnetic resonance imaging ( dw - mri ) data . both methods are based on the same pde framework , defined in the coupled space of positions and orientations , associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures . in the first method we use the enhancement pde for contextual regularization of a fiber orientation distribution ( fod ) that is obtained on individual voxels from high angular resolution diffusion imaging ( hardi ) data via constrained spherical deconvolution ( csd ) . thereby we improve the fod as input for subsequent tractography . secondly , we introduce the fiber to bundle coherence ( fbc ) , a measure for quantification of fiber alignment . the fbc is computed from a tractography result using the same pde framework and provides a criterion for removing the spurious fibers . we validate the proposed combination of csd and enhancement on phantom data and on human data , acquired with different scanning protocols . on the phantom data we find that pde enhancements improve both local metrics and global metrics of tractography results , compared to csd without enhancements . on the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and they reduce the variability of the tractography output with respect to the acquisition parameters . finally , we show that both the enhancement of the fods and the use of the fbc measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography . this is shown in a clinical application : the reconstruction of the optic radiation for epilepsy surgery planning .
two perspectives are used for modeling solid and amorphous materials in the continuum limit .physicists and rheologists tend to prefer an eulerian treatment where the material can be viewed essentially as a fluid equipped with state variables that provide `` memory '' ( see ) . on the other hand , many in the engineering community prefer a lagrangian description , where the material body is decomposed as a network of volume elements that deform under stress ( see ) .continuum mechanical laws derive from point - particle mechanics applied to a continuum element .the primitive form is resultantly lagrangian , though an eulerian conversion can always be asserted one rewrites the constitutive laws in rate form and expands all material time derivatives in terms of fixed - space derivatives .be that as it may , a rigorous eulerian switch can be a painstaking mathematical task .this is especially true of solid - like constitutive laws , which often depend on nonlinear tensor operations and coupled history - dependent state variables , leading to unduly complicated eulerian rate expansions . to dodge these difficulties , those preferring eulerian - framehave generally resorted to approximations or added conditions that simplify the final constitutive form . while sometimes warranted , the connection back to lagrangian mechanics becomes clouded , complicating the process of deriving physically motivated constitutive behavior . in this paper , a field we call the `` reference map ''is utilized to construct and implement solid - like constitutive laws in eulerian - frame with _ no _ added approximations .the way the map provides `` memory '' to the system admits immediate computation of kinematic variables crucial to lagrangian solid mechanics . to maintain a clear presentation ,several avenues of motivation are first provided that discuss the necessary laws of continuum mechanics and the basic quantities of solid kinematics .the theory of large - strain elasticity , hyperelasticity , is then sketched . in particular , by enabling quick access to the _ deformation gradient _tensor , the reference map can be used to accurately compute solid deformations without the approximations , ambiguities , or pre - conditions of other eulerian approaches .three non - trivial deformations are then simulated to verify these points .in eulerian frame , the flow or deformation of a canonical continuous material can be calculated by solving a system of equations that includes : the first equation upholds mass conservation , and the next two , respectively , uphold conservation of linear and angular momentum .the flow is described by the velocity field and the stresses by the cauchy stress tensor , which includes pressure contributions .a consitutive law is then asserted to close the system of equations .we ultimately intend our approach to apply to any material with a `` solid - like '' constitutive law . by solid - like ,we mean specifically laws that express the stress tensor in terms of some kinematic quantity that measures the local deformation from some nearest relaxed state .this trait reflects the microscopic basis of solid stress as arising from potential energy interactions between material microconstituents .the simplest solid - like response is isothermal elasticity , where total deflection under loading immediately determines the stresses within .a less basic example would be elasto - plasticity , where internal stresses derive from a small elastic component of the total strain . 
here, the nearest relaxed state can differ from the original unstressed state and may depend on evolving state parameters , temperature , and/or rate . to encompass the broad definition above ,a continuum description for solid - like materials necessitates a rigorous way of tracking local relative displacements over some finite time period . without making any `` small displacement '' approximations ,a general and robust continuum framework calls for a kinematic field known as the _ motion function_. suppose at time , that a body of material is in an unstressed _ reference configuration_. the body then undergoes a deformation process such that at time , an element of material originally at has been moved to .the motion is defined by .we say that the body at time is in a _deformed configuration_. the motion can be used to define the _ deformation gradient _ , which is of crucial importance in continuum solid mechanics : note that we use for gradients in only , and always write gradients in in derivative form as above . as per the chain rule ,the tensor describes local deformation in the following sense : if represents some oriented , small material filament in the reference body , then the deformation process stretches and rotates the filament to in the deformed body .also , the evolution of can be connected back to the velocity gradient via where we use for material time derivatives . since for any physical deformation , the deformation gradient admits a polar decomposition where is a rotation , and is a symmetric positive definite `` stretch tensor '' obeying .to demonstrate the use and simplicity of the method , this paper shall focus on one broad class of materials : large - strain , 3d , purely elastic solids at constant temperature . a thermodynamically valid constitutive form for such materialsis derivable with only minimal starting assumptions . known as _ hyperelasticity _ theory , it has become the preeminent elasticity formulation in terms of physicality and robustness . though other elasticity formulations exist ( e.g. hypoelasticity and other stress - rate models ) the next section will recall how these are in fact specific limitting approximations to hyperelasticity theory .a brief review of hyperelasticity is provided below to establish the key results and demonstrate the physical basis of the theory ( see for details ) .an analysis of more complex solid - like behaviors ( e.g. elasto - plasticity , hardening , thermal elasticity ) is left as future work .in essence , one seeks a noncommittal 3d extension of 1d spring mechanics , where total relative length change determines the force in a fashion independent of deformation path . to institute this , presume that the helmholtz free - energy per unit ( undeformed ) volume and cauchy stress both depend only on the local deformation : where is used to designate constitutive dependences on kinematic quantities .we also assume that if no deformation has occurred ( i.e. ) , then .some helpful physical principles refine these dependences immensely .we enforce _ frame - indifference _ by restricting the dependences to account for rotations .suppose a material element is deformed by some amount , and then the deformed element is rotated . by the frame - indifference principle, the rotation should not affect the free - energy , and should only cause the stress to co - rotate .this ultimately restricts eqs [ general ] to next , we enforce _ non - violation of the second law_. 
a continuum - level expression of the isothermal second law of thermodynamics can be written as the dissipation law where is the deformation rate , familiar from fluid mechanics . following a procedure originally developed by coleman and noll , one can prove mathematically that eqs [ less_general ] uphold ineq [ second ] under all imposable deformations only if : likewise , must correspond to a local minimum of .eq [ elasticity ] along with the zero deformation hypotheses compose the theory of hyperelasticity .the above argument demonstrates how the assertions of frame - indifference and the second - law require that the assumed dependences of eq [ general ] refine to the form of eq [ elasticity ] .each valid choice of gives an elasticity law that could represent a continuous elastic solid .the `` deductive approach '' above has become a frequently used tool in materials theory .to use hyperelasticity , or deductive solid modeling in general , the ability to calculate during a deformation is crucial . in lagrangian - frame , each point is `` tagged '' by its start point , so can always be computed by differentiating current location against initial . in eulerianthe problem is more subtle , as knowledge of past material locations must somehow be procured . as suggested in , be directly evolved by expanding the material time derivative in eq [ f_evolution ] , giving unfortunately , this can not be used to solve the general boundary value problem .the term can only be computed adjacent to boundaries if is prescribed as a boundary condition .to assign at a boundary implies that the derivative of motion in the direction orthogonal to the surface can be controlled . in the general boundary value problem ,this information is outside the realm of applicability ; stress tractions and displacements / velocity conditions can be applied at boundaries , but how these quantities change orthogonal to the surface arises as part of the deformation solution .another approach that also advects the tensor directly ( more factually the tensor ) is the eulerian godunov method of miller and colella .the method solves for elastic or elasto - plastic solid deformation by treating the equations as a system of conservation laws with a nonconservative form for the advection of .it is a sophisticated , high - order method and has had success representing solid dynamics and deformation , but is aimed primarily at unbounded domains where implementation of a boundary condition on is unneeded . for pure elasticity , several eulerian, rate - based approaches have been developed that avoid directly referring to but add in several approximations / assumptions .begin by presuming isotropy .it can be shown that this reduces eq [ elasticity ] to \label{elasticity_final}\ ] ] where are the principal invariants of the left cauchy - green tensor .one way to uphold eq [ elasticity_final ] involves first defining a _ strain measure _ that , among other features , must asymptote to in the small displacement limit . to linear order in , the elasticity lawcan then be written as taking the material time derivative gives .the chain rule on generally leads to a long expression in terms of and , which can ultimately be rewritten as some function of and .eulerian expansion of introduces a term .once again , the same problem as that encountered in eq [ f_euler ] occurs ; to compute , the full cauchy stress tensor must be assigned at the boundary . 
while certain components of can be controlled at a boundary namely the traction vector the components describing stresses along a plane orthogonal to the surface can not , in general , be prescribed . to dodge this difficulty the term presumed to be negligible . as a consequence of neglecting stress convection , one accepts certain errors in representing dynamic phenomena .ultimately , what remains is an eulerian constitutive relation for the evolution of where the function derives from the choice of strain - measure .while is sometimes called the `` strain - rate '' , we note that it is not the time rate of change of a valid strain measure ; the axes of do not rotate with the material . however , in the small displacement limit for all strain definitions . assuming small displacement and small rate of volume change , eq [ rate2 ] reduces to a simple form known as _ hypoelasticity _ where is an `` objective stress rate '' equal to plus extra terms that depend on the choice of strain measure .hypoelasticity can be seen as a specific approximation to a physically derived isotropic hyperelasticity law .be that as it may , eq [ hypo ] is oftentimes asserted as a starting principle by assigning , sometimes arbitrarily , from a list of commonly used stress rates e.g. jaumann rate , truesdell rate , green - naghdi rate ( see for a detailed review ) thereby cutting off the connection to hyperelasticity .in fact , there are infinitely many stress rate expressions upholding frame - indifference that qualify as objective hypoelastic rates .rate forms for elasticity require the assumptions and approximations listed herein , which limit their applicability .the neglect of stress convection can pay heavy consequences when attempting to represent waves or other dynamic phenomena . while eq [ rate2 ] is fairly general for isotropic linearly elastic materials ,the resulting equations usually require tedious calculation that must be redone if the stress / strain relation is changed . even in the small strain limit , hypoelasticity s presumptions of linearity and isotropy poorly represent some common materials . for instance , granular matter is nonlinear near zero strain ( due to lack of tensile support ) , and crystalline solids are not isotropic .rate elasticity , if used as a first principle , also offers no physical basis to account for thermodynamics , making it troublesome for theories of thermalized or non - equilibrium materials .to sidestep these issues , we now describe a new eulerian approach to solid mechanics .the key is to utilize a fixed - grid field that admits a direct computation of .define a vector field called the _ reference map _ by the evolution law : this advection law implies that never changes for a tracer moving with the flow .combined with the initial condition , the vector indicates where the material occupying at time originally started . by the chain rule ,a material filament obeys .thus , altogether , eqs [ basic2 ] , [ elasticity ] , [ advect ] , and [ fnew ] , along with the kinematic expression for the density , compose an eulerian system that solves exactly for hyperelastic deformation .in essence , what we are suggesting is to obtain solid stress in a fashion similar to fluid stress . for fluids ,the ( shear ) stress is given by , which is computed from the gradient of .here , we advect the primitive quantity and use its gradient to construct . 
the stress is then obtained from by the constitutive law .this approach alleviates many of the complications discussed previously that arise when attempting to directly advect a tensorial quantity like or .in particular , unlike the advection law of eq [ f_euler ] , the reference map is easily definable on boundaries provided complete velocity / displacement boundary conditions .that is , if a boundary point originally at is prescribed a displacement bringing it to at time , then .we also note that is an integral quantity of and thus a smoother function .we expect this property to be of benefit numerically compared to methods that directly advect or .the notion of a map that records initial locations of material has been defined by others in various different contexts . to these authors knowledge, it has never been used for the purposes of solving solid deformation as described above . use an `` original coordinate '' function akin to our reference map in defining a pseudo - concentration method for flow fronts .the inverse of at time , which is indeed equivalent to the field , is also discussed in belytschko for use in finite element analysis .in this section , we describe the discretization of the above system of equations .our general strategy is to first evaluate , then update using eq [ basic2 ] , and finally evolve with [ advect ] .time derivatives in eqs [ basic2 ] , and [ advect ] are discretized as a simple euler step , on a two - dimensional grid , with grid spacing , the velocity and reference map are located at corner points , while stresses are located at cell centers , .thus , away from any boundary , we can compute by finite difference at the mid - point of horizontal grid edges , and similarly , on vertical grid edges , we obtain at cell centers by averaging , allowing us to compute the deformation gradient tensor using eq [ fnew ] , and thus .we now can define stresses at cell centers by specifying the hyperelasticity law .we compute at the mid - point of vertical grid edges , and similarly , on horizontal grid edges , as a result , we obtain at cell corners , where is stored .finally , in equation [ basic2 ] , can be discretized in the same manner as .additionally , since eq [ advect ] is an advection equation , we use a weno discretization for to guarantee stability . in order to solve the system with irregular boundaries ,we introduce a level set function , .we define inside the solid , outside , thus implicitly representing the domain boundary as the zero level set of .choosing to be a signed distance function , i.e. , allows us to compute the cut - cell length , with .for example , if the boundary cuts a horizontal cell edge , i.e. , then here , we must change the discrete derivatives in eq [ eq : dxi_dx_discretization ] . assuming that , where is a boundary condition for .other derivatives near boundaries are treated in the same manner .\(a ) ( b ) in this section we present two large - deformation numerical tests , both in plane - strain for simplicity . to rigorously test the method , each case models a non - trivial , inhomogeneous deformation .first , we solve our system in a circular washer geometry for which the outer wall is fixed and the inner wall is rotated over a large angle . 
under the levinson - burgess hyperelasticity law( see below ) , this environment has an analytical solution , which we use to verify the consistency / correctness of the method .second , we solve the deformation of a disk being stretched into a triangular shape , also utilizing the levinson - burgess law .this has no analytical solution , and demonstrates the method s applicability in cases where the reference and deformed boundary sets differ .we focus here on static solutions , obtained by enforcing the boundary conditions and waiting for transients to pass .artificial viscosity was added to the stress law to expedite collapse to the static solution .the levinson - burgess free - energy function , after application of eq [ elasticity_final ] , induces the following stress law under plane - strain conditions where , are invariants of the tensor , with and . in the unstrained state , and represent the bulk and shear moduli . throughout, we use kpa .[ [ circular - washer - shear ] ] circular washer shear + + + + + + + + + + + + + + + + + + + + + the analytical static displacement field is and where , and are constants that fit the boundary conditions and . the graph in figure [ washer_triangle](a ) shows excellent agreement between our numerical solution ( sampled along the central horizontal cross - section ) and the analytical . we have observed equally high agreement levels when the inner wall rotation angle is varied .[ [ stretched - disk ] ] stretched disk + + + + + + + + + + + + + + the unstressed material shape is a disk that is inscribed perfectly within its final equilateral triangular shape . for on the triangle edge ,the boundary condition for the final deformed body is where is the disk center , the disk radius , and the distance between the disk edge and the triangle as measured along the radial segment containing .hence , each point on the disk edge is moved outward radially to the triangle .the final , static displacement field is shown in figure [ washer_triangle](b ) . in this section ,we display the method s ability to accurately track the motion of a large - strain compression wave .recall from section [ past ] , that dynamic correctness is sacrificed in many stress - rate models by neglecting stress convection terms .the reference map on the other hand , enables high - accuracy representation of elasto - dynamics as shall now be demonstrated . in order to check for the ability of the method to handle dynamic situations, we choose to produce an analytical solution for an elastic compresssion wave . for this purpose , consider a material obeying the following large - strain elasticity law , for the left stretch tensor . the material body is a rectangular slab constrained in the thickness direction ( i.e. plane - strain conditions ) .the unstressed material density is uniform and has a value . 
under this constitutive law ,the following and fields give an exact , analytical solution for a rightward moving compression wave passing through the slab : due to symmetry , the and components of both fields do not change from their initial , unstressed values .the constant is the wave speed .this solution invokes a large - strain deformation with compressive strain as high as at the center of the pulse .consequently , this represents a realistic test of the ability of the present approach to tackle dynamic effects as well as large - strain deformation .the equations are discretized in space as before , but for the time discretization we embed the euler step described above into a standard second order runge - kutta scheme .the stability restriction of this fully explicit scheme is so that , for some small constant . from this approachwe expect second order global convergence .\(a ) ( c ) in order to verify the convergence rate , we set up a two - dimensional doubly - periodic domain \times\left[-5,5\right]$ ] .we use eqs [ eq : xi_exact_solution ] and [ eq : v_exact_solution ] at as the initial condition , with .the travelling wave solution should come back to it original shape and location at . for a sequence of grids with , we set using , and compute the of , as , we report in figure [ dynamic](a ) the expected second order global convergence .the convergence rate between the two finest grids , and , is computed to be .we have also found nearly identical convergence properties for the velocity ; the velocity s convergence rate is found to be between the two finest grids .we conclude the scheme is globally second order accurate for all dynamic variables , confirming its ability to capture dynamic solutions under large - strain elasticity .finally , figure [ dynamic](b ) shows one - dimensional cross sections for at different times .the solid line represents the exact solution computed from eq [ eq : v_exact_solution ] at , which , by periodicity of the domain , also corresponds to the solution at .we see that the exact solution and numerical solution agree well for , a rather coarse grid .we also plot the numerical solution at for illustrative purposes .this work has demonstrated the validity of the reference map for use in reformulating and simulating solid deformation under a completely eulerian framework .there are still several avenues of future investigation .other material models are to be simulated , most notably elasto - plastic laws with and without rate - sensitivity . also , the reference map has potential to simplify the simulation of fluid / solid interactions , due to both phases having a similar eulerian treatment .our preliminary results on this front are promising , and shall be reported in a future paper .also , a method to institute traction boundary conditions within this framework , especially the traction - free condition , would be important future study .this may ultimately be accomplished with a fluid / solid framework , by treating traction - free boundaries as surfaces of interaction with a pressure - free , stationary fluid .k. kamrin would like to acknowledge support from the ndseg and nsf grfp fellowship programs .c . nave would like to acknowledge partial support by the national science foundation under grant dms-0813648 .
we develop a computational method based on an eulerian field called the `` reference map '' , which relates the current location of a material point to its initial . the reference map can be discretized to permit finite - difference simulation of large solid - like deformations , in any dimension , of the type that might otherwise require the finite element method . the generality of the method stems from its ability to easily compute kinematic quantities essential to solid mechanics , that are elusive outside of lagrangian frame . this introductory work focuses on large - strain , hyperelastic materials . after a brief review of hyperelasticity , a discretization of the method is presented and some non - trivial elastic deformations are simulated , both static and dynamic . the method s accuracy is directly verified against known analytical solutions .
in a variety of physical phenomena , the dominant dynamics occur in spherical and cylindrical geometries .examples include astrophysics ( e.g. , supernova collapse ) , nuclear explosions , inertial confinement fusion ( icf ) and cavitation - bubble dynamics . a natural approach to solving these problemsis to write the governing equations in cylindrical / spherical coordinates , which can then be solved numerically using an appropriate discretization .historically , the first such numerical studies were conducted by von neumann and richtmyer in the 1940s for nuclear explosions . to treat the discontinuities in a stable fashion, they explicitly introduced artificial dissipation to the euler equations . while this method correctly captures the position of shocks and satisfies the rankine - hugoniot equations , flow features in the numerical solution , in particular discontinuities ,are smeared due to excessive dissipation .the collapse and explosion of cavitation bubbles , supernovae and icf capsules share similarities in that they are all , under ideal circumstances , spherically symmetric flows that involve material interfaces and shock waves .such flows are rarely ideal , insofar as they are prone to interfacial instabilities due to accelerations ( rayleigh - taylor ) , shocks ( richtmyer - meshkov ) , or geometry ( bell - plesset ) . when solving problems with large three - dimensional perturbations , cylindrical / spherical coordinates may not be advantageous . however , in certain problems such as sonoluminescence , the spherical symmetry assumption is remarkably valid . modeling the bubble motion with spherical symmetry can greatly reduce the computational cost . as an example , akhatov , et al . used a first - order godunov scheme to simulate liquid flow outside of a single bubble whose radius was given by the rayleigh - plesset equation .this approach assumes spherical symmetry but does not solve the equations of motion inside the bubble . high - order accurate methods are becoming mainstream in computational fluid dynamics . however , implementation of such methods in cylindrical / spherical geometries is not trivial .several recent studies in cylindrical and spherical coordinates have focused on the lagrangian form of the equations .the euler equations in cylindrical or spherical geometry were studied by maire using a cell - centered lagrangian scheme , which ensures conservation of momentum and energy .these equations were also considered by omang et al . using smoothed particle hydrodynamics ( sph ) , though sph methods are generally not high - order accurate .on the other hand , solving the equations in eulerian form is not trivial , especially when trying to ensure conservation and high - order accuracy .li attempted to implement eulerian finite difference and finite volume weighted essentially non - oscillatory ( weno ) schemes on cylindrical and spherical grids , but did not achieve satisfactory results in terms of accuracy and conservation . considered flux difference - splitting methods for ducts with area variation . followed a similar formulation of the equations employing a total variation diminishing method to simulate explosions in air .johnsen & colonius used cylindrical coordinates with azimuthal symmetry to simulate the collapse of an initially spherical gas bubble in shock - wave lithotripsy by solving the euler equations inside and outside the bubble using weno . 
de santis showed equivalence between their lagrangian finite element and finite volume schemes in cylindrical coordinates .xing and shu performed extensive studies of hyperbolic systems with source terms , which are relevant as the equations in cylindrical / spherical coordinates can be written with geometrical source terms .although one of their test cases involved radial flow in a nozzle using the quasi one - dimensional nozzle flow equations , they did not consider the general gas dynamics equations .thus , at this time , a systematic study of the euler equations in cylindrical and spherical coordinates , with respect to order of accuracy and conservation , has yet to be conducted . in this paper , we investigate three different spatial discretizations in cylindrical / spherical coordinates with radial dependence only using finite difference weno schemes for the euler equations .in particular , we propose a new approach that is both high - order accurate and conservative . here , we are concerned with the interior scheme ; appropriate boundary approaches will be investigated in a later study .the governing equations are stated in section [ sec : numerical framework ] and the spatial discretizations are presented in section [ sec : numerical method ] . in section [ sec : numerical results ] , we test the different discretizations on smooth problems ( scalar advection equation , acoustics problem for the euler equations ) for convergence , and with shock - dominated problems ( sod shock tube and sedov point blast problems ) for conservation .the last section summarizes the present work and provides a future outlook .the differential for the euler equations in cylindrical / spherical coordinates with radial dependence only : where , , and . is the density , u is velocity in radial direction , is time , is the radial coordinate , is the pressure , is the total energy per unit volume , and is a geometrical parameter , which is 0 , 1 , or 2 for cartesian , cylindrical , or spherical coordinates , respectively .subscripts denote derivatives .diffusion effects are neglected .for an ideal gas , the equation of state to close this system can be written : where is the internal energy per unit volume , and is the specific heats ratio .other equations of state can be used , e.g. , a stiffened equation for liquids and solids .we describe three discretizations of the euler eqs . incylindrical / spherical coordinates that differ based on the treatment of the convective terms .while the discretized form of the euler equations in cartesian coordinates is generally designed to conserve mass , momentum and energy , the conservation condition does not necessarily hold in cylindrical or spherical coordinates , depending on the numerical treatment of the equations . the criterion we use to determine discrete conservation is as follows : where is a domain ( possibly a computational cell ) .here , represents a conserved variable , e.g. 
, density , momentum per unit volume or energy per unit volume .this equation means that the total mass , momentum , and energy are constant in time provided there is no flux of these quantities through the boundaries of the domain , which is the case for the problems of interest here .a different approach to defining conservation for hyperbolic laws is the exact c - type property , which implies that the system admits a stationary solution in which nonzero flux gradients are exactly balanced by the source terms in the steady - state case .xing and shu applied weno in systems of conservation laws with source terms and considered radial flow in a nozzle using the quasi one - dimensional nozzle flow equations . in our work , we focus on the euler equations .we consider finite difference(fd ) weno schemes .we give brief description of a fifth - order finite difference weno scheme is given in appendix . for finite difference weno ,given the cell - centered values , the fluxes are first split and then interpolated to compute the numerical flux . in order to give a clear image of implementation , we write the solution procedure right after describing the spatial discretization for each method .the first spatial discretization , labelled _method one _ here , can be found in chapter 1.6 of toro .expand the convective term and move the part without spatial derivative to right hand of eq .[ eq : compressible_euler ] to obtain the differential equations the notation is consistent with the notation in eq .[ eq : compressible_euler ] . for the convenience of programming , we also write the mass , momentum and energy in semi - discrete form : [ eq : compressible_euler_fd1 ] {i+1/2}- [ ( e+p ) u]_{i-1/2}}{\delta r_i } - \frac{\alpha}{r_{i } } \left [ u ( e+p ) \right]_{i } , \end{aligned}\ ] ] where is the linear radial cell width . for fd weno ,the variables to be evolved in time are the cell - centered values , i.e. , the values at in cell ] , and local speed of wind , .the plus sign means the flux moves toward right , the minus sign means the flux moves toward left .the convention is used in all the flux calculation part in this paper .+ 3 . using the local characteristic decomposition and finite difference weno to approximate the flux , obtain , , and {i+1/2}^{\pm} ] , and local speed of wind , .+ 3 . using the local characteristic decomposition and finite difference weno to approximate the flux ,obtain , , and {i\pm1/2}^{\pm} ] .our solution is to approximate using the same nonlinear weights .the source terms are updated in each sub - step .march in time .+ the third spatial discretization , method three , is inspired by the solution to acoustics problems in spherical coordinates .this approach is also used by toro and zhang .multiplying eqs . by , for the convenience of programming , the mass , momentum , and energy equationsare written in semi - discrete form : [ eq : compressible_euler_mass_fd3 ] {i+1/2}-[r^{\alpha } ( \rho u^{2}+p)]_{i-1/2}}{\delta r_i}+\alpha(pr^{\alpha -1})_{i } , \\\frac{\mathrm{d}(r^{\alpha } e)_{i } } { \mathrm{d } t}= & -\frac{[r^{\alpha } ( e+p)u]_{i+1/2}-[r^{\alpha } ( e+p)u]_{i-1/2}}{\delta r_i}. \\ \notag \end{aligned}\ ] ] for fd weno , the cell - centered values are considered .this approach strictly follows the integral form of the euler equations in cylindrical or spherical coordinates and satisfies the c - type property for hyperbolic equations with source terms .thus , it is expected to be both conservative and high - order accurate . 
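To make the building blocks of these solution procedures concrete, the following sketch (illustrative only; the specific-heat ratio 1.4 and the stacking of the state vector are assumptions) evaluates the radial Euler flux, the ideal-gas closure, the geometric source term that method one moves to the right-hand side, and the local Lax-Friedrichs flux splitting used before the WENO interpolation.

```python
import numpy as np

gamma = 1.4                                    # assumed specific-heat ratio

def pressure(U):
    rho, mom, E = U
    return (gamma - 1.0) * (E - 0.5 * mom**2 / rho)   # ideal-gas EOS

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = pressure(U)
    return np.array([mom, mom * u + p, (E + p) * u])

def geometric_source(U, r, alpha):
    """-alpha/r * (rho u, rho u^2, (E+p) u): the terms without spatial
    derivative in method one (alpha = 0, 1, 2)."""
    rho, mom, E = U
    u = mom / rho
    p = pressure(U)
    return -(alpha / r) * np.array([mom, mom * u, (E + p) * u])

def llf_split(U):
    """Local Lax-Friedrichs splitting F = F^+ + F^-, with the local wave
    speed |u| + c evaluated pointwise (cf. the appendix)."""
    rho, mom, E = U
    c = np.sqrt(gamma * pressure(U) / rho)
    a = np.abs(mom / rho) + c
    F = flux(U)
    return 0.5 * (F + a * U), 0.5 * (F - a * U)
```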
for the advection equation ., title="fig:",scaledwidth=50.0% ] for the advection equation ., title="fig:",scaledwidth=48.0% ] this part summarizes the solution procedure for method two used in our code : + 1 .initialize the primitive variables , , , , and , then calculate , , , and for each cell .+ 2 . using local lax - fredrich to split the flux ,obtain , ^{\pm} ] , and local speed of wind , .+ 3 . using the local characteristic decomposition and finite difference weno to approximate the flux ,obtain , {i\pm 1/2}^{\pm} ] .calculate the residual .the source terms in method three are collocated with primitive variable , can be directly added to the residual .the source terms are updated in each sub - step .march in time .in this section , we apply the three discretizations introduced in the previous section to four test cases using fifth - order weno in characteristic space with local lax - friedrichs upwinding , and fourth - order explicit runge - kutta with a courant number of for time marching .first , we use two smooth problems ( scalar advection and acoustics for the euler equations ) to demonstrate the convergence rates of each method for the interior solution , with no regards for boundary schemes .next , we test conservation with two shock - dominated problems ( sod shock tube and sedov point blast problems ) . before considering nonlinear systems ,the scalar advection equation is investigated . the advection equation in cylindrical and spherical coordinates with symmetryis written : where is a scalar field , is the ( constant and known ) wave speed . here , .the initial conditions are for this problem , the exact solution at time is the initial conditions and exact solution at are shown in fig .[ fig : initial condition for advection equation ] .nearly identical set - ups are used for the cylindrical and spherical cases , the only difference being the geometrical parameter : for the cylindrical case , and for the spherical .the goal is to determine the convergence rates of each method independently of boundary schemes .the problem set - up is specifically chosen to prevent any boundary effects . here, we show convergence results only for cylindrical coordinates , as the convergence rate is similar for the spherical case .grids with are considered with constant , and the exact solution is used to evaluate the error of each solution .[ fig : advection_second_norm ] shows the error norm to verify the order of accuracy .methods one and three both achieve close to fifth - order accuracy , while method two is only second - order accurate , as expected from the discussion in the previous section .error norm for all three discretizations for the scalar advection problem.,scaledwidth=80.0% ] error norm in density for each discretization for the acoustics problem ., scaledwidth=80.0% ] a smooth problem is used to verify convergence with the euler equations . the acoustics problem from johnsen &colonius is adapted to spherical coordinates , with initial conditions : with perturbation for a sufficiently small ( here ) , the solution remains very smooth . in this problem ,the initial perturbation splits into two acoustic waves traveling in opposite directions . 
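All of the computations below rely on the same fifth-order WENO interpolation of the split fluxes. For reference, a sketch of the reconstruction for a positive wind direction follows; it uses the standard Jiang-Shu linear weights (1/10, 6/10, 3/10) and smoothness indicators, which we assume coincide with the formulas summarized in the appendix.

```python
import numpy as np

def weno5_flux_plus(f):
    """f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}); returns the numerical
    flux at i+1/2 for a positive wind direction."""
    fm2, fm1, f0, fp1, fp2 = f
    eps = 1.0e-6
    # three third-order candidate fluxes
    q0 = (2 * fm2 - 7 * fm1 + 11 * f0) / 6.0
    q1 = (-fm1 + 5 * f0 + 2 * fp1) / 6.0
    q2 = (2 * f0 + 5 * fp1 - fp2) / 6.0
    # smoothness indicators
    b0 = 13 / 12 * (fm2 - 2 * fm1 + f0) ** 2 + 0.25 * (fm2 - 4 * fm1 + 3 * f0) ** 2
    b1 = 13 / 12 * (fm1 - 2 * f0 + fp1) ** 2 + 0.25 * (fm1 - fp1) ** 2
    b2 = 13 / 12 * (f0 - 2 * fp1 + fp2) ** 2 + 0.25 * (3 * f0 - 4 * fp1 + fp2) ** 2
    # nonlinear weights built from the linear weights 1/10, 6/10, 3/10
    a0, a1, a2 = 0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2
    s = a0 + a1 + a2
    return (a0 * q0 + a1 * q1 + a2 * q2) / s
```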
to prevent the singularity at the origin and boundary effects , the final timeis set such that the wave has not yet reached the origin .again , grids with 21 , 41 , 81 , 161 , 321 and 641 are used with constant .although an exact solution to order is known , the solution on the finest grid is used as the reference to evaluate the error .[ fig : euler_second_norm ] shows the error in density for this problem .the results show that methods one and three remain high - order and in fact fall on top of each other , although the rate now is closer to fourth order . again , for method two , the rate is second order , as expected .we consider the sod shock tube problem in cylindrical coordinate with azimuthal symmetry .the initial conditions are : the domain size is 1 and 100 equally spaced grid points are used .the location of the `` diaphragm '' separating the left and right states is .since no wave reaches the boundaries over the duration of the simulation ( final time : ) , the boundary scheme is irrelevant . for the sod problem with 100 points.,scaledwidth=100.0% ][ fig : comparing sod ] shows density , velocity , pressure and internal energy profiles for this problem at the final time . method three on a gird of 800 points is used as a reference solution . on this grid ,all three methods produce similar profiles . however , the residuals of the total mass and energy , yield different results , as observed in fig .[ fig : residual of mass sod ] . for fd weno ,a high - order gauss quadrature is employed to integrate the total mass and total energy from the cell - centered values . as shown in fig .[ fig : residual of mass sod ] , while methods two and three are conservative to round - off level , method one is not discretely conservative , as expected . in fig .[ fig : comparing sod ] , differences in shock position due to lack of conservation are not clear .the total momentum is zero based on the azimuthally symmetric setup .finally , we consider the sedov point - blast problem in spherical coordinates .following the set - up of fryxell et al . the initial conditions are except for a few computational cells around the origin , whose pressure is here , is the dimensionless energy per unit volume . is the specific heats ratioand and a geometrical parameter , which is consistent with the value in section [ sec : numerical framework ] .the domain size is 1 , and with uniform spacing .we choose a constant to be three times as large as the cell size for . reflecting boundary conditions are used along the centerline ; since the shock does not leave the domain , the outflow boundary scheme is irrelevant . due to the reflecting boundary condition at the center , the high pressure regionis made up of 6 cells , i.e. , 3 ghost cells and 3 cells in the interior .the solution is plotted at . for the sedov problem with 100 points ,fd weno.,scaledwidth=100.0% ] density , velocity , and pressure profiles for fd weno and the analytical solution are shown in fig .[ fig : fd_comparing sedov ] . because the total energy residual shows results qualitatively similar to the total mass residual ,only the latter is shown . 
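The conservation statements above are monitored by integrating the conserved fields over the domain and recording the drift of the totals in time. A minimal sketch is given below; the trapezoidal quadrature is a simplification of the high-order Gauss quadrature applied to the cell-centered FD WENO values.

```python
import numpy as np

def total_mass(rho, r, alpha):
    """Approximate the integral of rho * r^alpha over the radial domain
    (constant angular factors are irrelevant for the residual)."""
    return np.trapz(rho * r**alpha, r)

def mass_residual(rho_t, rho_0, r, alpha):
    """Drift of the total mass relative to the initial state."""
    return abs(total_mass(rho_t, r, alpha) - total_mass(rho_0, r, alpha))
```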
the density profile and mass residual for different grid sizeare plotted in fig .[ fig : residual of mass sedov ] .the difference in shock location is clear for this problem .method one is non - conservative and thus produces an incorrect shock speed and thus location ; it appears to converge to the correct location with grid refinement .this result is confirmed by considering the mass residual .method two and three can capture the right shock position on coarse gird , whereas they need much finer grid to capture the peak of the shock .for this problem , method three proved to be unstable at the present courant number due to the stiff source term , so a smaller value ( 0.1 ) is used for this problem .we analyzed three different spatial discretizations in cylindrical / spherical coordinates with radial dependence only for the euler equations using finite difference weno .in particular , high - order accuracy and conservation were evaluated . only our newly proposed method three achieved high - order accuracy and was conservative .the other methods are either conservative or high - order accurate , but not both .current work is underway to extend the analysis and implementations to discontinuous finite element methods and to incorporate diffusive effects .high - order reflecting boundary conditions will be investigated subsequently , which are not trivial for finite difference / volume schemes .this approach will form the basis for simulations of cavitation - bubble dynamics and collapse in the context of cavitation erosion .the classical fourth order explicit runge - kutta method for solving where is l(u , t ) is a spatial discretization operator [ eq : time discretization ] \\ \notag \end{aligned}\ ] ] all the numerical examples presented in this paper are obtained with this runge - kutta time discretization .the flux splitting method used in this paper is the local lax - friedrich splitting scheme . where is calculated by .contrary to global lax - frederich , in which the wind speed is the maximum maximum absolute eigenvalue , the local lax - frederich use the maximum absolute eigenvalue at each point as the wind speed . a fifth order accurate finite difference weno scheme is applied in this paper . for more details, we refer to .+ for a scalar hyperbolic equation in 1d cartesian coordinate we first consider a positive wind direction , . for simplicity ,we assume uniform mesh size .a finite difference spatial discretization to approximate the derivative by the numerical flux is computed through the neighboring point values . for a 5th order weno scheme , compute 3 numerical fluxes .the three third order accurate numerical fluxes are given by the 5th order weno flux is a convex combination of all these 3 numerical fluxes and the nonlinear weights are given by with the linear weights given by and the smoothness indicators given by where is a parameter to avoid the denominator to become 0 and is usually takes as in the computation .the procedure for the case with is mirror symmetric with respect to .this work was supported in part by onr grant n00014 - 12 - 1 - 0751 under dr .ki - han kim .de santis , d. , geraci , g. , and guardone , a. , 2014 .equivalence conditions between linear lagrangian fniite element and node - centred finite volume schemes for conservation laws in cylindrical coordinates , int .fluids 74 , 514542 .fryxell , b. , olson , k. , ricker , p. , timmes , f.x . ,zingale , m. , lamb , d.q . ,macneice , p. , rosner , r. , truran , j.w . , tufo , h. 
, 2000 . flash : an adaptive mesh hydrodynamics code for modeling astrophysical thermonuclear flashes , astrophys . j. suppl . ser . 131 , 273 - 334 .
wang , z.j . , fidkowski , k. , abgrall , r. , bassi , f. , caraeni , d. , cary , a. , deconinck , h. , hartmann , r. , hillewaert , k. , huynh , h.t . , kroll , n. , may , g. , persson , p.o . , van leer , b. , and visbal , m. , 2013 . high - order cfd methods : current status and perspective , int . j. numer . meth . fluids 72 , 811 - 845 .

we consider implementations of high - order finite difference weighted essentially non - oscillatory ( weno ) schemes for the euler equations in cylindrical and spherical coordinate systems with radial dependence only . the main concern of this work lies in ensuring both high - order accuracy and conservation . three different spatial discretizations are assessed : one that is shown to be high - order accurate but not conservative , one conservative but not high - order accurate , and a new approach that is both high - order accurate and conservative . for cylindrical and spherical coordinates , we present convergence results for the advection equation and the euler equations with an acoustics problem ; we then use the sod shock tube and the sedov point - blast problems in cylindrical coordinates to verify our analysis and implementations .
in this paper we apply a mixture of analytical and topological methods to establish that a recently derived solution defining equatorially trapped waves is dynamically possible .this remarkable solution , derived by constantin in and given below by equation , is an exact solution of the nonlinear governing equations for equatorial water waves , and it is explicit in the lagrangian framework .the main result of this paper establishes that the three - dimensional mapping from the lagrangian labelling domain to the fluid domain defines a global diffeomorphism a consequence of which is that the solution defines a fluid motion which is dynamically possible .we achieve this result by first establishing that is locally diffeomorphic and injective , and then we render our results global by applying a suitable version of the classical degree - theoretic _ invariance of domain _ theorem , cf . .the solution presented by constantin in represents a geophysical generalization of the celebrated gerstner s wave , in the sense that ignoring coriolis terms in recovers gerstner s wave solution .the primary importance of gerstner s wave is probably the fact that it represents the only known explicit and exact solution of the nonlinear periodic gravity wave problem with a non - flat free - surface .gerstner s wave is a two - dimensional wave propagating over a fluid domain of infinite depth ( cf . ) , and interestingly it may be modified to describe edge - waves propagating over a sloping bed .the geophysical solution presented in encompasses gerstner s solution , yet it also possesses a number of inherent characteristics which transcends gerstner s wave .the solution is a truly three - dimensional eastward - propagating geophysical wave , and furthermore it is equatorially - trapped achieving its greatest amplitude at the equator and exhibiting a strong exponential decay in meridional directions away from the equator .the solution is furthermore nonlinear , as is seen from the wave - surface profile , and has a dispersion relation that is dependant on the coriolis parameter .since the solution is explicit in the lagrangian formulation , we may immediately discern some qualitative properties of the physical fluid motion .indeed , an advantage of solutions in the lagrangian framework is that the fluid kinematics may be explicitly described . fromwe see that at each fixed latitude the solution prescribes individual fluid particles to move clockwise in a vertical plane .each particle moves in a circle , with the diameter of the circles decreasing exponentially with depth .in it was simply shown that the solution is compatible with the governing equations of the approximation for equatorial water waves - .the aim of this paper is to rigorously justify that the fluid motion defined by is dynamically possible .this is achieved by establishing that the solution defines a global diffeomorphism , thereby ensuring that it is indeed possible to have a three - dimensional motion of the whole fluid body where all the particles describe circles with a depth- dependant radius at fixed latitudes , and furthermore the particles never collide but instead they fill out the entire infinite region below the surface wave . 
in sodoing we show that the fluid domain as a whole evolves in a manner which is consistent with the full governing equations .we note that subsequent to the derivation of constantin s solution , a wide range of geophysical generalizations and variations to have been produced and analysed , for example .it is expected that the rigorous considerations of this paper are also applicable to these variants .we consider geophysical waves in the equatorial region , where we assume that the earth is a perfect sphere of radius km .we are in a rotating framework , where the -axis is facing horizontal due east ( zonal direction ) , the -axis is due north ( meridional direction ) , and the -axis is pointing vertically upwards .the governing equations for geophysical ocean waves are given by euler s equation with additional terms involving the coriolis parameter which is proportional to the rotation speed of the earth , see the mass conservation equation and the equation of incompressibility here represents the latitude , is the fluid velocity , rad is the ( constant ) rotational speed of earth ( which is the sum of the rotation of the earth about its axis and the rotation around the sun , see ) , m / s is the gravitational constant , is the water density , and is the pressure .we are interested in equatorial waves , that is , geophysical ocean waves in a region which is within latitude of the equator .since the latitude is small , we may use the approximations , and , and thus linearising the coriolis force leads to the -plane approximation to equations given by where m .the relevant boundary conditions are the kinematic boundary conditions where is the ( constant ) atmospheric pressure , and is the free surface .the boundary condition states that all the particles in the surface will stay in the surface for all time , and the boundary condition decouples the water flow from the motion of the air above .we work with an infinitely - deep fluid domain and so we require the velocity field to converge rapidly to zero with depth , that is the governing equations for the approximation of geophysical ocean waves are given by - . in this sectionwe present and describe briefly the exact solution of the -plane governing equations - which was recently derived by constantin .this solution describes a three- dimensional eastward - propagating geophysical wave which is equatorially trapped , exhibiting a strong exponential decay in meridional directions away from the equator , and which is periodic in the zonal direction .equatorially trapped waves propagating eastward and symmetric about the equator are known to exist , and they are regarded as one of the key factors in a possible explanation of the el nio phenomenon ( cf .the formulation of the solution employs a lagrangian viewpoint , describing the evolution in time of an individual fluid particle .the lagrangian positions of the fluid are given in terms of the labelling variables , and time by }\sin \left[k(q - ct ) \right],\\ y&\displaystyle = s,\\ z&\displaystyle = r+{1\over k}e^{k\left[r - f(s)\right]}\cos \left[k(q - ct ) \right ] , \end{array } \right.\ ] ] where is the wave number , defined by where is the wavelength , and the wave phase speed is determined by the dispersion relation and also determines the decay of fluid particle oscillations in the meridional direction .the labelling variables take the values , where is fixed . 
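For illustration, the circular particle paths encoded by the Lagrangian map can be traced directly. In the sketch below the wavenumber, the phase speed, and the meridional decay function f are placeholders chosen for plotting; they are not tied to the dispersion relation of the paper.

```python
import numpy as np

def particle_path(q, r, s, f, k, c, times):
    """Position (x(t), z(t)) of the particle labelled (q, s, r): a clockwise
    circle of radius exp(k*(r - f(s)))/k centred at (q, r)."""
    amp = np.exp(k * (r - f(s))) / k
    phase = k * (q - c * times)
    return q - amp * np.sin(phase), r + amp * np.cos(phase)

# example: one particle at the equator, assuming f vanishes there
k, c = 2 * np.pi / 100.0, 5.0                    # assumed wavenumber and speed
t = np.linspace(0.0, 2 * np.pi / (k * c), 200)   # one wave period
x, z = particle_path(q=0.0, r=-1.0, s=0.0, f=lambda s: 0.0, k=k, c=c, times=t)
```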
for every fixed ,the system describes the flow beneath a surface wave propagating eastwards ( in the -direction ) at constant speed determined by . at fixed latitudes ( that is , for fixed ) the free surface obtained by setting in the third equation in , where is the unique solution to }\over 2k}-r_0(s ) = { e^{2kr_0}\over 2k}-r_0.\ ] ] a plot of the free - surface for the wave solution is given in figure [ gerstner ] below .[ htp ] in the author focuses on proving , by explicit computation , that the exact solution is compatible with the governing equations - .our aim in this work is to prove that it is dynamically possible to have a global motion of the fluid domain where , at fixed latitudes , the particles move in circular paths with depth - dependant radius .indeed , we prove in our main result proposition [ prop ] that the fluid motion defined by is dynamically possible , that is , at any instant , the label map is a global diffeomorphism from the labelling variables , , to the fluid domain beneath the free surface given by , s , r_0(s)+{1\over k}e^{r_0(s)-f(s)}\cos\left[k(q - ct)\right]\right).\ ] ] for a fixed latitude , the surface wave profile is a reverse trochoid if and a reverse cycloid with a cusp at the wave crest if and . fixed , and given and ,the curve given parametrically by is a trochoid if and a cycloid if .it represents the curve traced by a fixed point at a distance from the center of a circle of radius rolling along a straight line without slipping , ( see figure [ cycloid_trochoid ] ) .[ htp ] therefore , for a fixed latitude , the free surface of the fluid has the equation which represents a reverse trochoid propagating to the right with velocity . since is periodic with minimal period then the surface is a periodic wave with period , which concurs with the definition of the wave number .to prove that the motion is dynamically possible , it is sufficient to analyse for the time , when it takes the form }\over k } \sin \left(kq\right),\\ y&=s\\ z&\displaystyle = r+{e^{k\left[r - f(s)\right]}\over k}\cos \left(kq\right ) .\end{array } \right.\ ] ] the case of a general time in is recovered making first the change of variables , performing , and finally shifting the horizontal variable by .therefore we can focus on , and we further note that as varies by , the value reoccurs and is shifted linearly by .hence , it suffices to analyse on the domain , r \le r_0 \mbox { and } s\in{{\mathbb r}}\right\}.\ ] ] in the following result we first prove that the map is an injective local diffeomorphism . [ inject ] for every fixed , if then the map is a local diffeomorphism from , r \le r_0 \mbox { and } s\in{{\mathbb r}}\right\} ] into its image . to prove that the local diffeomorphism is in fact a global diffeomorphism we just have to prove that it is a homeomorphism . 
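The non-degeneracy underlying this statement can also be probed numerically: at time zero the Jacobian determinant of the map (q, s, r) -> (x, y, z) should stay positive below the free surface. The sketch below checks this by finite differences; the decay function f and all parameter values are illustrative assumptions.

```python
import numpy as np

k = 2 * np.pi / 100.0                 # assumed wavenumber
f = lambda s: 0.005 * s**2            # placeholder meridional decay function

def label_map(q, s, r):
    amp = np.exp(k * (r - f(s))) / k
    return np.array([q - amp * np.sin(k * q), s, r + amp * np.cos(k * q)])

def jacobian_det(q, s, r, h=1e-6):
    cols = []
    for dq, ds, dr in np.eye(3) * h:
        cols.append((label_map(q + dq, s + ds, r + dr)
                     - label_map(q - dq, s - ds, r - dr)) / (2 * h))
    return np.linalg.det(np.column_stack(cols))

print(jacobian_det(q=10.0, s=2.0, r=-1.0))   # positive away from singularities
```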
indeed , since the hypotheses in the invariance of domain theorem [ invariance_domain ] are satisfied , then the map is a homeomorphism .although it is guaranteed by the invariance domain theorem [ invariance_domain ] , we can see directly that the map sends into the boundaries of the image of .the vertical semiplanes and are transformed by in the vertical surfaces and respectively , and the horizontal semiplane becomes part of the reverse trochoid if , which is smooth , and it becomes part of the reverse cycloid if and , which is piecewise smooth with upward cusps .we have proved that is a global diffeomorphism map from into its image if , with singularities occurring when and .since the full system can be recovered from by making the change of variables , and finally shifting the horizontal variable by , it follows that is a global diffeomorphism from into the fluid domain below the free surface .
the aim of this paper is to prove that a three dimensional lagrangian flow which defines equatorially trapped water waves is dynamically possible . this is achieved by applying a mixture of analytical and topological methods to prove that the nonlinear exact solution to the geophysical governing equations , derived by constantin in , is a global diffeomorphism from the lagrangian labelling variables to the fluid domain beneath the free surface .
the transport of fluid mixtures in porous media has many important industrial applications like oil and gas extraction , dispersion of contaminants in underground water reservoirs , nuclear waste storage , and carbon sequestration .although there are many papers on the modeling and numerical solution of such compositional models , there are no results on their mathematical analysis . in this paper , we provide an existence analysis for a single - phase compositional model with van der waals pressure in an isothermal setting . from a mathematical viewpoint ,the model consists of strongly coupled degenerate parabolic equations for the mass densities .the cross - diffusion coupling and the hypocoercive diffusion operator constitute the main difficulty of the analysis .our analysis is a continuation of the program of the first and third author to develop a theory for cross - diffusion equations possessing an entropy ( here : free energy ) structure .the mathematical novelties are the complex structure of the equations and the observation that the solution of the binary model , for specific diffusion matrices , satisfies an unexpected integral inequality giving rise to a minimum principle , which generally does not hold for strongly coupled diffusion systems .more specifically , we consider an isothermal fluid mixture of mass densities in a domain ( ) , whose evolution is governed by the transport equations where .the van der waals pressure and the chemical potentials are given by these expressions are well defined if a.e . , where here , is the total mass density and is a ( small ) parameter .the parameter measures the attraction between the and species , and is a measure of the size of the molecules .the diffusion matrix is assumed to be symmetric and positive semidefinite .moreover , we suppose that the following bound holds : for some , , where is the projection on the subspace of orthogonal to . a property like is known in the literature as _ hypocoercivity _ , that is , coercivity on a subspace of the considered vector space . in our case , the matrix in is coercive on the orthogonal complement of the subspace generated by .bound is justified in the derivation of model - , as the diffusion fluxes must sum up to zero ( see section [ sec.model ] ) .equation is the van der waals equation of state for mixtures , taking into account the finite size of the molecules .equations - are derived from the helmholtz free energy of the mixture ; see below . for details of the modeling and the underlying assumptions, we refer to section [ sec.model ] .we impose the boundary and initial conditions note that we choose equilibrium boundary conditions. a physically more realistic choice would be to assume that the reservoir boundary is impermeable , leading to no - flux boundary conditions .however , conditions are needed to obtain sobolev estimates , together with the energy inequality below .numerical examples for homogeneous neumann boundary conditions for the pressure in case are presented in section [ sec.num ] .up to our knowledge , there are no analytical results for system - and . 
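The hypocoercivity bound can be rephrased as positive semidefiniteness of B - lambda P, where P is the orthogonal projection onto the complement of the vector (1,...,1). A small numerical check of this condition is sketched below; the example matrix and the value of lambda are assumptions for illustration only.

```python
import numpy as np

def satisfies_hypocoercivity(B, lam, tol=1e-12):
    """Check z^T B z >= lam * |P z|^2 for all z, i.e. that B - lam*P is
    positive semidefinite, with P the projection onto span{(1,...,1)}^perp."""
    n = B.shape[0]
    ones = np.ones((n, 1)) / np.sqrt(n)
    P = np.eye(n) - ones @ ones.T
    return np.linalg.eigvalsh(B - lam * P).min() >= -tol

# assumed example of a symmetric positive semidefinite diffusion matrix
B = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(satisfies_hypocoercivity(B, lam=0.5))
```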
in the literature , euler andnavier - stokes models were considered with van der waals pressure .for instance , the existence of global classical solutions to the corresponding euler equations with small initial data was shown in .the existence of traveling waves in one - dimensional navier - stokes with capillarity was studied in .furthermore , in the existence and stability of shock fronts in the vanishing viscosity limit for navier - stokes equations with van der waals type equations of state was established .a straightforward computation shows that the gibbs - duhem relation holds .therefore , can be written as which is a cross - diffusion system in the so - called entropy variables .the matrix is of rank one with two eigenvalues , a positive one and the other one equal to zero ( with algebraic multiplicity ) .thus , if , system is not parabolic in the sense of petrovski , and an existence theory for such diffusion systems is highly nontrivial , which is the _ first difficulty_. the property on the eigenvalues is reflected in the energy estimate . indeed , a formal computation , made rigorous below , shows that in case we obtain only one gradient estimate for which is not sufficient for the analysis .there exist some results for so - called strongly degenerate parabolic equations ( for which the diffusion matrix vanishes in some subset of positive -dimensional measure ) .however , the techniques can not be applied to the present problem .therefore , we need to assume that . then the gradient estimates for and together with the boundary conditions yield uniform bounds , which are the basis of the existence proof .the behavior of the solutions for are studied numerically in section [ sec.num ] .the _ second difficulty _ is the invertibility of the relation between and , i.e. to define for given the mass density vector , where and is defined by . a key ingredient for the proof is the positive definiteness of the hessian of the free energy since .this is only possible under a smallness condition on the eigenvalues of ; see lemma [ lem.hess ] .this condition is not surprising since it just means that phase separation is prohibited .the analysis of multiphase flows requires completely different mathematical techniques ; see , e.g. , for phase transitions in euler equations with van der waals pressure .the _ third difficulty _ is the proof of a.e .this property is needed to define and through - , but generally a maximum principle can not be applied to the strongly coupled system .the idea is to employ the boundedness - by - entropy method as in , i.e. to work with the entropy variables .we show first the existence of weak solutions to a regularized version of , define and perform the de - regularization limit to obtain the existence of a weak solution to . since a.e . by definition of , turns out to be bounded .this idea avoids the maximum principle and is the core of the boundedness - by - entropy method .let us now detail our main results . using the boundedness - by - entropy method and the energy inequality , we are able to prove the global existence of bounded weak solutions .we set and .[ thm.ex1 ] let , , be lebesgue measurable and let such that , where , , is defined by .furthermore , let the matrices and be symmetric and satisfy as well as respectively , where is the maximal eigenvalue of .then : 1 . 
there exists a weak solution to - satisfying the free energy inequality and 2 .there exists a constant , depending on and such that the idea of the large - time asymptotics of is to exploit the energy inequality .since it is difficult to relate the free energy and its energy dissipation , we can not prove an exponential decay rate although numerical experiments in and section [ sec.num ] indicate that this is the case even when . instead, we show for the relative energy that , for some constant and some nonnegative function , from which we deduce that the convergence is of order as .since the free energy is strictly convex , by lemma [ lem.hess ] below , we obtain convergence in the norm . if , we obtain only a gradient estimate for .this lack of parabolicity is compensated by the following surprising integral identity , for arbitrary functions ; see the appendix for a formal proof .this means that there exists a family of conserved quantities depending on a function of variables .it is unclear whether this identity is sufficient to perform the limit and to prove the existence of a solution to with .if , the integral identity does not hold in general .however , for specific diffusion matrices , the following inequality holds in place of : for functions specified in theorem [ coro ] below .interestingly , this implies a minimum principle for . a choice of the diffusion matrix ensuring the validity of is , for given , with , in , where is the hessian of the free energy . clearly , is bounded and positive definite ( although not strictly ) for .in particular , the constraint does not hold , and so the assumptions of theorem [ thm.ex1 ] are not satisfied . however , with this choice of , equation becomes and the existence proof for is simpler than in the case where satisfies .[ coro.ex ] let , , be lebesgue measurable and let , where , , is defined by .furthermore , let the matrices and be symmetric and satisfy , .then there exists a weak solution to - , , satisfying the free energy inequality and , for , our second main result reads as follows .[ coro ] let for on . under the assumptions of corollary [ coro.ex ] , the solution to - , constructed in corollary [ coro.ex ] satisfies for all functions ^{n-1}) ] and moreover , for any , in the degenerate situation , we are able to show an exponential decay rate for the pressure , at least for sufficiently smooth solutions whose existence is assumed .the key idea of the proof is to analyze the parabolic equation satisfied by , because of the quadratic gradient term , we need a smallness assumption on at time .thus , the exponential convergence result holds sufficiently close to equilibrium .[ thm3 ] let , , and let be a solution to - with isobaric boundary conditions on , , for some constant .let .we assume that and for any .then there exists , which depends on and , such that if , then , for some , the paper is organized as follows. details on the modeling of the fluid mixture are presented in section [ sec.model ] .auxiliary results on the hessian of the free energy , the relation between and , and the diffusion matrix are shown in section [ sec.aux ] . 
in section [ sec.ex1 ] , we prove theorem [ thm.ex1 ] and corollary [ coro.ex ] , while the proofs of theorems [ coro ] and [ thm3 ] are presented in section [ sec.coro ] and [ sec.thm3 ] , respectively .the evolution of the one - dimensional mass densities and the pressure are illustrated numerically in section [ sec.num ] for the case .finally , identity is verified in the appendix .we consider the isothermal flow of chemical components in a porous domain with porosity .the transport of the partial mass densities is governed by the balance equations for the mass , where is the partial velocity of the species . in order to derive equations for the mass densities only , we impose some simplifying assumptions . to shorten the presentation , we set all physical constants equal to one .moreover , we set to simplify the mathematical analysis .our results will be also valid for ( smooth ) space - dependent porosities .introducing the diffusion fluxes by , where is the barycentric velocity and denotes the total mass density , the balance equations become we suppose that the barycentric velocity is given by darcy s law , where is the fluid pressure .we refer to for a justification of this law .the second assumption is that the diffusion fluxes are driven by the gradients of the chemical potentials , i.e. for ; see , e.g. , ( * ? ? ?* section 4.3 ) . here , is some number and are diffusion coefficients depending on . according to onsager s principle of thermodynamics, the diffusion matrix has to be symmetric and positive semidefinite ; moreover , for consistency with the definition , it must hold that for .the equations are closed by specifying the helmholtz free energy density where and are positive numbers , and is symmetric .the first term in the free energy is the internal energy and the remaining two terms are the energy contributions of the van der waals gas ( * ? ? ?* formula ( 4.3 ) ) .the third assumption is that the fluid is in a single state , i.e. , no phase - splitting occurs .mathematically , this means that the free energy must be convex .this is the case if the maximal eigenvalue of is sufficiently small ; see lemma [ lem.hess ] .the single - state assumption is restrictive from a physical viewpoint .it may be overcome by considering the transport equations for each phase separately and imposing suitable boundary conditions at the interface ( * ? ? ?* section 1 ) .however , this leads to free - boundary cross - diffusion problems which we are not able to treat mathematically . another approach would be to consider a two - phase ( or even multi - phase ) compositional model with overlapping of different phases , like in .in such a situation , a new formulation of the thermodynamic equilibrium based upon the minimization of the helmholtz free energy is employed to describe the splitting of components among different phases .the chemical potentials are defined in terms of the free energy by and the pressure is determined by the gibbs - duhem equation ( * ? ? ?* formula ( 64 ) ) this describes the van der waals equation of state for mixtures , where the parameter is a measure of the attractive force between the molecules of the and species , and the parameter is a measure of the size of the molecules .the pressure stays finite if , which means that the mass densities are bounded . in the literature , many modifications of the attractive term have been proposed .examples are the so - called peng - robinson and soave - redlich - kwong equations ; see . 
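The single-state assumption amounts to convexity of the free energy, and this is easy to test numerically for a candidate state. In the rough sketch below the free-energy expression is only a generic placeholder with the same qualitative structure (mixing entropy, volume exclusion with parameter b, quadratic attraction with matrix A); it is not the exact formula of the paper, and the parameter values are assumed.

```python
import numpy as np

b = 0.1                                          # assumed covolume parameter
A = np.array([[0.02, 0.01], [0.01, 0.02]])       # assumed attraction matrix

def free_energy(c):
    """Placeholder free energy with entropy, volume-exclusion and
    attraction contributions (qualitative stand-in only)."""
    rho = c.sum()
    return np.sum(c * np.log(c)) - rho * np.log(1.0 - b * rho) - c @ A @ c

def hessian(f, c, h=1e-4):
    """Second-order central-difference Hessian of f at c."""
    n = c.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(c + ei + ej) - f(c + ei - ej)
                       - f(c - ei + ej) + f(c - ei - ej)) / (4 * h * h)
    return H

c = np.array([0.3, 0.4])
print(np.linalg.eigvalsh(hessian(free_energy, c)).min() > 0)   # convex at c?
```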
taking the gradient of and observing that ,can be written as therefore , we can formulate as the cross - diffusion equations multiplying this equation by , summing over , observing again that , and integrating by parts , we arrive at the energy equation since is assumed to be positive definite on , where , and , this gives , thanks to lemma [ lem.ab ] , estimates for and , thanks to the equilibrium boundary condition and poincar s inequality , estimates for .first we show a result estimating the norms of two vectors from below .[ lem.ab ] let , be such that . then, for any , the constant is not optimal .for instance , if , we have the theorem of pythagoras , .let be the projection of on and be the orthogonal part . then , clearly , .by young s inequality with and , we have we deduce from that , and thus , finishing the proof .[ lem.hess ] let , defined in the pressure relation , be a symmetric matrix whose maximal eigenvalue satisfies .then the hessian of the free energy is positive definite , i.e. where is given by . in particular, is uniformly positive definite .a straightforward computation shows that , where let .it holds that defining , , and for , the quadratic form can be rewritten as since , we may define , which yields the norm of can be estimated from above : since , we have or , and .therefore , we infer that is strictly positive : we apply lemma [ lem.ab ] to to obtain which , since , implies that the relation and the definition of allow us to write this , together with , yields the desired lower bound for .[ lem.inv ] the mapping , is invertible .since is positive definite in , it follows that is one - to - one and the image is open .we claim that is also closed .then , and the proof is complete .let , , define a sequence in such that as .the claim follows if we prove that there exists such that .since varies in a bounded subset of , the theorem of bolzano - weierstra implies the existence of a subsequence , which is not relabeled , such that converges to some as , where , .we assume , by contradiction , that .let us distinguish two cases ._ case 1 : _ there exists such that . if , then implies that , which contradicts the fact that is convergent .thus it holds that .this means that for some .however , choosing in and exploiting the relation leads to , contradiction ._ case 2 : _ for all , it holds that and . arguing as in case 1 , it follows that for all , which is absurd .we conclude that , which finishes the proof .[ lem.b ] let , for , where and satisfies .then for all and , where , only depend on ( the constant in ) and . from , , and follows that applying lemma [ lem.ab ] with and to the expression in the brackets yields since , this finishes the proof of the first inequality .the second one is proved in an analogous way .we consider the following time - discretized and regularized problem in : with homogenous dirichlet boundary conditions where is given , , , , , and we write .note that is positive definite by lemma [ lem.b ] .we reformulate - as a fixed - point problem for a suitable operator .let \to l^\infty(\omega;{{\mathbb r}}^n) ] follows from the above coercivity estimate for .let us show that is continuous .then , because of the compact embedding for , is also compact. let , , define a sequence converging to in and let ] , where , putting in evidence the dependence on . 
by definition , (\mu^{(m ) } ) = f((\mu^*)^{(m)},\sigma^{(m)}) ] .it follows that (\mu^{(m ) } ) - { { \mathcal a}}[(\mu^*)^{(m)}](\overline\mu ) , \mu^{(m)}-\overline\mu\big\rangle + \langle { { \mathcal a}}[(\mu^*)^{(m)}](\overline\mu ) - { { \mathcal a}}[\overline\mu^*](\overline\mu ) , \mu^{(m)}-\overline\mu\big\rangle \nonumber \\ & = \big\langle f((\mu^*)^{(m)},\sigma^{(m ) } ) - f(\overline\mu^*,\overline\sigma ) , \mu^{(m)}-\overline\mu\big\rangle .\label{3.aux1}\end{aligned}\ ] ] clearly , is bounded in and , by the compact embedding , also in .this fact , together with the convergences in and , implies that (\overline\mu ) - { { \mathcal a}}[\overline\mu^*](\overline\mu ) , \mu^{(m)}-\overline\mu\big\rangle & \to 0 , \\\big\langle f((\mu^*)^{(m)},\sigma^{(m ) } ) - f(\overline\mu^*,\overline\sigma ) , \mu^{(m)}-\overline\mu\big\rangle & \to 0.\end{aligned}\ ] ] consequently , by , (\mu^{(m ) } ) - { { \mathcal a}}[(\mu^*)^{(m)}](\overline\mu ) , \mu^{(m)}-\overline\mu\big\rangle \to 0.\ ] ] the previous monotonicity estimate for shows that (\mu^{(m ) } ) - { { \mathcal a}}[(\mu^*)^{(m)}](\overline\mu ) , \mu^{(m)}-\overline\mu\big\rangle \ge \int_\omega\na(\mu^{(m)}-\overline\mu)^\top ( b^*)^{(m)}\na(\mu^{(m)}-\overline\mu)dx.\ ] ] then we deduce from the strict positivity of and the poincar inequality that strongly in .the uniform bound for in implies that strongly in for any .take .then the embedding is compact , and , possibly for a subsequence , strongly in . by the uniqueness of the limit, the convergence holds for the whole sequence .this shows the continuity of .we can now apply the fixed - point theorem of leray - schauder to conclude the existence of a weak solution to .let be a solution to .employing as a test function and summing over gives where .since and is convex , it follows that and therefore , lemma [ lem.b ] shows that let , for some .we introduce the piecewise constant functions in time for and ] . then can be formulated as now , we sum over and employ and to obtain in the following , denotes a generic constant independent of and , while denotes a constant depending on but not on .we deduce from and poincar s lemma that by lemma [ lem.hess ] , the matrix is uniformly positive definite .thus , the uniform bound for in provided by implies a uniform bound for for all in , where .therefore , since is bounded and , in particular , is uniformly bounded in . using these estimates in showsthat in view of estimates and , we can apply the aubin - lions lemma in the version of , ensuring the existence of a subsequence , which is not relabeled , such that , as , in fact , in view of the bound , this convergence holds in for any .furthermore , we have it holds that for a.e .let . by , , andfatou s lemma , we infer that , for a subsequence , which implies that , a.e . in .the fact that a.e . in implies that a.e . in property and the relation a.e . in that is a.e .convergent as for .let be such that is convergent for and let we want to show that either or .let us assume by contradition that ( here is the number of elements in ) .it follows that since , the first sum on the right - hand side diverges to , while the second sum is convergent .so the right - hand side of the above equality is divergent , while the left - hand side is convergent , by assumption .this is a contradiction .thus either the set is empty or it equals , i.e. for a.e . , either for , or . summarizing up , .it follows from that exists such that moreover , since , we infer that on . 
these convergences allow us to perform the limit in , obtaining we will now show that a.e . in .then this implies that a.e . in and so , since .to this end , summing up the components in yields ( remember that in ) let .we employ the test function in giving an integration in time in the interval ] ) yields since the function inside the integral on the right - hand side vanishes in the region , we can rewrite the above equation as we want to show that the integral on the right - hand side is bounded from above by a constant that depends on but not on . we show first that .first , we observe that , because of , this implies that .let .we decompose the first term on the right - hand side is bounded in .the same holds true for the second term since for and is uniformly bounded in .we infer that , showing the claim .the right - hand side of becomes identity , the bound for , and the above estimate imply that taking the limit inferior on both sides and applying fatou s lemma , we obtain which implies that for a.e . , , and . as a consequence , is a weak solution to - . actually , equation is satisfied for test functions in but a density argument shows that the equation holds in .next , we show that strongly in for any . since a.e . in and uniformly bounded , it suffices to show that the term is strongly convergent ( see ) .this is a consequence of the fact that both and are uniformly bounded in .the convergence of , together with fatou s lemma , then allows us to take the limit in and to obtain .we point out that , since all the constants appearing in the previous estimates are independent of the final time , all the bounds that have been found hold true in the time interval .we conclude the existence proof by showing that .we use the test function in , where notice that a.e . in , .it follows that inserting on the right - hand side , the first term is nonpositive ( because of , we have ) and we end up with where the constant estimates the term proportional to and and .it is straightforward to see that is uniformly bounded with respect to in the region . since , we deduce that is uniformly bounded with respect to .furthermore , the regularity implies that is uniformly bounded with respect to . as a consequence , taking the limit inferior on both sides of the above inequality and applying fatou s lemma , we conclude that .this finishes the proof of part ( i ) .we first show that , for some generic constant , let , i.e. .it follows from lemma [ lem.hess ] that this gives it remains to estimate the right - hand side .we claim that . indeed , with , we have the first term on the right - hand side is bounded since implies that .then , since for , hence , by definition of , and therefore , putting together and yields .a computation shows that ( in fact , this is the gibbs - duhem relation , see ) and ( this follows from ) . since , we have .we use the fact that varies in a bounded domain and employ the poincar inequality with constant and the identity to find that which , thanks to , leads to taking into account and lemma [ lem.b ] , we obtain we deduce from the above inequalities and the facts that and , where .a nonlinear gronwall inequality shows that where and .we define now \to{{\mathbb r}} ] satisfies the assumptions of theorem [ coro ] . 
a simple computation yields employing as a test function in leads to where it holds that and so , since is convex .we show now that as .we compute the above relations , together with the boundedness of and , allow us to apply the dominated convergence theorem and deduce that as . moreover , implies that as , .the continuity and boundedness of imply that in as . taking the limit inferior on both sides of and exploiting all the convergence relations as well as the nonnegativity of yield finally , by fatou s lemma, we conclude that holds .the final statement of theorem [ coro ] is a consequence of the following lemma .[ lem.max ] let , for be positive functions such that on for some constant , .let a constant exist such that in .finally , assume that holds for any satisfying .then in .let for . clearly satisfies . taking into account the assumptions of the lemma, we deduce that holds for the above choice of .since , the right - hand side vanishes . because of the nonnegativity of and the positivity of , we infer that in and in , concluding the proof .we multiply by , sum over , and compute in the sense of distributions : where .because of the gibbs - duhem relation , it follows that , and consequently , we claim that .indeed , definition leads to then we show that in , , where .then equation is uniformly parabolic . using as a test function in and integrating by parts gives since , it follows that .thus , together with young s inequality , we find that the second term on the right - hand side can be bounded by means of the cauchy - schwarz , gagliardo - nirenberg ( with constant , using ) , and young inequalities : so implies that in view of our regularity assumptions on , we have , and we conclude with gronwall s lemma that , i.e. in , .we multiply with and use the lower bound and the gagliardo - nirenberg inequality with : we claim that for some constant which only depends on and . because of on , we have , which implies that where is the poincar constant .the function satisfies in and on . by elliptic regularity , for some constant , and therefore , we infer from and that becomes and hence , where .let . then , by assumption , .since , the coefficient remains positive in a small time interval . as a consequence , is nonincreasing in .a standard prolongation argument then implies that and is nonincreasing for all . in particular , from this fact and estimates and , we deduce that and gronwall s lemma allows us to conclude .we solve system - numerically in one space dimension for the case and , imposing dirichlet and homogeneous neumann boundary conditions for .let with be a discretization of the time interval and with , , and , be a uniform discretization of the space interval .we set for . for the discretization of, we distinguish between the two boundary conditions .we employ the staggered grid and denote by and the approximations of and , respectively .the values at the interior points are the unknowns of the problem , while the values at the boundary points are determined according to the initial condition is discretized by approximating the time derivative by the implicit euler scheme and the diffusion flux at by the implicit upwind scheme the finite - difference scheme for becomes where , , . to be consistent with the boundary conditions , we define and , . 
here , we do not need to employ the staggered grid , so we use the original grid .the implicit scheme - works also in this situation , with the only difference that the boundary conditions are simply given by , , and the initial condition is defined by .the nonlinear equations are solved by using the matlab function fsolve , with as the initial guess .the time step is chosen in an adaptive way . at each timeiteration , once the new iterate is computed , the relative difference between two consecutive iterates , is evaluated and compared to the maximal tolerance . if , the iterate is rejected , the time step is halved , and the step is repeated . otherwise , the iterate is accepted . before the next iterate is computed , is compared to the minimal tolerance ( with ) . if , the time step is increased by a factor .otherwise , is kept unchanged . in the simulations , we have chosen the values , , and .we present the results of four numerical simulations , referring to the different boundary conditions and different choices of the parameters , namely where and , which corresponds to a lower bound on the hessian of the free energy approximately equal to . in all cases , the initial data have the form which describes an accumulation of , close to , , respectively .the parameters , , , are chosen in such a way that , which is necessary in order to have convergence to a steady state in the case of dirichlet boundary conditions , since any steady state is characterized by the pressure assuming a constant value .for homogeneous neumann boundary conditions and ( case i ) , figure [ fig1 ] shows the evolution of the mass densities , and the pressure at the time instants ( the solution at represents the steady state ) as well as the relative free energy . as expected , the pressure converges to a constant function for `` large '' times .the stationary mass densities are nonconstant .the neumann boundary condition is numerically satisfied , but we observe a boundary layer at , originating from the `` constraint '' of constant pressure .the relative free energy decays exponential fast .after , the stationary state is almost reached and the values of the free energy are of the order to the numerical precision .): evolution of the mass densities , , pressure , and logarithm of the relative free energy .,width=680 ] in figure [ fig2 ] , we present the results for ( case ii ) , still with homogeneous neumann boundary conditions .we observe that the relative free energy decay is slightly slower than in case i but still exponential fast . ):evolution of the mass densities , , pressure , and logarithm of the relative free energy .,width=680 ] for the case of dirichlet boundary conditions , an additional term has to be added to the free energy in order to have free energy decay , due to the presence of additional boundary contributions in the free energy balance equation .more precisely , we choose the modified free energy , where , are such that the boundary term in vanishes . here, we have used the relations and ( see ) .the boundary term vanishes if solves the linear system where , are the values of at , , respectively , and for . if , the above linear system is uniquely solvable .we remark that the modified free energy does not change the energy dissipation but it is nontrivial , as is nonconstant in time .figures [ fig3 ] and [ fig4 ] illustrate the evolution of , , , and of the modified relative free energy . 
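the adaptive time - stepping strategy described above is easy to prototype. the following sketch is not the authors' code: it replaces the full cross - diffusion system by a single scalar conservation law and uses scipy's fsolve in place of the matlab routine mentioned in the text; the grid, the flux function, the tolerances eps_max / eps_min and the growth factor xi are illustrative assumptions, and only the accept / reject logic for the time step mirrors the description.

import numpy as np
from scipy.optimize import fsolve

# toy 1d stand-in for the cross-diffusion system: a scalar conservation law
# u_t + (v(u) u)_x = 0, implicit euler in time, first-order upwind in space
# (the drift v is assumed nonnegative)
h, n = 0.02, 51
x = np.linspace(0.0, 1.0, n)
u = np.exp(-50.0 * (x - 0.3) ** 2)          # assumed initial accumulation
tau, t, t_end = 1e-3, 0.0, 0.05
eps_max, eps_min, xi = 1e-2, 1e-3, 1.5      # tolerances and growth factor

def velocity(u):
    return 1.0 - u                          # assumed drift (plays the role of -grad p)

def residual(u_new, u_old, tau):
    flux = velocity(u_new) * u_new
    div = np.empty_like(u_new)
    div[1:] = (flux[1:] - flux[:-1]) / h
    div[0] = flux[0] / h                    # zero inflow at the left boundary
    return u_new - u_old + tau * div

while t < t_end:
    u_new = fsolve(residual, u, args=(u, tau))
    eps = np.linalg.norm(u_new - u) / max(np.linalg.norm(u), 1e-14)
    if eps > eps_max:                       # reject: halve tau and repeat the step
        tau *= 0.5
        continue
    u, t = u_new, t + tau                   # accept the step
    if eps < eps_min:                       # relative change very small: enlarge tau
        tau *= xi

print(f"reached t = {t:.4f} with final time step tau = {tau:.2e}")

the same accept / reject loop carries over unchanged when the residual is replaced by the discretization of the full system.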
again, the mass densities at ( they are basically stationary ) are nonconstant , and the modified relative free energy converges exponentially fast .the decay rate is faster for , contrarily to what happens in the case of neumann boundary conditions . ):evolution of the mass densities , , pressure , and logarithm of the relative modified free energy .,width=680 ] ): evolution of the mass densities , , pressure , and logarithm of the relative modified free energy .,width=680 ]we prove the integral identity in a formal setting .we proceed as in the proof of theorem [ coro ] .let for , and let . since , the statement follows if for .a straightforward computation gives since , it follows that moreover , putting these three identities together yields for .h. amann .nonhomogeneous linear and quasilinear elliptic and parabolic boundary value problems . in : h.j .schmeisser and h. triebel ( editors ) , _ function spaces , differential operators and nonlinear analysis _ , pages 9126 .teubner , stuttgart , 1993 .
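as a small post - processing aid related to the exponential decay of the relative free energy reported above: given sampled values of a relative free energy at discrete times, the decay rate can be estimated by a least - squares fit of log E against t. the data below are synthetic placeholders, not values from the runs shown in the figures.

import numpy as np

# synthetic stand-in for (t_k, E_k): E(t) = 3 exp(-4.2 t) with small noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 40)
E = 3.0 * np.exp(-4.2 * t) * (1.0 + 1e-3 * rng.standard_normal(t.size))

# linear least squares on log E = log C - lambda * t
A = np.vstack([np.ones_like(t), -t]).T
(logC, lam), *_ = np.linalg.lstsq(A, np.log(E), rcond=None)
print(f"estimated decay rate lambda = {lam:.3f}, prefactor C = {np.exp(logC):.3f}")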
the transport of single - phase fluid mixtures in porous media is described by cross - diffusion equations for the mass densities . the equations are obtained in a thermodynamically consistent way from mass balance , darcy s law , and the van der waals equation of state for mixtures . the model consists of cross - diffusion parabolic equations with a hypocoercive diffusion operator . the global - in - time existence of weak solutions in a bounded domain with equilibrium boundary conditions is proved , extending the boundedness - by - entropy method . based on the free energy inequality , the large - time convergence of the solution to the constant equilibrium mass density is shown . for the two - species model and specific diffusion matrices , an integral inequality is proved , which reveals a minimum principle for the mass fractions . without mass diffusion , the two - dimensional pressure is shown to converge exponentially fast to a constant . numerical examples in one space dimension illustrate this convergence .
discrimination between given hypotheses is one the most basic tasks in our every day lives .very often we are confronted with the necessity of having to identify an option between some possible choices based on some acquired evidence . in the quantum setting the discrimination problemconsists of identifying one of two possible states given a number of identical copies available for measurement .this task encompasses a plethora of non - trivial theoretical and experimental implications . in the usual setting the a priori states are known , i.e. , the classical information characterizing the possible states is provided and the discrimination protocol is tailored for this specific information .one usually considers two types of approaches : unambiguous and minimum error discrimination .an unambiguous protocol is one where the identification of the state is error free .of course , this is only possible stochastically , i.e , unless the states are orthogonal , the protocol must give an inconclusive answer ( the `` i do not know '' outcome ) with a non vanishing probability . in the minimum error approach , the protocol always yields a definite answer , which may be wrong some of the times .an optimal protocol is one which minimizes the inconclusive or the error probability . it may also be possible to go continuously from one case to the other by considering margins of error probabilities . in spite of being such a fundamental problem ,only very recently a closed expression for the asymptotic error probability has been obtained ( see and references therein ) , the quantum chernoff bound , from which metric distances and state densities can be derived .very much in the spirit of universal computers , it is interesting to consider discrimination devices that are not specialized in a specific discrimination instance but can discriminate between arbitrary pairs of states . in these ,the set of possible states enter the device as programs " , i.e. , the classical description of the states is not provided beforehand , rather the information is incorporated in a quantum way ( this can also be viewed as an instance of relative information ) .these devices have program ports that are loaded with the program states , and a data port that is loaded with the unknown input state one wishes to identify .the device will identify the state of the data port as being one of the states fed in the program ports , but this identification will in general be erroneous with a probability that decreases with the number of copies of the states entering the ports .one can also regard these devices as learning machines , where the device is instructed through the program ports about different states , and based on this knowledge the machine associates the state in the data port with one of the states belonging to the training set . 
increasing the number of copies of states at the program and data ports of course increases the chances of correct identification .it is particularly relevant to understand how the probability of error scales with an increasing number of copies and what are the corresponding error rates .the value of this rate is one of the most relevant parameters assessing the performance of the device .we will consider the discrimination of two general qubit states , although most of our results can be generalized to higher dimensional systems ( see for a single copy continuous variable setting ) .for simplicity we will assume that the prior occurrence probability of each state is identical and compute the unambiguous and minimum error rates for optimal programmable devices .we first study the performance of such devices for pure states .we compute the error probabilities for any number of pure qubit states at the input ports .some of the results are already available in the literature , but the way we formalize the problem here is crucial to treat the more general mixed state case .in addition we obtain analytical expressions that enable us to present the results and study limiting cases in a unified way . in particular , when the program ports are loaded with an infinitely large number of copies of the states we recover the usual state discrimination problem , since it is clear that then one has the classical information determining the states entering the program ports . on the other hand ,when the number of copies at the data port is infinitely large , while the number of copies at the program ports are kept finite , we recover the state comparison problem .we extend the previous pure state study to the case of mixed input states . in this scenariowe only compute the minimum error probability , as no unambiguous answers can be given if the states have the same support .the performance of the device for a given purity of the input states allows to quantify how the discrimination power is degraded in the presence of noise .the expressions here are much more involved , however one can still exploit the permutation symmetry of the input states to write the problem in a block - diagonal form .we then obtain closed expressions for the probability of error that can be computed analytically for small number of copies and numerically evaluated for a fairly large number of copies .we are also able to obtain analytical expressions for some asymptotic rates .again , the leading term , as in the pure state case , is seen to coincide with the average minimum error for known states .we also analyze the fully universal discrimination machine , i.e. , a device that works optimally for completely unknown input states . in this case onehas to assume a uniform distribution for the purity .in contrast to the pure state distribution , there is no unique choice , and different reasonable assumptions lead to different uniform priors .here we consider hard - sphere , bures and chernoff priors .the paper is organized as follows . in the next sectionwe obtain the error probabilities for pure states when each program port is fed with copies of each state and there are copies of the unknown state entering the data port . in sec[ sec : pure - limits ] we study the asymptotic rates in several scenarios . 
in section [ sec : mixed ] we analyze the performance of these devices when the ports are loaded with copies of states of known purity and obtain some interesting limiting cases in sec .[ sec : mixed - limits ] .we finally obtain the error rates for the fully universal programmable machine .some brief conclusions follow and we end up with two technical appendices .let us start by fixing the notation and conventions used throughout this paper .we label the two program ports by and .they will be loaded with states and , respectively .the data port , , is the middle one and will be loaded with the states we wish to identify .we also use the short hand notation ] .we may also omit the subscripts and when no confusion arises .we assume that the program ports are fed with copies of each state and the data port with copies of the unknown state .this is a rather general case for which closed expressions of the error probabilities can be given .the case with arbitrary and copies at each port is discussed in appendix [ na - nb - nc ] .the expressions are more involved but the techniques are a straightforward extension of the ones presented here . when the state at the data port is or , the effective states entering the machine are given by the averages [ \psi_1^{\otimes m}]_b [ \psi_2^{\otimes n}]_c \nonumber \\\sigma_2&=&\int d\psi_1 d\psi_2 [ \psi_1^{\otimes n}]_a [ \psi_2^{\otimes m}]_b [ \psi_2^{\otimes n}]_c\ , , \end{aligned}\ ] ] respectively . the integrals can be easily computed using schur lemma , = \frac{1}{d_x } \openone_x ] , where ^{\bot}=\openone_n-[\psi^{\otimes n}] ] .the minimum error probability in this limit can be tackled in a similar fashion .the asymptotic expression of eq . , though not as direct as in the unambiguous case , is rather straightforward to obtain .notice that the dominant factor in the term containing factorials inside the square root is .so , we can effectively replace the square root term by one , for all .taking into account that for the square root vanishes , we have the minimum error probability of a strategy that first estimates perfectly the input state and then tries to associate the correct label to it is given by helstrom formula for and where is the trace - norm of operator .substituting the expression of the states we obtain \otimes [ \psi^{\otimes n}]^{\bot } \right .\nonumber \\ & & \phantom{xxxxxxxxxxx } - [ \psi^{\otimes n}]^{\bot}\otimes[\psi^{\otimes n } ] \| \bigg)\nonumber \\ & = & \frac{1}{2}\left ( 1-\frac{2}{2(n+1 ) } \| [ \psi^{\otimes n}]\otimes [ \psi^{\otimes n}]^{\bot}\| \right)\nonumber \\ & = & \frac{1}{2}\left ( 1-\frac{n}{n+1 } \right)=\frac{1}{2(n+1 ) } , \label{eq : me - averaged}\end{aligned}\ ] ] where in the first equality we have subtracted the common term \otimes[\psi^{\otimes n}] ] . as expected , the result is again independent of . to end this section we compute the asymptotic error probabilities for the symmetric case , that is , when all the ports are loaded with the same ( and large ) number of copies . in the unambiguous approachwhen the first nonvanishing order of reads to compute the minimum error probability , it is convenient to write eq . for as where and we first observe that is a monotonically increasing function and hence it takes its maximum value at .second , we note that around this point where is the shannon entropy of a binary random variable and we have used that and .similarly , one has and hence . 
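both ingredients used above — the schur - lemma average of $[\psi^{\otimes k}]$ over the haar measure and the trace - norm (helstrom) expression for the minimum error probability — can be checked numerically for small numbers of copies. the sketch below is an independent illustration rather than the authors' derivation: it verifies that the haar average equals the projector onto the symmetric subspace divided by its dimension $k+1$, and then evaluates the minimum - error probability for the two averaged states in the smallest configuration of one copy per port; the one - copy assignment and the equal priors are choices made for the example.

import itertools
import numpy as np

def haar_qubit(rng):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    return v / np.linalg.norm(v)

def perm_operator(perm, k):
    # permutation of k qubit tensor factors as a 2^k x 2^k matrix
    dim = 2 ** k
    P = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (k - 1 - q)) & 1 for q in range(k)]
        j = sum(bits[perm[q]] << (k - 1 - q) for q in range(k))
        P[j, i] = 1.0
    return P

def sym_projector(k):
    perms = list(itertools.permutations(range(k)))
    return sum(perm_operator(p, k) for p in perms) / len(perms)

# 1) schur-lemma check: the haar average of |psi><psi|^(x k) is P_sym/(k+1)
rng = np.random.default_rng(1)
k, n_samples = 2, 20000
avg = np.zeros((2 ** k, 2 ** k), dtype=complex)
for _ in range(n_samples):
    psi = haar_qubit(rng)
    rho = np.outer(psi, psi.conj())
    state = rho
    for _ in range(k - 1):
        state = np.kron(state, rho)
    avg += state
avg /= n_samples
print("max deviation from P_sym/(k+1):",
      np.abs(avg - sym_projector(k) / (k + 1)).max())

# 2) helstrom bound for the averaged states with one copy per port:
#    sigma_1 = P_sym(AB)/3 (x) 1_C/2   and   sigma_2 = 1_A/2 (x) P_sym(BC)/3
id2 = np.eye(2)
sigma1 = np.kron(sym_projector(2) / 3.0, id2 / 2.0)
sigma2 = np.kron(id2 / 2.0, sym_projector(2) / 3.0)
trace_norm = np.abs(np.linalg.eigvalsh(sigma1 - sigma2)).sum()
print("minimum-error probability:", 0.5 * (1.0 - 0.5 * trace_norm))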
with this, the probability of error in this limit reads finally , we perform the change of variables and use that in eq . for to obtain where we have defined the function which converges very quickly to its exact value ( the first four terms already give a value that differ in less than from the exact value ) .we now move to the case when the program and data ports are loaded with mixed states .this situation arises when , e.g. , there are imperfections in the preparation or noise in the transmission of the states .it is reasonable to suppose that these imperfections have the same effect on all states , i.e. to consider that the states have all the same purity .the input states are then tensor products of where is a unitary vector and are the usual pauli matrices . in what followswe assume that only the purity is known , i.e. one knows the characteristics of the noise affecting the states , but nothing else .this means that the averages will be performed over the isotropic haar measure of the sphere , like for pure states . at the end of this sectionwe also analyze the performance of a fully universal discrimination machine , that is , when not even the purity is considered to be known .notice that mixed states can only be unambiguously discriminated if they have different supports , which is not the case when the ports are loaded with copies of the states as they are full - rank matrices .therefore , only the minimum error discrimination approach will be analyzed here .it is worth stressing that the computation of the optimal error probability in the multi - copy case is very non - trivial , even for known qubit mixed states .only recently feasible methods for computing the minimum error probability for a rather large number of copies have been developed and the asymptotic expression of this probability has been obtained .the main difficulty can be traced back to the computation of the trace - norm [ see eq . ] of large matrices .the dimension of the matrices grows exponentially with the total number of copies entering the machine , and for a relative small number of them the problem becomes unmanageable .however , as it will be clear , it is possible to exploit the permutation symmetry of the input states to write them in block - diagonal form crucially reducing the complexity of the problem .the two effective states we have to discriminate are where is the invariant measure on the two - sphere .any state having permutation invariance , as e.g. , can be written in a block diagonal form using the irreducible representations of the symmetric group .each block is specified by the total angular momentum and a label that distinguishes the different equivalent representations for a given the angular momentum takes values for odd ( even ) and the number of equivalent representations for each is that is . for each blockwe have which , of course , is the same for all equivalent irreducible representations , i.e. 
, independent on the label .we sketch here the origin of the factors appearing in ( full details can be found in ) .the first factor comes from the contribution from the singlets present in a representation made up of spin-1/2 states .the summation term is the trace of the projection of the remaining states in the symmetric subspace with total angular momentum , where we can use the rotational invariance of the trace to write each state in diagonal form .this term simply reads \end{aligned}\ ] ] and hence very much in the same way as it happened in previous sections , the only difference between the diagonal basis of and is the ordering of the angular momenta couplings . in first couple subspaces and and obtain where is the projector onto the subspace with quantum numbers and is defined in eq . .notice that depends only on the purity of the state and on the total angular momentum .notice also that the tensor product of a mixed state has projections in all subspaces and the blocks are not uniquely determined by the value of , i.e. , one has to keep track of the labels and as well .of course , subspaces with different quantum numbers are orthogonal , i.e. , =\delta_{\xi \xi'}{{\rm tr}}\openone_{\xi } ] in the regime of low purities for all cases is represented ( solid lines).,title="fig:",width=291 ] numerical results of the minimum error probability as a function of the purity of the input states for the symmetric case are depicted in fig .[ fig : fig1 ] .one sees that for low values of ( ) the dependence on the purity is not very marked , the curves are concave almost in the whole range of the purity . for larger there is an interval of purities where the behavior changes quite significantly . for , e.g. , the inflection point occurs at . at very large values of expects a step - like shape with an inflection point approaching because the probability of error remains very small for and is strictly 1/2 at .the shape of the curves is explained by the existence of two distinct regimes . for high purities the probability of erroris well fitted by a linear function in the inverse of the number of copies .we get where the value coincides with the analytical value computed for pure states eq .. of course , this approximation can not be valid for low purities . in this range of low puritythe minimum error probability is very well approximated by the gaussian function ] or ] and $ ] is . if eq .is satisfied for , then it will be satisfied all over this range of , since is a monotonically increasing function of .the overlap has the very simple form thus , eq .is equivalent to which is clearly true .eq . does not hold if , for which we have .notice that since no error is made for , for , the total inconclusive probability reads , which has the explicit expression where .note that when the term proportional to disappears and the square root term simplifies , so we recover the closed form given in the main text [ cf .eq . ] .the minimum error probability can be computed entirely along the same lines . for a pair of states we have , and the total error probability reads this expression coincides with eq .( 31 ) of .here we compute the average of the coefficients [ see eq . ] for the hard sphere , bures and chernoff priors , eqs . , considered in the fully universal discrimination machine .m. guta and w. kotlowski , arxiv : 1004.2468 . for machine - learning related ideas in the context of quantum information sciencesee also : ameur e. , brassard g. 
and gambs , proc . 19th canadian conference on artificial intelligence ( canadian ai06 ) , pp . 433 - 444 . springer - verlag ; a. bisio , g. chiribella , g. m. d'ariano , s. facchini , p. perinotti , phys . rev . a * 81 * , 032324 ( 2010 ) ; s. hentschel and b. c. sanders , phys . rev . lett . * 104 * , 063603 ( 2010 ) . t. rudolph , r. w. spekkens , and t. s. turner , phys . rev . a * 68 * , 022308 ( 2003 ) ; p. raynal , n. lütkenhaus , and s. j. van enk , phys . rev . a * 68 * , 022308 ( 2003 ) ; u. herzog and j. bergou , phys . rev . a * 71 * , 050301(r ) ( 2005 ) .
quantum state discrimination is a fundamental primitive in quantum statistics where one has to correctly identify the state of a system that is in one of two possible known states . a programmable discrimination machine performs this task when the pair of possible states is not a priori known , but instead the two possible states are provided through two respective program ports . we study optimal programmable discrimination machines for general qubit states when several copies of the states are available in the data or program ports . two scenarios are considered : one in which the purity of the possible states is a priori known , and the fully universal one where the machine operates over generic mixed states of unknown purity . we find analytical results for both the unambiguous and the minimum - error discrimination strategies . this allows us to calculate the asymptotic performance of programmable discrimination machines when a large number of copies is provided , and to recover the standard state discrimination and state comparison values as different limiting cases .
photons are well suited to be quantum information carriers . over the past decades, there has been a large number of both theoretically proposed and experimentally tested quantum information protocols designed for photons .notable example with practical applications is the quantum cryptography that allows for unconditionally secure transmission of information .one can use both both fiber and free - space optics to distribute photon - encoded information over considerable distances .even though photons are not so susceptible to interaction with the environment as for instance atoms , their state also deteriorates because of noise and absorption in the communication channel . since channel transmissivity and level of noise are limited by unavoidable technological imperfections , a viable alternative strategy to increase communication range is based on amplification .however , quantum properties of photon states ( unless the state is known _ a priori _ ) are not preserved by classical amplification based on mere `` measure and resend '' or stimulated emission approach , thus these approaches are not always suitable .quantum amplifiers have to be used instead . in discrete variable encoding , polarization or spatial degree of freedom of individual photonsare usually used to encode qubits .it is therefore not surprising that optical qubit amplifiers are proposed and built to address these degrees of freedom .similarly to other linear - optical quantum gates , the qubit amplifiers are also probabilistic and their successful operation has to be heralded by specific detection outcome on ancillary photons . thus apart from amplification gain , one has to introduce success probability to characterize performance of qubit amplifiers .( color online ) conceptual scheme of a heralding qubit amplifier .input state is transformed according to eq .( [ eq : amplification ] ) .d detector , epr ancillary photons ,g amplifier , ff feed forward.,width=226 ] in general , a qubit amplifier performs the following transformation on a mixture of vacuum and single qubit state where and stand for the input and output qubit density matrices , denotes normalization and is the overall ( nominal ) gain of the amplifier .so far only perfect amplifiers ( ) have been discussed in literature . in this paper , we extend the analysis of our previously published scheme to the general case of imperfect amplification ( ) . the paper is organized as follows : in sec .[ sec_princip ] , we describe the principle of operation of the proposed scheme. moreover we introduce the describe the basic quantities used to characterize our proposed amplifier .we introduce fidelity of the operation as the overlap between the input and output qubit states .this analysis allows us to establish the success probability versus fidelity trade - off and observe increased success probability at the expense of a fidelity drop that we describe in sec .[ sec : trade - off ] . 
finally , in sec .[ sec : state - dependent ] , inspired by optimal state - dependent quantum cloning , we also show that having some _ a priori _ information about the input state allows us to optimize the amplification procedure in order to improve this fidelity versus success probability trade - off .we conclude in sec .[ sec : conclusion ] .( color online ) scheme for state - dependent linear - optical qubit amplifier as described in the text .source of entangled ancillary photon pairs , pbs polarizing beam splitter , ppbs partially polarizing beam splitter ( defined in the text ) , wp wave plate , pdf polarization dependent filter , d standard polarization analysis detection block ( for reference see ).,width=321 ] in this section we describe the principle of operation of our scheme depicted in fig .[ fig : scheme ] so that in subsequent sections we can analyze the above mentioned fidelity vs. success probability trade - off and state dependent amplification .the signal state is prepared in superposition of vacuum and single polarization encoded qubit state ) and the qubit and describing the superposition of horizontal and vertical polarization basis states .the amplifier also makes use of an ancillary pair of entangled photons in a state parametrized by angle $ ] for horizontal polarization . on the other hand ,the ppbs2 reflects all horizontally polarized light and has reflectivity for vertical polarization .partially polarizing beam splitter ppbs1 can be described in terms of creation operators where labelling of modes corresponds to the scheme in fig .[ fig : scheme ] .analogous transformation describes the action of the ppbs2 .projection on diagonal and anti - diagonal linear polarization is performed in both detection modes and .the resulting signal state is recovered by combing horizontal and vertical component on the output fully polarizing beam splitter pbsout .one can trace how the individual components of the three - photon total state ( signal and ancillary photons ) get transformed by the setup assuming post - selection on detection of one photon in each detection mode and or anti - diagonal linear polarization states ( both detected photons share the same polarization ) the output signal state can be expressed as where the output state is kept intentionally not normalized to provide simple expression for success probability in subsequent calculations . alternativelythe output signal state ( also not normalized ) takes the form of if or coincidence is observed ( detected photons have mutually orthogonal polarizations ) .a feed - forward operation has to be adopted to correct the qubit part of state given by eq .( [ eq : signalout1 ] ) to be identical to qubit part of eq .( [ eq : signalout2 ] ) .this feed - forward transformation consists of polarization dependent filtrations and when or coincidence are detected .these filtrations are functions of the ancilla parameter and reflectivity , but are signal state independent : in the case of or coincidence detection , additional phase shift ( sign flip ) is imposed to vertical polarization ( ) .this process is not lossy so we assume it is performed in all the subsequently evaluated scenarios . for the subsequent analysis ,several quantities are crucial .first of them is the overall success probability of the procedure .it can be expressed using the norm of the output state and . 
not implementing the lossy feed - forward, the success probability reads where the factor of two describes the two equally probable coincidences leading to or . on the other hand , if the feed - forward is implemented , the output states and are transformed to the form of and the corresponding success probability reads a second very important parameter of the amplifier is the gain the ratio between qubit and vacuum components for the amplified state divided by the analogous ratio for the initial input state as show in eq .( [ eq : amplification ] ) . in general, the gain can differ for horizontal and vertical polarizations .one can easily define the gain for both polarizations in the case the feed - forward is implemented if the lossy feed - forward is not implemented , the gain can be calculated as average gain for output state and the overall gain defined in eq .( [ eq : amplification ] ) is obtained by combining the two gains for horizontal and vertical polarization . in the case ofapplied feed - forward , the overall gain is given by and in the case without the lossy feed - forward ( only the phase flip performed ) it is given similarly by the last quantity that has to be calculated in this section is the output qubit fidelity .this fidelity compares the overlap between the qubit state at the input with the qubit subspace of the output state .if the feed - forward is implemented , the fidelity is simply if only the feed - forward phase correction and not the full lossy transformation is performed , the fidelity of the output qubit reads where is the normalized density matrix of the single photon subspace being a balanced mixture of and with transformation performed on the later .( color online ) success probability given by eq .( [ eq : r0psucc ] ) as a function of output state fidelity given by eq .( [ eq : r0fidelity ] ) in the case of infinite gain is depicted for four different input states as described in the text . ]( color online ) success probability as a function of both output state fidelity and amplification gain is depicted for four different input states as described in the text .thr stands for threshold of unreachable area.,title="fig : " ] ( color online ) success probability as a function of both output state fidelity and amplification gain is depicted for four different input states as described in the text .thr stands for threshold of unreachable area.,title="fig : " ] + ( color online ) success probability as a function of both output state fidelity and amplification gain is depicted for four different input states as described in the text .thr stands for threshold of unreachable area.,title="fig : " ] ( color online ) success probability as a function of both output state fidelity and amplification gain is depicted for four different input states as described in the text .thr stands for threshold of unreachable area.,title="fig : " ] in this section we investigate the trade - off between success probability and the output state fidelity . for this analysis, we fixed the parameters and we also took into account the lossy feed - forward .first , we studied this trade - off on the particular case of infinite gain .the infinite gain is an important setting of qubit amplifiers . 
to achieve this regime ,one simply sets .thus , the previously obtained expressions can be considerably simplified .coefficients and become equal so there is no need for lossy feed - forward any more ( ) , only the is performed .success probability and qubit fidelity take the form of \end{aligned}\ ] ] and respectively .the fig .[ fig : psucc_fid_infgain ] shows the dependence of the success probability on output state fidelity for four different input state parametrized by and .calculation reveals that there is no improvement in success probability in the case of a balanced input state ( ) and the success probability remains constant and fidelity independent .in contrast to that , the more the input state is unbalanced , the more pronounced is the dependence of the success probability on fidelity .this fact will reemerge in the next section discussing state dependent amplification .for instance in the case of , the success probability can be increased by a factor of 1.7 at the expense of 85% output state fidelity . in the next step , we performed numerical calculation of maximum achievable success probability for given values of overall gain given by eq .( [ eq : amplification ] ) and the output state fidelity given by eq .( [ eq : fidelityff ] ) .this calculation has been carried out on the same four input state as mentioned above by varying the and parameters .plots in fig .[ fig : psucc_fid_gain ] present the obtained results confirming the finding described in fig .[ fig : psucc_fid_infgain ] .in addition to that , one can observe that set to lower values of gain , the setup performs better for higher fidelities than for lower ones . in the case of higher gainshowever , the setup behaves as described in the infinite gain analysis . also we were able to establish state - dependent unreachable area set of gain and fidelity coordinates that can not be reached by presented setup .this area is visualized by the threshold ( trh ) line shown in fig .[ fig : psucc_fid_gain ] .( color online ) probability density function given by eq .( [ eq : density ] ) over the poincar sphere for various values of parameter used in subsequent numerical simulations . labels , and denote position of horizontal , diagonal and right - hand circular polarization states respectively.,title="fig : " ] ( color online ) probability density function given by eq .( [ eq : density ] ) over the poincar sphere for various values of parameter used in subsequent numerical simulations . labels , and denote position of horizontal , diagonal and right - hand circular polarization states respectively.,title="fig : " ] + ( color online ) probability density function given by eq .( [ eq : density ] ) over the poincar sphere for various values of parameter used in subsequent numerical simulations .labels , and denote position of horizontal , diagonal and right - hand circular polarization states respectively.,title="fig : " ] ( color online ) probability density function given by eq .( [ eq : density ] ) over the poincar sphere for various values of parameter used in subsequent numerical simulations .labels , and denote position of horizontal , diagonal and right - hand circular polarization states respectively.,title="fig : " ] this section brings forward the main result of the paper : how can we improve the success probability of amplification given some _ a priori _ knowledge about the input qubit state ? 
for the purpose of quantifying the _ a priori _ information about the input signal , we use the von mises fisher distribution ( also known as the kent distribution ) describing dispersion on a sphere .this probability density function is defined as where is the input state parameter describing the axial angle of the state on the poincar sphere and , i.e. the concentration parameter , determines the amount of knowledge about the input qubit .[ fig : spheres ] depicts the probability distribution over the poincar sphere for various values of .note that in the case of , all states are equally probable ( therefore no _ a priori _ knowledge ) and the larger the concentration parameter is , the more precise information about the input state we have .this trend is illustrated in the tab .[ tab : kent ] providing the values of medians and first deciles for various values of .note that while throughout this paper we center the distribution around the northern pole of the sphere horizontal polarization the generality of our scheme does not suffer by this choice .if the knowledge about the input state is not centred around north pole , one can always perform a deterministic rotation to make it so and inverse it after the state comes out of the amplifier ..[tab : kent]values of medians and first deciles of the the von mises - fisher distribution for several values of the concentration parameter . [ cols= " <, < , < " , ] using this quantification of input state knowledge , we performed a series of numerical calculations with the goal to determine the fidelity success probability trade - offs .our results show the relation between the highest achievable average success probability for the fixed values of average gain and fidelity respectively , where and is the surface of the poincar sphere . only the integral is not trivialsince is a rational function of , thus it was calculated numerically , however the other integrals can be expressed as linear functions of .the investigated cases are depicted in fig [ fig : psucc_fid_kappa ] . in each casewe targeted one specific average overall gain value from the set , where the average was taken over input states distributed according to the von mises - fisher distribution for four different values of .for all the average gain and combinations , we determined the relation between the average output state fidelity and the average success probability .note that similarly as in the previous section , we assumed and we also took into account the lossy feed - forward . similarly as in the case analyzed in sec .[ sec : trade - off ] , not all the values of fidelity are accessible simply because of the fact , that the setup can not produce fidelity lower that a certain threshold that depends on the values of and average gain .it is a well expected result , that for the combination of and infinite gain , the success probability of the setup and fidelity are state - independent .this result can be analytically verified using formulas from sec .[ sec_princip ] for .in contrast to that , for other than infinite average gains , there is always a maximum of success probability depending on . for ,this maximum is found for unit fidelity .it follows from the above mentioned observations that for a given value of average gain and , there exists a specific fidelity value giving maximum success probability . 
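the medians and first deciles discussed above can be reproduced by direct sampling. the sketch below uses the standard fact that, with the mean direction at the north pole, the von mises - fisher distribution on the two - sphere gives $\cos\theta$ a density proportional to $\exp(\kappa\cos\theta)$, which can be inverted analytically; the printed quantiles are a generic illustration and are not copied from the table in the paper.

import numpy as np

def sample_vmf_costheta(kappa, size, rng):
    # inverse-cdf sampling of w = cos(theta) with density ~ exp(kappa * w)
    u = rng.uniform(size=size)
    if kappa == 0.0:
        return 2.0 * u - 1.0                 # uniform distribution on the sphere
    return 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa

rng = np.random.default_rng(7)
for kappa in (0.0, 1.0, 4.0, 10.0):
    w = sample_vmf_costheta(kappa, 200000, rng)
    theta = np.degrees(np.arccos(w))         # axial angle measured from the pole
    med, dec = np.percentile(theta, [50, 10])
    print(f"kappa = {kappa:5.1f}:  median = {med:6.2f} deg,  first decile = {dec:6.2f} deg")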
in some casesthis maximum is to be found on the threshold providing the lower bound on the accessible fidelity values , but surprisingly this is not always the case .this effect reflects the fact that the space of and values providing at the same time the required value of fidelity and the average gain has a non - trivial structure. thus , it seams that the question about the the limits on the success rate of the state - dependent quantum amplifier for fixed amplification parameters does not have a simple answer .nevertheless , it is apparent that in general one can increase the success probability of the setup at the expense of the lower success probability , but sometimes the maximum value can be reached at a lower cost than approaching the fidelity threshold .( color online ) maximum achievable success probability as a function of average fidelity for various values of average overall gain and state knowledge described by parameter of probability density function given by eq .( [ eq : density]).,title="fig : " ] ( color online ) maximum achievable success probability as a function of average fidelity for various values of average overall gain and state knowledge described by parameter of probability density function given by eq .( [ eq : density]).,title="fig : " ] + ( color online ) maximum achievable success probability as a function of average fidelity for various values of average overall gain and state knowledge described by parameter of probability density function given by eq .( [ eq : density]).,title="fig : " ] ( color online ) maximum achievable success probability as a function of average fidelity for various values of average overall gain and state knowledge described by parameter of probability density function given by eq .( [ eq : density]).,title="fig : " ] ( color online ) merit function given by eq .( [ eq : merit ] ) depicted for various parameters and average gains .,title="fig : " ] ( color online ) merit function given by eq .( [ eq : merit ] ) depicted for various parameters and average gains .,title="fig : " ] + ( color online ) merit function given by eq .( [ eq : merit ] ) depicted for various parameters and average gains .,title="fig : " ] ( color online ) merit function given by eq .( [ eq : merit ] ) depicted for various parameters and average gains .,title="fig : " ] one can argue , that some applications require perfect amplification with unit fidelity and thus it is not suitable to increase the success probability of the setup at the expense of lower fidelity .while this may indeed be true in some cases , realistic protocols for quantum communication have to be robust against at least some degree of fidelity drop . this leads us to formulate a figure of merit function inspired by where the numerator is the maximum of the product of fidelity and corresponding success probability and the denominator is just the success probability at unit fidelity . since the product of fidelity and success probability can be understood as some sort of output rate of signal qubits , the function gives maximum factor of increased output signal rate if one allows for the fidelity to be smaller then 1 ( see fig . [fig : kappa_merit ] ) . it can be easily shown that for the very specific case of both infinite average gain and infinite , the setup gives exactly the same outcomes of simple _ photon amplifier _ based on the `` detect and reproduce '' method . 
while for no _ a priori _ knowledge about the input state , the setup provides the same functionality as previously published_ qubit amplifier _ . in this sense, the setup covers the transition between these two conceptually different devices .the possibility to operate a qubit amplifier in a imperfect regime , where output qubit fidelity may be smaller than one offers significant increase in success probability if one has some _ a priori _ information about the input qubit state . in this paper, we analyzed the capabilities of the proposed linear - optical setup for the state - dependent qubit amplifier .we determined output state fidelity , gain and success probability as functions of setup parameters .next , we performed a numerical optimization of success probability depending on target output state fidelity and gain for various input states .this calculation shows that the closer the state is to the pole of poincar sphere , the more pronounced is the success probability improvement if fidelity is allowed to drop .also this effect manifests more strongly in the cases of higher gains .furthermore , we performed numerical analysis of success probability as a function of average output state fidelity for several target average gains and levels of _ a priori _ information about the input state quantified by the von mises - fisher distribution .the results shows how the maximum success probability versus fidelity trade - off behaves depending on average gain and _ a priori _ information about the input state . to clearly visualize the potential improvement in success probability, we have constructed a specific function of merit that we use to characterize the amplifier in several regimes ( various gains and levels of _ a priori _ knowledge about the input state ) .this analysis indicate that success probability can be increased in order of tens of percents depending on the conditions .interestingly , we found that in general ( for cases other than infinite gain ) the success probability of the amplifier does not increase in a monotonic way for decreasing fidelity .this result clearly demonstrates that the success probability of state - dependent amplifiers can be maximally increased without significant drop in output state fidelity .for this reason we believe that our results can stimulate further research on state - dependent qubit amplifiers and their potential applications .the authors gratefully acknowledge the support by the operational program research and development for innovations european regional development fund ( project no .cz.1.05/2.1.00/03.0058 ) .acknowledges project no .p205/12/0382 of czech science foundation .k. b. and k. l. acknowledge support by grant no .dec-2011/03/b / st2/01903 of the polish national science centre and k. b. also by the operational program education for competitiveness european social fund project no .cz.1.07/2.3.00/30.0041 while k. l. acknowledges the support by czech science foundation ( project no .13 - 31000p ) .the authors thank evan meyer - scott , thomas jennewein , norbert ltkenhaus , jan soubusta and jra cimrman for inspiration .
we propose a linear - optical setup for heralded qubit amplification with tunable output qubit fidelity . we study its success probability as a function of the output qubit fidelity , showing that at the expense of lower fidelity the setup can considerably increase the probability of successful operation . these results are subsequently applied in a proposal for state - dependent qubit amplification . similarly to state - dependent quantum cloning , the _ a priori _ information about the input state allows one to optimize the qubit amplification procedure and obtain a better fidelity versus success probability trade - off .
the statistical analysis of spectra aims at a comparison of the spectral fluctuation properties of a given physical system with theoretical predictions like those of random matrix theory ( rmt ) , those for integrable systems , or interpolations between these two limiting cases .specific problems arise whenever the spectra under consideration involve a relatively small number of levels .this is the situation in the analysis of spectra of nuclei in the ground state domain , of atomic spectra , and of molecular spectra . here, one usually deals with sequences of levels of the same spin and parity containing only 5 or 10 levels .several or many such sequences are then combined to obtain an ensemble of statistically relevant size .the sequences forming the ensemble may involve levels of different spin parity and/or levels from different nuclei .the resulting data set is typically analysed with regard to the nearest neighbor spacing ( nns ) distribution only . in view of the shortness of the individual sequences , correlations between spacings of levels are not investigated . in the present paper, we address two problems which arise in the analysis of such data .first , we ask whether a fit to a histogram of the nns distribution is the optimal way to analyze the data .we compare this method with the method of bayesian inference which has been successfully used to analyze the statistical properties of coupled microwave resonators .second , for a reliable analysis , one has to unfold the individual sequences .this yields a new data set with mean level spacing unity .then , one combines these level sequences to form a larger ensemble of spacings suitable for the statistical analysis .how big is the statistical error due to this unfolding procedure ?we answer this question for two extreme cases where the spacings are taken from a spectrum without ( with ) the long range rigidity typical for chaotic systems , respectively . in section [ nns ] , we give a brief summary of the nns distribution and of spectral analyses using it . in section [ baye ], we give a short account of bayesian inference tailored to the problems just mentioned . in section[ anal ] , we address the above mentioned two problems .section [ summ ] contains a summary and our conclusions .the canonical ensembles of random matrix theory ( rmt ) are classified according to their symmetries . here , we focus attention on systems which are invariant under time reversal and under space rotations .such systems are represented by the gaussian orthogonal ensemble ( goe ) of random matrices .the nns distribution of levels of the goe is well approximated by wigner s surmise here , is the spacing of neighboring levels in units of the mean level spacing .rmt was introduced originally to describe the spectral fluctuation properties of complex quantum systems .later , it has been conjectured that rmt also applies to quantum systems whose classical counterpart is chaotic .this conjecture has enormously widened the range of applications of rmt and has led to a juxtaposition of rmt and of the theoretical description of quantum systems which are integrable in the classical limit .the latter possess a nns distribution which is generically given by the poisson distribution , there also exist intermediate situations . examples are ( i ) mixed systems where the motion in some parts of classical phase space is regular and in other parts , chaotic , see refs . 
and references therein ; ( ii ) pseudointegrable systems which possess singularities and are integrable in the absence of these singularities , see e.g. refs . ; ( iii ) fully chaotic systems where a conserved symmetry is dynamically broken , or is ignored .here we focus attention on case ( iii ) .the hamiltonian for a system with strictly conserved symmetry is block diagonal .each block is characterized by a quantum number ( or a set of quantum numbers ) of the symmetry under consideration and may be separately considered as a member of a goe .symmetry breaking is modelled by introducing off diagonal blocks that couple diagonal blocks with different quantum numbers .the resulting spectrum differs from goe predictions .such modelling has been useful in the following cases .( i ) isospin mixing in nuclear spectra and reactions .( ii ) isospin mixing in the low lying states of .( iii ) the gradual breaking of a point group symmetry ( which is statistically fully equivalent to the breaking of a quantum number like isospin ) in an experiment with monocrystalline quartz blocks .( iv ) the electromagnetic coupling of the resonances in two superconducting microwave resonators .the statistical analyses relating to some of these cases can be found in refs . and .ignoring a conserved symmetry leads to a superposition of several goe spectra and to spectral fluctuation properties which are similar to those of the cases considered above .very strong mixing of states possessing different symmetries leads to a goe distribution , and the superposition of many goe spectra leads to a nns distribution which is poissonian .thus , there is a variety of physical situations that give rise to level fluctuations which are intermediate between the goe and the poisson cases .we attempt to describe the nns distribution in all these intermediate situations by a one parameter family of functions interpolating between expressions ( [ 0 ] ) and ( [ p ] ) .this family is defined in section [ prop ] .we are aware of the fact that our procedure can not be exact .arguments will be given to justify its use but it remains an approximation .the bayesian analysis of the nns distribution proceeds in three steps .( i ) we propose a probability distribution for the observed spacings of nearest neighbors .this function depends parametrically upon the parameter which measures the deviation from goe statistics .it is our aim to determine from the data .( ii ) we determine the posterior distribution for the parameter .( iii ) we deduce the optimum value of together with its statistical error .the three steps are outlined in the three subsections which follow . to construct , we consider a spectrum containing levels of the same spin and parity .( in practice , we usually deal with a set of spectra but consider only a single one in the present section .the generalisation to a set of spectra is considered in section [ 4.2 ] ) .the levels in may , however , differ in other conserved quantum numbers which are either unknown or ignored .the spectrum can then be broken up into sub spectra of independent sequences of levels , with . the fractional level number of is denoted by where and let denote the nns distribution for the sub spectrum .we assume that each of the distributions is given by the goe and has unit mean level spacing . to an excellent approximation ,the s are then given by wigner s surmise ( 1 ) . 
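two statements made above — that the goe nearest neighbor spacing follows wigner's surmise and that superposing many independent goe sequences drives the spacing statistics toward poisson — are easy to illustrate numerically. the sketch below uses 2x2 goe matrices, whose spacing distribution is exactly the wigner surmise, and approximates each sequence of the superposition by independent wigner - surmise spacings; this ignores longer - range goe correlations but is enough to exhibit the trend. the sample sizes and the number m of superposed sequences are arbitrary choices.

import numpy as np

rng = np.random.default_rng(3)

# spacings of 2x2 goe matrices: s = sqrt((h11 - h22)^2 + 4 h12^2),
# normalized to unit mean; their distribution is the wigner surmise
n_mat = 200000
h11 = rng.standard_normal(n_mat)
h22 = rng.standard_normal(n_mat)
h12 = rng.standard_normal(n_mat) / np.sqrt(2.0)
s_goe = np.sqrt((h11 - h22) ** 2 + 4.0 * h12 ** 2)
s_goe /= s_goe.mean()

# superposition of m independent unit-mean sequences, each built from
# i.i.d. wigner-surmise spacings (inverse cdf: s = 2 sqrt(-ln(1-u)/pi))
def superposed_spacings(m, length, rng):
    levels = []
    for _ in range(m):
        u = rng.uniform(size=length)
        levels.append(np.cumsum(2.0 * np.sqrt(-np.log(1.0 - u) / np.pi)))
    sp = np.diff(np.sort(np.concatenate(levels)))
    return sp / sp.mean()

for label, sp in (("single goe sequence", s_goe),
                  ("superposition, m = 10", superposed_spacings(10, 20000, rng))):
    print(f"{label:22s}  var(s) = {sp.var():.3f}   P(s < 0.25) = {np.mean(sp < 0.25):.3f}")

# reference values: the wigner surmise has var = 4/pi - 1 ~ 0.273 and
# P(s < 0.25) ~ 0.048, while poisson has var = 1 and P(s < 0.25) ~ 0.221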
the construction of the nns distribution for the superposition spectrum ( with unit mean level spacing ) from the s was explicitly carried out by rosenzweig and porter .their construction is not useful in the present context because in practice , we do not know the number of sub spectra , nor is it possible to determine all the parameters from the data . to overcome this difficulty, we use an approximation scheme first proposed in ref . which leads to an approximate nns distribution for viz .\nonumber \\ & & \times \exp \left \ { - \left(1 - f \right ) s - f \left ( 0.7 + 0.3 f \right ) \frac{\pi s^{2 } } { 4 } \right\ } \ . \label{10}\end{aligned}\ ] ] this function depends on only a single parameter , the mean fractional level number for the superimposed subspectra. this quantity will eventually be used as a fit parameter .the derivation of eq .( [ 10 ] ) and the definition of are discussed in appendix [ app1 ] . for a large number of sub spectra , is of the order of and , thus , small . in this limit, approaches given by eq .this expresses the well known fact that the superposition of many goe level sequences produces a poissonian sequence . on the other hand , when , approaches the nns of the goe .this is why we refer to as to the chaoticity parameter .we use as defined above for the analysis of the data .our model for has been constructed with case ( iii ) of section [ nns ] in mind , even when the symmetries are only weakly broken . in that case , the distribution ( [ 10 ] ) is not accurate for very small spacings because differs from zero at while the symmetry breaking interaction lifts all degeneracies .however , this defect should not affect the spacing distribution beyond the domain of very small spacings .the magnitude of this domain depends on the ratio of the strength of the symmetry breaking interaction to the mean level spacing . for the other cases of intermediate situations mentioned in section [ nns ], our model may not be the best choice .the experimental nns distributions of mixed systems ( case ( i ) ) are frequently analyzed using brody s interpolation formula , , where is a fit parameter and ^{\gamma + 1} ] .this neglect entails , however , that the distribution ( [ 4 ] ) does not satisfy the condition of unit mean spacing = 1 \ , .\label{9a}\ ] ] in order to satisfy this condition we determine the parameter from eq .( [ 9a ] ) while keeping eq .( [ 7 ] ) for the parameter .we do so in order to maintain the correct behavior of a collection of independent goe subsequences at small values of .hopefully , this approximation will take into account some of the effects of the neglected terms in the power series expansion of the logarithm .the proposed nns distribution of the composite spectrum is then given by \exp\left[-\left(1-f\right)s -q(f)\frac{\pi s^{2}}{4 } \right ] \, , \label{10a}\ ] ] where is defined by the condition ( [ 9a ] ) .this procedure yields an implicit relation between and which involves a complementary error function .we have numerically solved the implicit equation and obtained for in the interval of the resulting solution was approximated by the parabolic relation with this approximation , the mean spacing differs from unity by less than 0.5% .the distribution ( 3 ) coincides with the exact expression up to the 6th decimal digit .the exact values were obtained by doubly differentiating eq .( [ 1 ] ) , see ref . 
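A short sketch of the interpolating distribution follows. The prefactor of eq. (10) is garbled above, so the code assumes the commonly used form p(s|f) = [1 - f + q pi s/2] exp[-(1 - f)s - q pi s^2/4] with q = f(0.7 + 0.3 f); this form is normalized for any q >= 0 and reduces to the Poisson distribution at f = 0 and to Wigner's surmise at f = 1, consistent with the limits described in the text. The routine also determines q numerically from the unit-mean-spacing requirement, in the spirit of condition (9a); the parabolic fit quoted above is not reproduced, since its coefficients did not survive in the source.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def p_intermediate(s, f, q=None):
    """Interpolating NNS distribution; q defaults to f*(0.7 + 0.3*f) as in eq. (10)."""
    if q is None:
        q = f * (0.7 + 0.3 * f)
    return ((1.0 - f) + 0.5 * np.pi * q * s) * np.exp(-(1.0 - f) * s - 0.25 * np.pi * q * s**2)

def mean_spacing(f, q):
    return quad(lambda s: s * p_intermediate(s, f, q), 0.0, np.inf)[0]

def q_from_unit_mean(f):
    """Fix q by the unit-mean-spacing requirement (the role played by condition (9a))."""
    if f == 0.0:
        return 0.0
    return brentq(lambda q: mean_spacing(f, q) - 1.0, 1e-4, 2.0)

# the distribution is normalized for any q >= 0; only the mean spacing changes with q
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    q10 = f * (0.7 + 0.3 * f)
    print(f"f = {f:4.2f}   q of eq.(10) = {q10:5.3f}   <s> = {mean_spacing(f, q10):5.3f}"
          f"   q from <s> = 1 : {q_from_unit_mean(f):5.3f}")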
.the prior distribution eq .( [ 15 ] ) was evaluated numerically after inserting eq .( [ 12 ] ) into eq.([15 ] ) .the result was approximated by the sixth order polynomial the distribution assumes very small values even for only moderately large values of .therefore , the accurate calculation of the posterior distribution requires some care . in order to simplify the calculation ,we have rewritten eq .( [ 12 ] ) in the form where \rangle\ , . \label{18}\end{aligned}\ ] ] here , the notation has been used .we find that the function has a pronounced absolute minimum , say at .this minimum provides the maximum of the posterior distribution which is of interest in the neighborhood of .there one can represent by parameterizing in the form of a third order polynomial , the parameters and are implicitly defined by eq .( [ 18 ] ) . in the analysis of the nns distributions for the coupled microwave resonators ,the number of spacings for each coupling was so large ( ) that a gaussian distribution was found to describe the of the posterior very well .indeed , for a sufficiently large number of data the posterior should approach a gaussian .although this is not a consequence of the central limit theorem , the proof of this statement is similar .therefore , the posterior distributions obtained in that analysis were gaussians characterized by a mean value and the variance .the present analysis , however , addresses nns distributions that involve a considerably smaller number of spacings .therefore , we can not further simplify the approximation ( [ 20 ] ) and arrive at \right),\ , 0\le f\le 1\ , .\label{22}\ ] ] here , is the new normalization constant .we expand on the integration over uninteresting parameters briefly discussed in subsection [ expl ] . let the model be conditioned by two parameters , only is interesting .the precision with which one can infer the value of depends on one s knowledge of .we distinguish two cases of prior knowledge : ( i ) is known to have the value and ( ii ) is unknown . in the first case , is simply inserted into the model so that is inferred from . in the second case , is inferred after has been integrated over .the precision with which can be obtained , is better in the first case than in the second . as an example we consider the gaussian model here , the data form a 2dimensional vector as do the parameters .the 2dimensional correlation matrix contains the variances and the correlation coefficient .( this model has also been discussed in chap .12.2.2 of ) .\(i ) when is known to have the value , one obtains the gaussian model \right ) \label{a3.4}\ ] ] conditioned by .the posterior distribution of is gaussian with variance the variance ( ) of the second case is larger than the variance ( [ a3.5 ] ) of the first case .the variances agree if the correlation vanishes. then the problem factorises with respect to the parameters and .r. e. kass , phd thesis `` the riemannian structure of model spaces : a geometric approach to inference '' , university of chicago , 1980 ; r. e. kass and e. wassermann , j. am . statist .91 , 1343 ( 1996 ) .
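The three-step Bayesian analysis described above (propose p(s|f), form the posterior, extract the optimum f with its error) can be sketched in a few lines. The choices below are made only for illustration and are not those of the text: a flat prior on [0, 1] replaces the polynomial-approximated prior of eq. (15), the posterior is evaluated on a brute-force grid instead of through the expansion (18)-(20), and the data are synthetic spacings drawn from the interpolating distribution itself. For the short sequences at issue here the posterior is visibly non-Gaussian, which is why one keeps the full distribution rather than only a mean and a variance.

import numpy as np

def p_intermediate(s, f):
    q = f * (0.7 + 0.3 * f)
    return ((1.0 - f) + 0.5 * np.pi * q * s) * np.exp(-(1.0 - f) * s - 0.25 * np.pi * q * s**2)

def posterior(spacings, n_grid=1001):
    """Posterior of the chaoticity parameter f on [0, 1], flat prior (an assumption here)."""
    f_grid = np.linspace(0.0, 1.0, n_grid)
    df = f_grid[1] - f_grid[0]
    log_like = np.array([np.sum(np.log(p_intermediate(spacings, f) + 1e-300)) for f in f_grid])
    post = np.exp(log_like - log_like.max())
    post /= post.sum() * df                       # normalize on the grid
    return f_grid, post

def sample(f, n, rng):
    """Draw spacings from p(s|f) by rejection sampling on [0, 5] (p(s|f) <= 1 there)."""
    out = []
    while len(out) < n:
        s = rng.uniform(0.0, 5.0)
        if rng.uniform(0.0, 1.0) < p_intermediate(s, f):
            out.append(s)
    return np.array(out)

rng = np.random.default_rng(1)
spacings = sample(0.6, 20, rng)                   # a short sequence, as in nuclear data
f_grid, post = posterior(spacings)
df = f_grid[1] - f_grid[0]
mean = np.sum(f_grid * post) * df
std = np.sqrt(np.sum((f_grid - mean) ** 2 * post) * df)
print(f"posterior mean f = {mean:.2f} +/- {std:.2f} from {len(spacings)} spacings")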
we consider nearest neighbor spacing distributions of composite ensembles of levels. these are obtained by combining independently unfolded sequences of levels that contain only a few levels each. two problems arise in the spectral analysis of such data. the first lies in fitting the nearest neighbor spacing distribution to the histogram of level spacings obtained from the data. we show that the method of bayesian inference is superior to this procedure. the second problem occurs when one unfolds such short sequences. we show that the unfolding procedure generically leads to an overestimate of the chaoticity parameter. this trend is absent in the presence of long range level correlations. thus, composite ensembles of levels from a system with long range spectral stiffness yield reliable information about the chaotic behavior of the system.
the accurate prediction of gravitational waveforms produced in the collisions of black holes has become a central topic of research in general relativity , due to their potential observability with modern interferometric gravitational wave detectors . given the lack of symmetries in a collision ,it was believed for a long time that only a full numerical integration of the einstein equations would lead to reliable answers .recently it has been noticed that one can make some progress in understanding the collisions by using black hole perturbation theory , especially for collisions in which the holes start sufficiently close to each other for the collision to be considered to be the evolution of a single , distorted black hole . following this approach ,called the `` close limit approximation , '' linearized perturbation theory has been shown to provide a remarkably accurate picture of the head - on collision of momentarily stationary and boosted black holes .if this technique is going to be considered a valid method for cases in which full numerical simulations are still not available , one needs to develop indicators for deciding when the approximation is trustworthy .intuitive rules of thumb , such as requiring that a single , almost spherical , horizon initially surround both holes turn out to be too conservative to be practical , as was demonstrated by the momentarily - stationary head - on collision results . recently , it was suggested that the use of second order perturbation theory to provide `` error bars '' could be an effective way of estimating the domain of validity of the first order results . for the head - on collision of momentarily stationary black holes the proposal appeared to work very well .the purpose of this paper is to explore the application of second order calculations to an important case of initial data that is not momentarily stationary , the head on collision of initially `` boosted '' ( i.e. , moving ) holes .the introduction of boost turns out to add several technical and conceptual complications .these are important beyond their relevance to the specific collision studied here , since the cases of realistic physical interest ( collisions with spin and net angular momentum ) all involve initial data that is not momentarily stationary .this paper will therefore attempt to lay part of the groundwork for future investigations of more realistic situations .one of the conclusions of paper will be that perturbative calculations can be a very reliable tool to get quantitative predictions at a certain level of accuracy . if we wish to push the accuracy to a few percent level , some questions remain .this is in part due to the fact that numerical codes we use for comparison can not at present be trusted to that level of accuracy either , and in part due to several technical complications that appear in perturbation theory .it is remarkable , however , that the addition of considerable amounts of boost to the black holes does not preclude the applicability of perturbation theory techniques .this paper is closely related to two previous studies to which reference will frequently be made .the first is the linearized analysis for initially boosted holes , by baker _et al._ ; we shall refer to this as baabrps .the present work relies heavily on the second order formalism described in refs.( ) , which we shall refer to collectively as gnpp . 
in the present paper , sec .ii gives the details of how our initial data is parameterized with a separation between the holes , and a momentum parameter .a discussion is given of the meaning of perturbation theory in a two parameter space of solutions .it is also pointed out that a feature of our initial data differs from that used in computations with numerical relativity , and this difference hinders a perfect comparison of results .we find , in this section , that to use the second order formalism of gnpp , we must make gauge transformations that eliminate the first order monopole perturbation of the extrinsic curvature .this is the first of several technical issues that were not evident in the linearized work of baabrps or the momentarily stationary data of gnpp .section iii shows how the initial data is evolved with a wave equation that has the structure of a zerilli equation with a source term quadratic in the first order perturbations . from these results ,it is shown how one computes the gravitational waveforms and energies correct to second order , and the second - order correct results are presented and are compared with results from numerical relativity . in this sectiona discussion is given of a second order technical detail that has previously been ignored and that was of little consequence in the evolution of momentarily stationary initial data .the gauge fixing used in gnpp leaves unfixed a degree of freedom associated with time translations .this is not relevant to the computation of radiated energy , but must be resolved if our waveforms are to be compared with those of numerical relativity . in sec.iiia convenient method is given for fixing this gauge freedom in the waveforms .with this choice , the second order correct waveforms as well as energies are found to be in excellent agreement with the results of numerical relativity . the methods and results of the paper are briefly reviewed in sec.iv and the connection to future work is pointed out .we use for the most part notation introduced in baabrps and gnpp , which in turn is based on the notation of regge and wheeler .in addition , we will use here the convention of adding superscripts in parentheses _ when necessary _ to indicate whether a quantity is first or second order , and to indicate what multipole it refers to .multipole indices will be distinguished with `` . ''thus , for example , would indicate the third order quadrupole regge - wheeler perturbation .the momentarily - stationary misner initial solution to the initial value problem of general relativity for a two black hole situation has a convenient explicit analytical form ; for initially moving holes no such form is available and the first step in the problem is to find an appropriate initial value solution .we use the conformal approach ( see and references therein ) , in which one assumes the metric to be conformally flat and constructs the conformal extrinsic curvature . in terms of these variables the initial value constraint equations ( assuming maximal slicing ) read , where all the derivatives are with respect to flat space .one can construct solutions to the first set of equations ( momentum constraint ) for a single black hole with linear momentum , \ .\label{boyok}\ ] ] here is the distance , in the conformally related flat space , from the origin and is a vector in that space and can be shown to be the momentum of the hole in the asymptotically flat physical space . 
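The explicit single-hole solution of the momentum constraint quoted above (eq. boyok) did not survive the formatting, so the sketch below uses the standard Bowen-York form K_ij = 3/(2 r^2) [ P_i n_j + P_j n_i - (delta_ij - n_i n_j) P.n ], which is trace free (consistent with maximal slicing) and divergence free with respect to the flat metric; both properties are checked numerically. The momentum value, field point and separation below are arbitrary illustration choices, and, by linearity of the momentum constraint, the two-hole data discussed next is simply the sum of two such terms.

import numpy as np

def bowen_york_K(x, P, center=np.zeros(3)):
    """Conformal extrinsic curvature of a single boosted hole at `center`
    (standard Bowen-York form; assumed here, since the expression above is garbled)."""
    d = np.asarray(x, float) - center
    r = np.linalg.norm(d)
    n = d / r
    Pn = np.dot(P, n)
    delta = np.eye(3)
    return (3.0 / (2.0 * r**2)) * (np.outer(P, n) + np.outer(n, P)
                                   - (delta - np.outer(n, n)) * Pn)

def divergence_K(x, P, h=1e-4):
    """Flat-space divergence d_j K_ij by central differences (should vanish)."""
    div = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        div += (bowen_york_K(x + e, P) - bowen_york_K(x - e, P))[:, j] / (2.0 * h)
    return div

P = np.array([0.0, 0.0, 0.5])            # momentum of one hole (code units)
x = np.array([1.3, -0.7, 2.1])           # a generic field point
print("trace K =", np.trace(bowen_york_K(x, P)))     # ~0 : maximal slicing
print("div   K =", divergence_K(x, P))               # ~0 : momentum constraint

# two holes on the z axis, separated by L, with momenta directed towards the origin
L = 0.8
K_two = bowen_york_K(x, np.array([0.0, 0.0, -0.5]), center=np.array([0.0, 0.0,  L / 2])) \
      + bowen_york_K(x, np.array([0.0, 0.0, +0.5]), center=np.array([0.0, 0.0, -L / 2]))
print("two-hole trace K =", np.trace(K_two))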
by superposing two such solutions one obtains the conformally related extrinsic curvature representing two moving black holes ( although the flat space vectors in this case must be considered parameters that have a clear interpretation as momentum only in the case that the holes are widely separated ) . since the constraint equation ( [ momcons ] ) for the conformally related extrinsic curvature is linear, the superposition still solves the constraint .as was done in baabrps , we locate the two black holes on the axis of the conformally related flat space , at positions and we choose the flat space vectors to be symmetrically directed towards the origin and to have equal size .( the case of holes moving away from the origin is represented with a negative value of . ) as in baabrps , we treat the separation parameter as our perturbation parameter , and we expand the two - hole superposed in . due to the equal mass / opposite momentum symmetry ,only odd powers of appear , and the first two terms are : \\ & + & { 3 p l^3 \over 16 r^5 } \left [ \begin{array}{ccc } 2 ( 1 - 18 \cos^2 \theta + 25 \cos^4 \theta ) & 4 r \sin \theta \cos \theta ( -1 + 5 \cos^2 \theta)&0\\ 4 r \sin \theta \cos \theta ( -1 + 5 \cos^2 \theta ) & r^2 ( 1 + 6\cos^2 \theta-15\cos^4 \theta)&0\\ 0&0&r^2 ( -3 + 33 \cos^2 \theta -65 \cos^4 \theta + 35 \cos^6 \theta ) \end{array}\right]\nonumber.\end{aligned}\ ] ] here is the flat space distance to the origin , related to the flat space distance to the holes by , and the expressions in eq.([kapprox ] ) are valid only for .one must now put this solution in the right hand side of the hamiltonian constraint ( [ hamil ] ) and solve the resulting nonlinear elliptic equation . in this processone needs to decide which boundary conditions to impose for the elliptic problem .a common choice in numerical studies has been the use of symmetrization of the data through the two throats of the black holes ( see for instance and references therein ) .this kind of procedure is not very convenient if one is interested in semi - analytic work as we are , chiefly because symmetrizing implies using the method of images an infinite number of times and the expressions involved become quite large and difficult to handle . on the other hand, nothing prevents one from constructing unsymmetrized data for boosted black holes along the same lines as for the momentarily - stationary case ( the brill lindquist problem ) .this was recently emphasized by brgmann and brandt .here we will take this latter approach .this is not completely inconsequential , since the only numerical simulations available for comparison are for symmetrized data ; we will return to this point later . 
to generalize the brill lindquist construction to the case with momentum , one assumes the conformal factor to be composed of two pieces , one piece is singular at the points in flat space , and represents throats .that is , when one introduces a new radial coordinate of the form the `` singular point '' of the conformally related flat space is seen to have the actual geometry of space that is asymptotically flat as .the result of putting eqs .( [ twopiece ] ) and ( [ blpiece ] ) in ( [ hamil ] ) is which is to be solved for with the boundary conditions that is regular at and approaches unity as .notice that the right hand side of eq.([hamforphireg ] ) is well behaved at ; although the numerator diverges as , the denominator increases as .the main difference between our approach and that of brandt and brgmann is that we shall solve the initial value problem perturbatively .we have defined a two parameter ( and ) family of initial data that we could evolve into a two parameter family of spacetimes .we are , of course , primarily interested in the close limit , the limit of small initial separation , and hence of small . in principle, we could use initial data correct to second order in .this would mean solving eq.([hamforphireg ] ) for and expanding the solution in .in practice , eq.([hamforphireg ] ) would require numerical solution since the right hand side of eq.([hamforphireg ] ) is regular , and the green function for the equation is simple , this would present no significant obstacle , but it would have the disadvantage that we would have the solution only numerically .in particular , this would mean that the dependence on would not be transparent .for that reason , we follow a different path . as in baabrps, we consider small and small .more specifically , we consider a curve in the family of spacetimes , with a numerical factor of order one .this means , for example , that terms proportional to and are all of the same order , and are our lowest order perturbations .our second order perturbations will be of the form . due tothe symmetry of our configuration , no terms arise of order .since has a leading factor of , the numerator on the right hand side of ( [ hamil ] ) is proportional to .this point is rather subtle , and there is a temptation to come to a wrong conclusion .the expressions in ( [ kapprox ] ) are and suggest that the numerator in ( [ hamil ] ) is .it must be understood , however , that the expressions in ( [ kapprox ] ) are valid only for .the solution of ( [ hamil ] ) does not depend locally only on the large form of the right hand side .it connects boundary conditions at infinity with boundary conditions of the throats , boundary conditions for which the condition does not hold . 
due to this non - locality in ( [ hamil ] ) , or equivalently ( [ hamforphireg ] ) , the conformal factor depends in a complicated , nonpolynomial , way on the parameter .our `` small '' assumption amounts to taking the numerator in ( [ hamil ] ) , or equivalently ( [ hamforphireg ] ) , to be perturbative , at all points in space .this , and closely related issues , are further discussed in baabrps .the perturbative problem requires at several points the specification of a `` mass '' , either of the spacetime or of the black holes .let us discuss this in some detail .first , there is the problem of what mass does one use for the background schwarzschild spacetime around which we are doing perturbation theory .our experience shows that one should use the adm mass of the spacetime .we saw a similar situation when we analyzed the radiation generated as an initially conformally flat ( `` bowen - york'' ) spinning hole settles into its kerr final state .the spin rate was taken to be small , and the problem was treated as a perturbation away from the schwarzschild geometry . for apparently moderate amounts of spin ,the radiation generated was rather small , but the effect on the adm mass ( i.e. , the spin dependent increase over the schwarzschild mass ) could be a factor of several . by computing exactly the effect of spin on the adm mass, we found we could successfully apply perturbation theory for moderate spin.the only question could be if one uses that of the initial slice or that after the radiation has gone out , but for the cases of interest the difference is less than , so we will consider the adm mass of the initial slice for our background .then there is the issue of the initial data .the initial data for boosted black holes is characterized by the separation , the momentum and a `` bare '' mass for each hole , which also serves as overall scale factor .this mass has no clear physical meaning , and no equivalent in the reflection symmetric initial data used in numerical relativity .because of this , we would prefer to have the initial data parameterized by .since and determine uniquely the adm mass , this is formally no problem . in practicewe proceed in the following way .the adm mass ( for a given set of parameters ) can be found from the monopole part of our second order solution for the conformal factor .one can then write an expansion for of the form , with a constant .one can then take the intial data , and rewrite it as and use the above expansion ( [ expanmadm ] ) for the explicit form of . as a result of this reparameterization ,the first and second order terms of the initial data are left invariant .that is , one simply takes the initial data and where it read `` '' one replaces . for the second order pieces this is also true , the second order pieces of ( [ expanmadm ] )only contribute irrelevant terms to second order and do not change the initial data . 
summarizing ,we construct the perturbative initial data , and wherever it said we replace and this is consistent to the order of perturbation theory we are considering .therefore our problem is completely parameterized now by the adm mass , which also facilitates comparison with the full numerical data , which are also parameterized and normalized by the adm mass .this issue was the source of significant confusion in this area initially .in particular , the results of are not properly normalized and therefore depart from our predictions in this paper for moderate and large values of the momentum ( when starts to differ significantly from twice the `` bare mass '' of the holes ) .the concrete details of computation start with ( [ hamforphireg ] ) .for and for computations only to second order , one needs only the portion linear in of the extrinsic curvature . keeping terms only to second order , and taking on the right hand side of the hamiltonian constraint , the form of given in ( [ blpiece ] ) ,one gets the piece of the conformal factor , by solving , with the boundary condition at .we can simplify the solution of this poisson equation by decomposing the source into multipoles : the solution for the monopole and quadrupole parts are : one can also solve for the piece but we will not need it .the solution contains two constants and representing the homogeneous solution of the poisson equation .these constants determine , in effect , what boundary conditions are being chosen for the conformal factor .the choice we have made is that , and hence , is regular everywhere .the wrong choice of or means that when the solutions in ( [ phis ] ) are continued to they will be singular , so that does not contain all the information about the singularities .( this would be the case , for instance , if we took and to have the values for the symmetrized solution . 
) to determine what values of and give a regular solution , we start by noticing that from ( [ phis ] ) we can see that the asymptotic form of the conformal factor is , on the other hand , we know that the regular part of the conformal factor admits an expansion of the form , since the right hand side of ( [ hamforphireg ] ) falls off as one can conclude that the first three coefficients of the above expansion are part of the homogeneous solution , and have the form we therefore see that and are just the leading coefficients in an expansion of and in terms of and .one can obtain a closed form expression for and by applying gauss theorem to ( [ hamforphireg ] ) , and using the fact that ( by choice ) is regular on the whole plane .this takes the form where represents the right hand side of ( [ hamforphireg ] ) , and the integral on the left is evaluated over the boundary of the plane at infinity .it is clear that the only term that contributes to this integral is the leading term for the expansion of .from there we can therefore determine .considering the same construction , now for one can determine .the results are , so therefore for the leading terms we get , \right|_{p=0,l=0 } \label{quspert}\\ q_2 & = & -{1\over 8 } { \partial^4\over \partial p^2 \partial l^2 } \left .\left [ \int_0^\infty dr \int_0^\pid\theta\;r^4 \sin(\theta ) p_2(\cos(\theta ) ) s(r,\theta , p , l ) \right ] \right|_{p=0,l=0 } \nonumber.\end{aligned}\ ] ] these expressions are straightforward to evaluate , especially since to the order of interest we can replace the source by , where is the square of the trace of ( [ kapprox ] ) and is the conformal factor evaluated for .the integrals ( [ quspert ] ) however , can not be solved in closed form .instead they were computed numerically ( in several different ways ) .the numerical treatment of requires some care . as pointed out near the end of sec.iib ,though the source term is regular , both the numerator and denominator on the right hand side diverge at the points representing the holes .the results we get for the constants are these numbers are in excellent agreement with an approximate calculation due to brandt and brgmann .they obtain approximately the correction of the adm mass due to the momentum in the initial data .the leading term in the expansion in of their formula is precisely our .their result is .one can reproduce these formulas by considering expansions of the integrals considered in powers of . 
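The strategy used above, namely decomposing the source of the regularized Hamiltonian constraint into Legendre multipoles, solving a radial Poisson equation for each, and reading off the asymptotic coefficients, can be sketched generically. The source below is a made-up, localized quadrupolar profile standing in for the right-hand side of eq. (hamforphireg), whose explicit form did not survive; the radial Green-function formula is the standard one, and the outer integral is truncated at the edge of the grid, which is adequate for a rapidly decaying source. The far-field coefficient printed at the end plays a role analogous to the quadrupole constant extracted in the text.

import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre
from scipy.integrate import trapezoid, cumulative_trapezoid

def multipole_projection(source, r, l, n_theta=64):
    """S_l(r) = (2l+1)/2 * int_{-1}^{1} S(r, x) P_l(x) dx,  with x = cos(theta)."""
    x, w = leggauss(n_theta)
    Pl = Legendre.basis(l)(x)
    return 0.5 * (2 * l + 1) * np.array([np.sum(w * Pl * source(ri, x)) for ri in r])

def poisson_multipole(S_l, r, l):
    """Regular solution of the radial Poisson equation for multipole l:
       phi_l(r) = -1/(2l+1) [ r^-(l+1) int_0^r s^(l+2) S_l ds + r^l int_r^inf s^(1-l) S_l ds ]."""
    inner = cumulative_trapezoid(r**(l + 2) * S_l, r, initial=0.0)
    outer = trapezoid(r**(1 - l) * S_l, r) - cumulative_trapezoid(r**(1 - l) * S_l, r, initial=0.0)
    return -(inner / r**(l + 1) + outer * r**l) / (2 * l + 1)

def source(r_val, costheta):
    """Hypothetical axisymmetric quadrupolar source concentrated near r ~ 1."""
    p2 = 0.5 * (3.0 * costheta**2 - 1.0)
    return np.exp(-((r_val - 1.0) / 0.3) ** 2) * p2

r = np.linspace(1e-3, 20.0, 4000)
S2 = multipole_projection(source, r, l=2)
phi2 = poisson_multipole(S2, r, l=2)
# far from the source the solution falls off as (coefficient) * P_2 / r^3:
print("asymptotic quadrupole coefficient ~", (phi2 * r**3)[-1])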
having the initial data for the problem , we now can input it into the perturbation formalism and evolve it .the first order perturbations are evolved with a zerilli equation .the second order perturbations are evolved with a zerilli equation with a `` source '' term quadratic in first order perturbations .the details of how this is done for the momentarily stationary misner initial data was described in gnpp .those details , however , were rather specific to the misner case .in particular , the formalism in that work used the fact that in the misner initial metric the only first order perturbations are quadrupolar , and hence the source in the second order zerilli equation is constructed entirely from first order perturbations .those details also assumed that certain of the second order initial metric perturbations vanished .it will be convenient to use gauge transformations to satisfy these same conditions , so that the previous formalism can be used .the initial metric ( because it is conformally flat ) has the correct misner - like second order form .the extrinsic curvature , however , has a first order perturbation which generates perturbations in the evolved data .these perturbations would contribute to the source term of the second order zerilli equation .below we will use a first order gauge transformation to eliminate this first order perturbation .this transformation , however , changes the second order initial metric , taking it out of `` misner form . ''we then use a second order gauge transformation to restore it to the misner form .let us start by writing the perturbations in the standard regge wheeler notation for the multipolar decomposition of a metric tensor , ie , \ell(\cos\theta)\\ g_{\phi\phi } & = & r^2\sin^2\theta\sum_\ell\left[k^{(\ell)}(r , t ) + g^{(\ell)}(r , t)\cot\theta(\partial/\partial\theta ) \right]p_\ell(\cos\theta)\label{lastrw}\ .\end{aligned}\ ] ] in these expressions , is related to , the radial coordinate in the conformally related flat space , by .the `` background mass '' , as previously discussed , is the adm mass computed numerically for a given choice of . since the initial geometry is conformally flat , the only non - vanishing perturbations are those in and .the quadrupole parts , to second order , of these perturbations are : l^2 p^2 \over 35 r^3 ( \sqrt{r}+\sqrt{r-2m})^6 } \nonumber \end{aligned}\ ] ] to describe the perturbations of the extrinsic curvature we shall use a notation like that in ( [ firstrw])([lastrw ] ) , but shall prefix extrinsic curvature quantities with a `` '' .thus , for example , .the non - vanishing monopole perturbations of the extrinsic curvature are and the quadrupole perturbations are we start the process of gauge transformations by writing a general and first order gauge transformation vector , and we choose all components to vanish except , this gauge transformation eliminates the perturbation of the extrinsic curvature to first order , and leaves the first order initial data unchanged , but it introduces quadratic changes in the second order components of the initial data . 
to compute these second order changes ,we need a four dimensional metric , whereas up to now we have only dealt with the initial values .we assume zero perturbative lapse and shift to all orders and use the initial data to write an expansion in powers of a fiducial time for the four dimensional metric around , where is constructed in a straightforward manner with the 3-metric and the chosen lapse and shift , and the time derivative of the perturbative piece of the metric is completely determined by the extrinsic curvature , we then apply the formulas for gauge transformations to the above constructed metric and take the limit to recover the initial data in the new gauge . the second order changes due to quadratic combinations of the first order gauge transformation have and components .we will ignore the first , since they are non - radiative .the second order metric that results from the gauge transformation of ( [ firstm0])([secondm0 ] ) is : } { m { r}^{4 } \left(\sqrt{r}+ \sqrt { r-2\,m}\right)^{4}}}\\ k^{(2)(\ell=2 ) } & = & { \frac { 10\,{p}^{2}{l}^{2}}{m{r}^{3}}}-{\frac { 8\ , p { l}^{3}t \left ( 2m^2 - 5rm+2r^2 - 2 ( r-3 m ) \sqrt{r } \sqrt{r-2 m } \right ) } { { r}^{4 } m\left ( \sqrt { r}+ \sqrt { r-2\,m}\right ) ^{4 } } } \\g^{(2)(\ell=2 ) } & = & { \frac { 2\,{p}^{2}{l}^{2}}{m{r}^{3}}}+{16pl^3t\sqrt{r-2 m } \over r^3(\sqrt{r}+\sqrt{r-2m})^5 } \label{lastpost}.\end{aligned}\ ] ] in the formalism of gnpp the initial data was taken to have up to second order .this was true of our perturbed metric before the gauge transformation of ( [ firstm0])([secondm0 ] ) , but is not true of the post - transformation metric of ( [ firstpost])([lastpost ] ) .we now restore the conditions , for the quadrupole , with another , purely second order , gauge transformation : \\ m^{(\ell=2)}_a & = & { \displaystyle \frac { 1}{3 } } p\,l^{2}\,m \,t ^{3 } { -8 l\,r^{3 } m\sqrt{r - 2\,m}+ p\,t\,(r - 2\,m)(\sqrt{r}+\sqrt{r-2m})^5 \over r^{10}(\sqrt{r}+\sqrt{r-2m})^5}.\end{aligned}\ ] ] with this transformation , the final form of the first and second order parts of the quadrupole metric perturbations read , \left/ \left [ \left(\sqrt { r}+\sqrt { r-2\,m}\right ) ^{5}{r}^{3}m \right ] \right .+ \nonumber \\ & + & { \frac { 128\,{l}^{2}{p}^{2}{\it q2}}{\left ( \sqrt { r}+\sqrt { r-2\,m } \right ) ^{5}\sqrt { r}}}+{\frac { 192\,{m}^{2}{l}^{4}}{7\,\left ( \sqrt { r}+\sqrt { r-2\,m}\right ) ^{10}r } } \nonumber \\ % ----- k^{(2)(\ell=2 ) } & = & - { 1\over 35 } { l}^{2}{p}^{2 } \left [ 642\,{r}^{5/2}-1910\,\sqrt { r-2\,m}{r}^{2}-3820\ , \left ( r-2\,m\right ) { r}^{3/2}-3820\,\left ( r-2\,m\right ) ^{3/2}r - \right . \nonumber \\ & -&\left .1910 \,\left ( r-2\,m\right ) ^{2}\sqrt { r}-382\,\left ( r-2\,m\right ) ^{5/2 } \right ] \left/ \left [ \left ( \sqrt { r}+\sqrt { r-2\,m}\right ) ^{5 } { r}^{3}m \right ] \right .\nonumber \\ & + & { \frac { 128\,{l}^{2}{p}^{2}{\it q2}}{\left ( \sqrt { r}+\sqrt { r-2\,m } \right ) ^{5}\sqrt { r}}}+{\frac { 192\,{m}^{2}{l}^{4}}{7\,\left ( \sqrt { r}+\sqrt { r-2\,m}\right ) ^{10}r } } \nonumber \\ % ----- g^{(2)(\ell=2 ) } & = & { \frac { 2\,{p}^{2}{l}^{2}}{m{r}^{3 } } } \nonumber \end{aligned}\ ] ] and the extrinsic curvature is , for perturbations satisfying the misner conditions ( ) the first order , quadrupole , zerilli function , in the notation , of gnpp is given by + { r \over 3 } k^{(1)(\ell=2)}\ ] ] and its time derivative by , \ ] ] here use has been made of the first order einstein equations to simplify the occurrence of higher time derivatives . 
with the notation and formalism of gnpp , the second order , zerilli function is computed to be { [ 7\,r^{9/2}\left ( 2\,r+3\,m\right ) ] ^{-1}}\nonumber\end{aligned}\ ] ] where , and the time derivative of the second order , , zerilli function is given by , \left[14\,\rho^{4}\left ( 2\,r+3\,m\right ) r^{17/2}\right]^{-1}\nonumber\end{aligned}\ ] ] where a subscript denotes differentiation .to arrive at the expressions in ( [ secondzerdef ] ) and ( [ genchidot ] ) the second order einstein equations have been used to eliminate higher order time derivatives .the above expressions were automatically computed with maple computer algebra codes .it is impractical to give more details of their construction in print .the source codes and documentation can be found in our anonymous ftp server . when the explicit 3-geometry and extrinsic curvature of ( [ final3geom])([finalext ] ) are put into the expressions of ( [ secondzerdef ] ) and ( [ genchidot ] ), we arrive at the following initial data for the first and second order zerilli equations , we are now ready to evolve the initial data and compute waveforms and radiated powers .the initial data generated in the previous section is now fed to the first and second order zerilli equations where is the usual `` tortoise '' coordinate covering the exterior of the black hole , so the horizon is at and spatial infinity at , and the potential and source terms in the zerilli equations are given by , + \frac{2(l-1)(l+2)l(l+1)}{r\delta}\right\}\label{zpot}\\ { \cal s } & = & { 12 \over 7 } { \mu^3 \over \delta } \left [ -{12 ( r^2+mr+m^2)^2 \over r^4\mu^2\delta } \left(\psi,_t\right)^2 -4 { ( 2r^3 + 4r^2m+9rm^2 + 6m^3 ) \over r^6\delta } \psi \psi,_{rr } \right . \nonumber \\ & & + { ( 112r^5 + 480r^4m+692r^3m^2 + 762r^2m^3 + 441rm^4 + 144m^5 ) \over r^5\mu^2\delta^3 } \psi \psi,_t - { 1 \over 3r^2 } \psi,_t \psi,_{rrr }\nonumber \\ & & + { 18r^3 - 4r^2m-33rm^2 - 48m^3 \over 3 r^4\mu^2\delta } \psi,_r \psi,_t + { 12r^3 + 36r^2m+59rm^2+ 90m^3 \over 3 r^6\mu } \left(\psi,_r \right)^2 \nonumber \\ & & \ ! + \ !12 { ( 2r^5 + 9r^4 m + 6r^3m^2\!-\!2r^2m^3\!-\!15rm^4 - 15m^5 ) \over r^8\mu^2\delta } \psi^2 \!-\!4 { ( r^2+rm+m^2 ) \over r^3\mu^2 } \psi,_t\ !\psi,_{tr } \nonumber \\ & & -2 { ( 32r^5 + 88r^4m+296r^3m^2 + 510r^2m^3 + 561rm^4 + 270m^5 ) \over r^7\mu\delta^2 } \psi \psi,_r + { 1 \over 3r^2 } \psi,_r \psi,_{trr } \nonumber \\ & & - { 2r^2-m^2 \over r^3\mu\delta } \psi,_t \psi,_{rr } + { 8r^2 + 12rm+7m^2 \over r^4\mu\delta } \psi \psi,_{tr } + { 3r-7 m \over 3r^3\mu } \psi,_r\psi,_{tr } - { m \over r^3\delta } \psi \psi,_{trr } \nonumber \\ & & + { 4(3r^2 + 5rm+6m^2 ) \over 3r^5 } \psi,_r \psi,_{rr } \left .+ { \mu\delta \over 3 r^4 } \left ( \psi,_{rr } \right)^2 - { 2r+3 m \over 3r^2\mu } \left ( \psi,_{tr } \right)^2 \right]\end{aligned}\ ] ] where with and . the potential is given for general in ( [ zpot ] ) but we will use it only for . as can be seen , equations ( [ firstorderzeq ] ) and ( [ secorderzeq ] ) have the same form , including the same potentials , but the second order equation has a source term that is quadratic in the first order zerilli function and its time derivatives .we have written a fortran code to evolve these equations by a simple leapfrog algorithm .convergence to second order was checked and special care was taken to avoid noise from the high derivative order of the source term . 
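To convey the flavour of the numerical evolution without the closed-form expressions above (both the potential and the quadratic source are garbled in the text), the sketch below evolves only the homogeneous first-order equation, d^2 psi/dt^2 = d^2 psi/dr*^2 - V psi, with a simple leapfrog step of the kind mentioned above. The potential is the standard l = 2 Zerilli potential written with lambda = (l - 1)(l + 2)/2; this is an assumed standard form, not a transcription of eq. (zpot), and the Gaussian pulse is generic initial data rather than the close-limit data of eqs. (psit0)-(chidott0). The second-order equation would be evolved in exactly the same way with the quadratic source added to the right-hand side.

import numpy as np
from scipy.optimize import brentq

M = 1.0
lam = 2.0                                   # (l-1)(l+2)/2 for l = 2

def zerilli_potential(r):
    """Standard l = 2 Zerilli potential (assumed form)."""
    return (1.0 - 2.0 * M / r) * (
        2 * lam**2 * (lam + 1) * r**3 + 6 * lam**2 * M * r**2
        + 18 * lam * M**2 * r + 18 * M**3) / (r**3 * (lam * r + 3 * M)**2)

def r_of_rstar(rstar):
    """Invert the tortoise coordinate r* = r + 2M ln(r/2M - 1)."""
    if rstar < -40.0 * M:                   # near the horizon: asymptotic inversion
        return 2 * M * (1.0 + np.exp((rstar - 2 * M) / (2 * M)))
    g = lambda rr: rr + 2 * M * np.log(rr / (2 * M) - 1.0) - rstar
    return brentq(g, 2 * M * (1.0 + 1e-14), abs(rstar) + 20 * M)

x = np.arange(-300.0, 600.0, 0.1)           # uniform grid in r_*
r = np.array([r_of_rstar(xi) for xi in x])
V = zerilli_potential(r)
dx, dt = x[1] - x[0], 0.05                  # dt/dx = 0.5 keeps the explicit scheme stable

def rhs(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 - V[1:-1] * u[1:-1]
    return out

psi = np.exp(-((x - 100.0) / 10.0) ** 2)    # generic Gaussian pulse, time-symmetric start
psi_prev, psi = psi, psi + 0.5 * dt**2 * rhs(psi)    # first step for d(psi)/dt = 0 data

obs = np.argmin(np.abs(x - 400.0))          # "extraction" point far from the hole
waveform = []
for n in range(8000):                        # evolve to t = 400 M
    psi_new = 2 * psi - psi_prev + dt**2 * rhs(psi)
    psi_new[0] = psi_new[-1] = 0.0           # crude boundaries; the domain is large enough
    psi_prev, psi = psi, psi_new
    waveform.append(psi[obs])

print("peak |psi| at the extraction point:", max(abs(w) for w in waveform))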
to find the gravitational waveforms and power a transformation must be made to a gauge that is asymptotically flat to first and second order .the details of this process were discussed in gnpp and will not be repeated here .the result is that the transverse - traceless perturbations , in the asymptotically flat gauge , correct to second order , are encoded in the quantity \ , \ ] ] ( where it is understood that all quantities are ) and this is the quantity we shall plot below when we give waveforms of the outgoing gravitational radiation .the first order part of the radiation is given by the leading term in ( [ waveform ] ) ; the terms in square brackets are second order . from the landau - lifschitz pseudo - tensor in the asymptotically flat gauge ( as discussed in gnpp ) we find that the radiated power is \right\}^2.\ ] ] ( note that the perturbation parameter that appeared in is now incorporated into the definition of the zerilli functions we have used in the paper , as can be seen in formulas ( [ psit0]-[chidott0 ] ) .we have also directly computed the `` renormalized '' second order zerilli function in ( [ secondzerdef ] ) ) .before we move on to present our results and compare with the numerical relativity simulations of the potsdam / ncsa / washu group ( see baabrps ) , it is worth pointing out , again , that the numerical relativity simulations are for `` symmetrized '' initial data , in which an infinite number of `` image charges '' is used to construct initial data representing two throats connecting two isometric asymptotically flat universes . by contrast , the problem we are solving corresponds to three asymptotically flat universes . in the limit of zero momentumthe numerical simulations correspond to the 1960 misner initial data , and our results correspond to the brill - lindquist initial data . 
for the range of separations we are going to discuss the discrepancies between these two types of data are insignificant .( although we are working in the `` close limit , '' we will consider sets of data far apart enough to make the extra terms arising from symmetrization very small ) , but since the problem is a multi - parametric one , it is not obvious that this is true in all the ranges of parameters we will be discussing .more careful studies will be needed if one wants higher accuracies than the ones we are going to discuss here .we have also modified the potsdam / ncsa / washu code to run for unsymmetrized data , and for limited tests the results agree very well with the symmetrized ones in the range we are considering .this situation arose due to historical reasons : the numerical code was written before our work with non - symmetrized boundary conditions , whereas perturbation theory becomes very cumbersome if one starts carrying around the extra terms due to symmetrization .one particular problem that one faces when comparing brill lindquist ( unsymmetrized ) and symmetrized data sets is that the sets are parameterized in different ways .there is therefore ambiguity in how to compare the results .abrahams and price have discussed this in some detail , and show that there are different identifications one can take that yield sensible results along a good range of parameters , so we will not repeat the discussion here .we just state the convention we are following : for one of our results with momentum parameter and throat separation , we compare a numerical relativity result with the same adm mass and same numerical value of the momentum parameter , and with a separation parameter given by here is a parameter originally introduced by misner that is commonly used to parameterize symmetrized binary black hole initial data sets , and with these choices , the radiated waveforms agree very well when . notice that the discussion of abrahams and price is only for the case. the `` best '' identification between symmetrized and unsymmetrized data could probably be a -dependent notion .we will ignore this issue here , but it clearly requires further study . in the formalism of gnpp we chose to fix the coordinates by requiring that the metric be in the regge - wheeler gauge to first and second order .this can always be done , but it turns out that the coordinates are not quite uniquely fixed .the problem is quite generic and it has to do with how perturbation theory handles time translations in situations where the background spacetime is time - translation invariant . consider an exact quantity approximated by a perturbative series expansion , and perform now a first order gauge transformation corresponding to a pure time translation , with a constant , independent of and . replacing by the above expression ( and noticing that ) , we get so we see that the `` second order term '' in the expansion of the metric depends on the origin chosen for time . 
if one starts with perturbations in the regge - wheeler gauge , a transformation of type ( [ ttrans ] ) leaves the perturbations in the regge - wheeler gauge , but the second - order metric is changed , and in fact depends on an _ arbitrary _ constant .this indicates that a comparison of quantities to second order in perturbation theory around stationary backgrounds can be quite misleading : the same metric can have very different second order terms depending on the origin of time chosen .worse , these terms can be quite large , and are completely artificial .it is interesting to notice that if one computes the radiated energies using the formula we discussed previously ( [ power ] ) , the results are unchanged as expected by time translations ( the additional term turns out to be a total derivative that does not affect the computation of energies ) .but we want to go beyond giving perturbative results for radiated energy .we want also to compare perturbative waveforms with those of numerical relativity . since these waveforms are second - order correct quantities given as a function of time at a particular `` observation '' radius , we must be sure that we are using the same zero of time for both waveforms , that from perturbation theory and that computed with numerical relativity . fortunately , it is not difficult to eliminate the time - shift ambiguity in the metric . to do thiswe separate the waveform given in ( [ waveform ] ) into first and second order parts and and we construct the quantity ^ 2}\ .\ ] ] we then perform the time translation , arriving at the `` physical '' value for the second order waveform equivalently , we adjust the zero of time , and hence until the integral in the numerator of ( [ c0eq ] ) vanishes .the same coordinate fixing must be done to the numerically computed waveform . to do thiswe define to be .we then adjust the zero of time so that the integral of vanishes .these observations about time - shifts are also true in the time symmetric case .we have recomputed the results of with the zero of time fixed as above and have found that the results are changed by less than . for boosted black holes , on the other hand ,this time fixing is crucial for seeing the high accuracy agreement of the perturbative and numerical relativity results .we start to summarize our results by computing the radiated energy as a function of momentum for head - on collisions of black holes released from a separation of , .the results are depicted in fig.[fig1 ] .the figure shows the characteristic `` dip '' at low values of the momentum that was first noticed in baabrps .an important difference between that paper and the present results is that here , as explained in sec.iic , we are normalizing both the numerical and the perturbative results using the same adm mass .this leads to a much better agreement for large values of the momentum than that observed in baabrps . 
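Returning to the zero-of-time fixing described above: the defining integral did not survive in the text, so the sketch below implements one plausible reading of the prescription. Under t -> t + c the second-order piece acquires a contribution c d(psi^(1))/dt, as in the expansion shown earlier, so the residual freedom can be fixed by projecting that component out of psi^(2), i.e. by choosing c such that the overlap integral of d(psi^(1))/dt with the shifted second-order waveform vanishes. The toy waveforms are invented purely for the demonstration.

import numpy as np

def fix_time_shift(t, psi1, psi2):
    """Fix the residual time-translation freedom in the second-order waveform.

    Assumed reading of the prescription: under t -> t + c the second-order piece
    changes by c * d(psi1)/dt, so choose c such that
        int dt (d psi1/dt) * (psi2 + c * d psi1/dt) = 0
    and return the shifted ("physical") second-order waveform.
    """
    dpsi1 = np.gradient(psi1, t)
    c = -np.sum(dpsi1 * psi2) / np.sum(dpsi1**2)   # uniform grid: the measure cancels
    return psi2 + c * dpsi1, c

# toy stand-ins for the first- and second-order waveforms at the extraction radius
t = np.linspace(0.0, 200.0, 4001)
env = np.exp(-((t - 80.0) / 15.0) ** 2)
psi1 = env * np.sin(0.37 * t)
psi2_true = 0.1 * env * np.sin(0.74 * t + 0.3)
psi2 = psi2_true + 3.0 * np.gradient(psi1, t)       # contaminated by a spurious time shift

psi2_fixed, c = fix_time_shift(t, psi1, psi2)
print("projection coefficient c =", round(c, 3), "(the spurious piece had coefficient 3.0)")
print("residual overlap:", np.sum(np.gradient(psi1, t) * psi2_fixed))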
as an example of the size of the difference , for , and for , .a remarkable fact is that first order perturbation theory agrees very well even for large values of the momentum , and second order perturbation theory confirms this fact .this at first seems puzzling since our initial data was obtained through a `` slow '' approximation in which the momentum was assumed to be small .however , as was observed in baabrps , for large values of the momentum the initial data is `` momentum dominated '' , meaning that the extrinsic curvature completely dominates the initial data .therefore the errors made in computing the conformal factor via the slow approximation become less relevant than might be supposed .the overall picture of the energy therefore is very encouraging , the approximations presented seem to be working even beyond their expected realm , and second order perturbation theory is capable of tracking this fact , playing the expected role of `` error bars . ''this approach is not without pitfalls , however . in order to illustrate these, we turn to fig.[fig2 ] , which shows a close up look at the energy picture and also includes results for black holes initially boosted _ away from each other_. the first thing we notice is , that for black holes boosted away from each other there is , as expected , no `` dip '' in the energy .the dip is a first order effect that is due to a cancellation between terms that are momentum independent and terms that are linear in momentum .the cancellation turns to addition in the case of negative ( outwards ) .we also see that first order calculations are less accurate at the dip than at higher values of the momentum .this is somewhat puzzling since our approximation should work better the smaller the momentum .what seems to be happening is that first order theory does not accurately reproduce the higher order terms that make important contributions to the energy after the leading terms cancel in order to produce the dip .this is confirmed by the fact that first plus second order results are indeed very accurate at the dip. an instructive feature of these results is that for black holes boosted away from each other a cancellation of the second order terms takes place around , .clearly one can not regard second order perturbation theory as giving error bars when it is cancelling out .moreover , it shows that second order results _ beyond _ that value of can only be taken as rough indicators .we will return to this cancellation in somewhat more detail in connection with waveforms .another issue to be mentioned is how crucial it is to have chosen the mass of the initial slice as the mass of the background spacetime used in the perturbative calculations .our previous ( first - order ) work on boosted black holes used the `` bare '' mass ( adm mass for ) for the background .this is quite visible if one compares fig.[fig1 ] with fig.2 of baabrps . in the latter, first order perturbation results appeared to disagree with the numerical results by over an order of magnitude for .that was entirely due to the poor choice of background mass . 
in the present paper , using the numerically computed adm mass of the initial data , we see that first ( and second ) order results differ by only from the numerical results at .let us now turn to the examination of waveforms .the numerical code of the potsdam / ncsa / washu group extracts waveforms at slightly different values of the radial variable for varying s .we took this effect into account and extracted perturbative waveforms at the same radii as was used for the numerical relativity work . in all casesthe full numerical code has a very limited range of spacetime covered in the evolution .this forces the extraction to be done in a rather small range of radii around or so . with perturbation calculationswe could have extracted much further away , but we performed the extraction at exactly the same radius as those used by the numerical code .waveforms were observed to change shape rather significantly from one extraction radius to another even in such a close range , but we observed that as long as we extracted the perturbative waveform at the same radius as the full numerical result ( as opposed to , say , extracting farther out and then shifting the result back ) the agreement was roughly independent of extraction radius .however , this starts to hint at a main problem in comparing waveforms : one needs not only to match amplitudes but it is also crucial to match the phase , at least if one is interested in high accuracy . the phase is determined by , among other things , the extraction radius . determining the extraction radius , in turn , requires knowing the adm mass ( since one measures radii in units of adm mass ) .our full numerical code for computing the adm mass , in its present implementation , is accurate to a few percent .( this could be made better with more computer power than what is presently available to us ; the runs we made had 300 radial zones and 30 angular zones . )this limits the accuracy with which we know the adm mass , and hence the accuracy with which we can determine the phases .the technique , discussed in sec.iiib , of fixing the zero of time is helpful in giving an objective way of comparing phases .let us turn to the results .we present , below , the results for the waveform as defined in ( [ waveform ] ) .this is directly comparable ( up to a time derivative ) with the output of the full numerical relativity code , which outputs a zerilli function via the radiation extraction technique of assuming that the spacetime is a perturbation of schwarzschild and reading off the perturbations from the full numerical results .our presentation of waveform comparisons starts with the most disfavorable cases and moves to more favorable ones .figure [ fig3a ] shows the comparison of waveforms for , and fig.[fig3 ] corresponds to , .as we see , there is very good overall agreement .notice that ( taking into account the `` time - shift '' gauge fixing discussed above ) our procedure in the end has _ no free parameter _ ,i.e. , phases and amplitudes are predetermined in all cases , which makes the agreement more remarkable .if one looks carefully at the curves in the inset , which enlarges the region around the second positive peak , one sees that there are slight phase and amplitude disagreements .first order results tends to overshoot the waveform , whereas adding the second order correction tends to undershoot .there are slight differences in shapes as well . 
for large values of the momentum, we can take second order predictions as `` error bars '' only .however , for intermediate values , it is quite clear that first plus second order calculations offer a very accurate prediction of the waveforms .the reader should exercise care when comparing the results for waveforms with those of energies .this is due to a peculiarity of the formula for the radiated power ( [ power ] ) .as discussed in gnpp , the square that appears in ( [ power ] ) involves terms that are of `` third order '' in perturbation theory .therefore , to keep things consistent , when squaring the expression in curly braces , we only keep the mixed term and omit the term that is the square of the second order part of . as a consequence , the second order correction for the radiated energy depends mostly on correlations of phases of the first and second order waveforms rather than on their amplitudes . for instance , for the case we are studying , , the second order waveforms are only slightly smaller than the numerical ones , but the computed energy is lower .we now turn our attention to the area of the dip , , . in fig.[fig5 ] we show the waveforms for the inward boosted case ( the case with a dip in the energy ) .we see that second order corrections improve the accuracy markedly .clearly there are strange effects taking place for this value of the parameter .in particular , it should be noticed how first order theory overshoots the waveforms rather significantly in the second and third positive peak of the waveform , but not in the first one . in view of the fact that the energy is given by the correlation of the first and second order waveforms, those discrepancies in the first order waveforms would seem to be responsible for the large relative error in the calculation for the energy , even to second order .this is so , in spite of the fact that second order calculations yield very accurate waveforms .figure [ fig6 ] shows the case of ( holes moving initially apart ) .as could be predicted from the energy plot , a cancellation of the second order terms is taking place . in this case , therefore , one can not regard second order corrections as `` error bars , '' since it is clear that higher order terms are important .it is worthwhile pointing out that the cancellation is highly nontrivial , the initial data having the same amplitude for both inward and outward momenta .the cancellation takes place in the evolution , with the source terms of the second order zerilli equation playing a significant role . 
a simple way to understand the cancellation is to break up the evolution into three separate zerilli equations with three different initial sources, proportional to l^4, p l^3, and p^2 l^2 respectively. what one sees is that the cancellation occurs between the p l^3 term and the other two, and it clearly depends on the sign of p (for our simulations negative p is outward pointing). one can then infer that there is a curve of cancellations in the (p, l) parameter space that isolates a region where second order perturbation theory does not help. one cannot reach points in that region unless one changes the relative counting of powers of p and l in perturbation theory. a further study of this issue could therefore yield interesting results. we have seen that the use of combined first and second order perturbation theory can give excellent results for waveforms and energies of the radiation emitted in the head-on collision of two equal mass, initially boosted, black holes. the results show, however, that there are some subtleties, not previously appreciated, in the use of higher order perturbation theory and in the comparison with results from numerical relativity. the following points deserve attention, especially in connection with the application of higher order perturbation theory to further problems. (a) the comparison of perturbation results and numerical relativity results has pitfalls when comparisons are made between problems that are not identical. in our case we compared our perturbation result for "unsymmetrized" (brill-lindquist type) initial data with numerical relativity results for "symmetrized" (misner type) initial data. had we been comparing with unsymmetrized numerical data, the parameters of the data sets would have had identical meaning. since the data sets were not identical, a mapping of one parameter set onto the other had to be imposed. one degree of freedom in this mapping was subsumed in the choice to compare cases of equal adm mass, but the remaining element of choice in the mapping is a source of uncertainty in the high accuracy comparisons we are making. (we emphasize that the choice of mapping was made before any results were considered; there was no "fine tuning" to improve the comparison. the excellent agreement between the numerical and perturbative results must then be considered to be, among other things, an indication that there is no great sensitivity to the manner in which this mapping of parameters is done.) (b) there is no unique result that is correct to second order. different ways in which details are handled will produce results that are the same to second order but differ at higher order. these different results can have different ranges of validity and can exhibit different accuracy when compared with numerical work near the limit of validity. one example of this feature of perturbation theory is the dependence on parameterization. in our perturbative results we have seen another simple example: the second-order correct waveform consists of a first order and a second order piece. when the radiated energy is computed by squaring this waveform, one can choose simply to take the square, or to truncate the result and omit the fourth order contribution arising from the square of the second order contribution to the waveform. (we have made the latter "conservative" choice.)
) both results , of course , are equally justifiable for the order of perturbation theory we are doing , but the results are noticeably different . c ) in the present paper we have seen a particularly interesting example of the importance of higher order terms and the detailed way in which perturbation theory is applied . to make the comparison between symmetrized and unsymmetrized initial data we found that it is important to compare cases of equal adm mass , but the adm mass ( for fixed ) varies quickly with increasing initial momentum .if one computes this momentum dependence perturbatively , the agreement of perturbation theory and numerical relativity is limited . with the adm mass computed exactly ( i.e. , numerically ) the agreement is greatly improved .this suggests that an _ a priori _ physical understanding of the dependence on the perturbations can be a very useful guide to an efficient perturbation scheme . d ) in addition to the numerical computation of adm mass , another useful new technical detail was developed in the present work .a method was found for fixing the zero of time in the same manner for both perturbative and numerical waveforms .this fixing of zero had not been important in previous perturbation studies , but was crucial to the comparison of waveforms for initially boosted holes . e ) perturbation analysis in the present paper was carried out for both small separation and small momentum ( `` the close slow limit '' ) .this makes it particularly difficult to unravel the sources of disagreement with numerical results when anomalous cancellations ( like the `` dip '' ) occur .although perturbation results end up in excellent agreement with numerical relativity results , a perturbative analysis based on small , but without small ( especially if it could be compared with numerical results for unsymmetrized data ) , might be useful in improving our understanding of the nature of errors . f ) the current state of the art of numerical relativity presents limitations , both in accuracy and in range of simulations of the codes . as a consequence , we were limited to comparing waveforms which are not really in the radiation zone . this is a dangerous exercise when it comes to second order perturbation theory . in particular , the formula for the radiated power ( from which we extracted the concept of second order waveform ) assumes that one is in the radiation zone .this is true also of the extraction techniques used in the numerical codes to produce a zerilli function as output .
in short : with the current limitations we can not rule out that the discrepancies we see in waveforms and energies might be within the error margins of the numerical results .a general conclusion of this work is that the synergy between numerical results and perturbative calculations will probably be one of the major tools that we will have to use to address with any accuracy the problem of the collision of two black holes in general relativity .we see this taking place right now .we wish to thank peter anninos and steve brandt for help in providing the full numerical results from the ncsa group , and for allowing us to use the potsdam / ncsa / washu code .we are grateful to john baker for several insights concerning the normalization with the adm mass .this work was supported in part by grants nsf - int-9512894 , nsf - phy-9423950 , nsf - phy-9507719 , by funds of the university of córdoba , the university of utah , the pennsylvania state university and its office for minority faculty development , and the eberly family research fund at penn state .we also acknowledge support of conicet and conicor ( argentina ) .jp also acknowledges support from the alfred p. sloan foundation .part of this work was done while con was visiting penn state with support from conicet ( argentina ) .rjg is a member of conicet ( argentina ) . g. cook , ph.d . thesis , university of north carolina at chapel hill , chapel hill , north carolina , 1990 ; g. b. cook , m. w. choptuik , m. r. dubal , phys . rev . d * 47 * , 1471 ( 1993 ) ; g. b. cook , phys . rev . d * 50 * , 5025 ( 1994 ) .
we study the head - on collision of black holes starting from unsymmetrized , brill lindquist type data for black holes with non - vanishing initial linear momentum . evolution of the initial data is carried out with the `` close limit approximation , '' in which small initial separation and momentum are assumed , and second - order perturbation theory is used . we find agreement that is remarkably good , and that in some ways improves with increasing momentum . this work extends a previous study in which second order perturbation calculations were used for momentarily stationary initial data , and another study in which linearized perturbation theory was used for initially moving holes . in addition to supplying answers about the collisions , the present work has revealed several subtle points about the use of higher order perturbation theory , points that did not arise in the previous studies . these points include issues of normalization , and of comparison with numerical simulations , and will be important to subsequent applications of approximation methods for collisions .
it is expected that within the year , a decision will be made as to the composition of the suite of science instruments to be deployed on the next generation space telescope ( ngst ) .it is therefore a particularly good time for a discussion of the relative merits , and appropriate domains of greatest utility for the various 3-d imaging alternatives . there has been , and no doubt will continue to be , a great deal of discussion as to which approach to 3-d imaging is `` the best '' .there is no single correct answer , of course , since each type of instrument has its own strengths and weaknesses .it does not seem to be widely known that , in the limiting case of photon statistical noise dominance , the performance of a 3-d imaging spectrometer based on 2-d detector arrays is the same for all architectures ( bennett et al . 1995 ) , whether tunable filter , dispersive , or fourier transform , provided that the same degrees of freedom are measured . in the following , i will first consider the photon statistics limited case , and show the equivalence between the various architectures .i will then generalize to the performance in the case that detector read noise , dark current , and zodiacal background are included .i will consider specific parameters that are appropriate for the anticipated ngst environment .finally , i will offer a suggestion for a hybrid instrument which combines the best features of all of the 3-d architectures , and offers great potential for best meeting the ngst needs .in comparing between the various options , it is important to assume equivalent detectors . in order to obtain 3-d data using a 2-d detector array, a series of exposures must be made .consider an pixel focal plane array , having no `` gaps '' between the pixel elements .typical frames for a dispersive imaging spectrometer ( ds ) , and a tunable filter imaging spectrometer ( tf ) are indicated schematically in figure [ bennett - fig1 ] . in general, it is of course not necessary for the spatial samples observed by the ds to be contiguous , as implied by the arrangement displayed in figure [ bennett - fig1 ] . nor is it necessary for the spectral samples observed by the tf to be contiguous and non - overlapping , as is also implied by the configuration displayed in figure [ bennett - fig1 ] .indeed , in some cases , non - contiguous spectral sampling is desirable , and the tf system lends itself much more naturally to this mode of operation . on the other hand , for some questions , the ability to observe non - contiguous spatial samples is very important , and the ds approach , such as with a multi - object spectrometer ( mos ) , is better suited for such measurements . 
for the moment , consider the case that the same spatial and spectral samples are covered by both the tf and the ds .assume that the spectral samples represented by the various pixels along the dispersion direction in the ds correspond exactly both in terms of bandwidth and band center to the series of measurements made by the tf system , and that the spatial samples represented by the various pixels in the tf system similarly correspond exactly to the series of spatial measurements made by the ds system .in this case , if the total observation time is divided equally among the spectral samples for the tf case , and for the spatial samples in the ds case , each cell in the 3-d datacube is observed for the same exposure time , and with the same efficiency .clearly the signal to noise performance will be the same for both of these configurations .the relation between the performance of an ideal tunable filter spectrometer with an ideal fourier transform spectrometer is more subtle than that between the tunable filter and the dispersive spectrometer .one simplification , however , is that since the size of the image may be assumed the same for the ft and tf systems , it is only necessary to consider the information content of a single representative detector element obtained via either the tf or the ft system .it is helpful to consider an analogy with the use of the modulation transfer function ( mtf ) for the characterization of imaging systems .consider an `` object '' spectrum having a sinusoidal intensity variation as a function of frequency .also assume that this object spectrum is observed with a tf spectrometer having uniformly spaced filter samples , and that all of the filter samples have an equal transmission bandwidth .the `` image '' spectrum would also have a sinusoidal intensity variation as a function of frequency . in the casethat the period of the sinusoidal intensity variation is much smaller than the characteristic width of the tf spectral channels , the `` image '' spectrum modulations are greatly reduced .furthermore , if the spacing of the tf spectral samples is not sufficiently dense , the period of the modulations in the `` image '' spectrum may be altered by `` aliasing effects '' .an ft spectrometer , at each of a sequence of retardance settings , directly measures the intensity of a particular sinusoidal intensity variation in the object spectrum .the set of such measurements constitutes an interferogram . in order to compare the information content of tf spectra measured in the frequency domain with ft interferograms measured in the transform domain ,it is important to carefully consider the shape of the spectral response of the tf filters , their spacing , and the amount of spectral information content being measured . in a naive approach to a tf system, it would be assumed that the transmission function for each of the tf filters had a `` top hat '' shape , i.e. outside the spectral bandpass of a given filter the transmission would be zero , and within a given bandpass the transmission would be unity . 
viewed in terms of the response to sinusoidal modulations in the `` object '' spectrum , such filters have undesirable ramifications , such as contrast reversal for some modulation periods , and aliasing for others .correspondingly , the most straightforward approach to the acquisition of interferograms by an ft spectrometer , involving equal weighting of each of the retardance measurements , produces effective spectral response functions which have undesirable negative sidelobes .it is important to consider spectral transmission functions which do not have such `` sharp corners '' as the `` top hat '' shape for the tf case , and to consider tapered weighting of the interferograms for the ft case .consider a sequence of measurements of the intensity of an underlying continuous spectral intensity function , dependent on the frequency , that is transmitted through a spectral filter .for a transmission filter centered at , the observed number of photoelectrons would be given by here the units of spectral radiance are photons hz s , the exposure time for the observation is , in units of s , while the transmission function is dimensionless .also , although the integration limits extend to infinity , this is a purely formal convenience , and in this integral , as in others to follow , the integrand will always be limited to a finite range .the frequency variable , , is in units of hz .( it is sometimes convenient to use the wavenumber equivalent of the frequency , defined by , and having dimensions of cycles per cm ) .it is assumed that the quantum efficiency is unity .the peak transmission is assumed to be unity , and the effective width of the transmission filter may be defined by in the case that the spectral radiance function varies slowly over the interval for which is significant , the integral in eq .( 1 ) may be approximated by the variance in the observed number of photoelectrons , in the statistical noise limit is equal to the total number of photoelectrons detected , using the relation between the observed counts and the estimate of the underlying spectral radiance function evaluated at of eq .( 3 ) , for comparison with the ft spectrometer case , for which the noise spectrum is independent of , the dwell time is taken proportional to .( this assumed dwell time variation could of course only be used if the spectrum is known , and would not be applicable to multiple pixels , if they contain different spectra .the impact of varying spectral shape on the comparison between ft and tf spectrometers will be further discussed below . )the constant of proportionality may be determined by requiring that the sum over all channels yields the total observation time , here the factor is the spacing between the tf spectral samples . for this integration time sequencethe spectral variance becomes , with measurements made at the sample spacing this yields .measurements made at a sample spacing much finer than this produce little additional information about the continuum function , since the magnitude of sets a practical limit to the fineness of the resolution recoverable , no matter how fine the sample spacing .the intensity of the interference pattern in a dual output port michelson interferometer , , is a continuous function of the optical path difference , i.e. 
, the retardance , between the two mirrors , related to the continuous spectral intensity detected , , by the integral , the two output ports correspond to the two sign values , with the `` + '' sign corresponding to the output port for which the two interfering beams are in phase at zero optical path difference ( zpd ) , and the `` - '' sign corresponding to the output port with out of phase beams at zpd . as before , the product has units of counts per second .( 9 ) is valid for a perfectly compensated , perfectly efficient beam splitter .real beam splitters have dispersion and are not perfectly efficient , but these complications are easily dealt with in practice .it is convenient to form the sum and difference of the signals from the two output ports of the interferometer .these two quantities are given by the integrals , and note that the summed signal is independent of the optical path difference , and is simply given by the integrated spectral intensity .thus at each retardance setting of the interferometer the full broad band image is measured .this is because , in the absence of absorption losses , every photon entering the interferometer goes to one or the other of the exit ports .the difference signal , at the zero retardance position also becomes equal to the same full band intensity integral .this feature of the summed signal from an ft system suggests that a desirable hybrid of ft and tf may be obtained by simply having a tunable filter placed in the optical train of an imaging ft spectrometer . in this case, the sum of the two output ports of the ft spectrometer provides the unmodulated full intensity of the light that has passed through the tunable filter .in addition , higher resolution spectral imaging may be obtained at the same time . in this hybrid approach, the summed output will be called the `` panchromatic '' output of the ft , while the transform of the difference output will be called the `` spectral '' output of the ft instrument . 
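the port bookkeeping just described is simple to state in code . the following sketch ( our own names ; an ideal , loss - free and perfectly compensated beam splitter is assumed ) forms the retardance - independent `` panchromatic '' channel and the modulated difference interferogram from the two detector streams :

import numpy as np

def split_ports(port_plus, port_minus):
    # port_plus, port_minus: signals from the two interferometer outputs,
    # sampled over a common set of retardance settings.  in the ideal,
    # loss-free case their sum is independent of retardance (the broad-band
    # image), while their difference carries the interference modulation.
    panchromatic = np.asarray(port_plus) + np.asarray(port_minus)
    interferogram = np.asarray(port_plus) - np.asarray(port_minus)
    return panchromatic, interferogram

the difference signal is what is apodized and transformed in the next step .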
in general, it is advantageous to have the dwell time depend on retardance in order to tailor the effective spectral line shape and maximize data collection efficiency .this is typically done for radio astronomy , but is not typically done for laboratory ftir spectroscopy .a typical interferogram would consist of a set of discrete samples of the continuous function , symmetric about the point , each observed with dwell time ..\ ] ] discrete fourier transformation results in periodogram estimates , , at integer multiples of a fixed frequency spacing , approximately related to the continuous function by .\ ] ] the approximate relation between the discrete spectral estimate and the continuous spectral function is accurate to the extent that the continuous spectral function varies sufficiently slowly in the neighborhood of the discrete sample point at .this condition is similar to that used in writing expression ( 3 ) for the tf case .the spectral sample spacing and the interferogram sample spacing are related by .the values are given by the discrete fourier transform , the inverse discrete fourier transform is the normalization used for the fourier transform pair displayed in expression ( 14 ) and ( 15 ) has been chosen to most directly reflect the continuum relation of expression ( 11 ) .it follows from the convolution theorem that the spectral line shape , , for a particular set of dwell times is proportional to a fourier transform , with this normalization , the peak of the resolution function at is equal to unity .this resolution function plays the same role as the transmission function for the tf case .just as for the tf case , an effective width for the resolution function may be defined by summing over all values , although the case of uniform integration times is simplest for the ft spectrometer , and indeed is the most common mode of operation of laboratory ftir instruments , it is not the most efficient .furthermore , for purposes of comparison with a tf spectrometer , the resolution function ( a sinc function ) has negative sidelobes , which can not be realized by a physical transmission filter function .there are many choices for the dwell time series which produce non - negative spectral line shape functions which can be physically realized as transmission filter profiles .one of the simplest is the triangular apodization series , defined by the spectral line shape that results from this weighting is a sinc - squared function . for a real interferogram ,the discrete spectrum is hermitian , i.e. , . while .the point corresponds to the nyquist frequency . for a perfectly compensated beam splitter , with 100% modulation efficiency and no noise ,the interferogram will also be symmetric .a real , symmetric interferogram produces a real , symmetric spectrum .noise in the interferogram is real , and produces a hermitian contribution to the calculated spectrum .noise in the interferogram is not necessarily symmetric , however , and thus contributes to both the real and the imaginary parts of the calculated spectrum . by virtue of the linear relation between interferogram and spectrum , and with the notation that primed quantities represent noise contributions ,the spectral noise is simply the fourier transform of the interferogram noise . 
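as a concrete illustration of the discrete transform and triangular apodization described above , the following python sketch ( our own notation ; the overall normalization is schematic and should be matched to the conventions of the expressions above ) recovers a spectral estimate from a two - sided , symmetric difference interferogram :

import numpy as np

def spectrum_from_interferogram(diff_signal, dx, apodize=True):
    # diff_signal: two-sided difference interferogram, samples j = -n..n
    # dx: retardance step; the returned grid is in cycles per unit of dx
    d = np.asarray(diff_signal, dtype=float)
    n = (len(d) - 1) // 2
    j = np.arange(-n, n + 1)
    # triangular weights give a non-negative sinc-squared line shape;
    # uniform weights give the sinc line shape with negative sidelobes
    w = 1.0 - np.abs(j) / float(n) if apodize else np.ones(2 * n + 1)
    sigma = np.fft.rfftfreq(len(d), d=dx)
    # explicit cosine sum; a symmetric interferogram yields a real spectrum
    phase = 2.0 * np.pi * np.outer(sigma, j * dx)
    spectrum = (np.cos(phase) * (w * d)).sum(axis=1) * dx
    return sigma, spectrum

the triangular weights vanish at the end points of the series , and the corresponding line shape is the non - negative sinc - squared profile that can be realized as a physical transmission filter .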
for a dual ported interferometer , with focal plane detectors having equivalent noise performance characteristics , specifically having a noise variance given by the sum of a read noise term , , plus a statistical noise term , the difference interferogram measurements have the noise characteristics : in the above expressions , the angle brackets represent an ensemble average .it is assumed that the noise is uncorrelated for different samples of the interferogram .the statistical properties of the spectral noise that follow from ( 19 ) and ( 20 ) are since for finite values , the factor in expression ( 21 ) oscillates much more rapidly as a function of than the factor , it may be well approximated by 1/2 . with this approximation , the spectral noise becomes independent of , i.e. , it is `` white '' .the variance of the measured continuum spectrum thus is given by in the case of a 1-sided interferogram , with samples , the variance of the measured continuum spectrum is given by although it may appear that the decrease in the variance has come `` for free '' , there is really no greater information content , since the density of independent spectral samples is only half as great in the spectrum derived from the 1-sided interferogram .the difference in the variance between 1-sided and 2-sided interferograms can be most easily derived ( for perfectly symmetrical interferograms ) by averaging each -n interferogram sample with the + n sample , and computing the fourier transform of the resulting 1-sided interferogram .the statistical noise would be reduced by a factor of for each interferogram sample , and since only half as many readouts would be required , the readout noise would be reduced by a factor of two .expression ( 24 ) , in the absence of read noise , matches expression ( 8 ) obtained for the tf case . expression ( 22 ) similarly matches expression ( 7 ) for the tf case with a sampling interval , as is appropriate for the more dense sampling in frequency space . the interesting fact that the only spectral line shape parameter that enters into the spectral noise for a fourier transform spectrometer is the effective width , , is novel , to this author's knowledge ( e.g. , griffiths & de haseth , 1986 ) .the remarkable equivalence of the noise performance over all of the various types of ideal imaging spectrometers may perhaps be interpreted in terms of an `` information theory '' argument .the zodiacal light produces a substantial limiting background flux for ngst . for a 1 au orbit , thermal emission from dust dominates at wavelengths longer than about 3.5 m , while for wavelengths shorter than this , scattered sunlight produces the dominant background .
an estimate of this background spectrum is displayed in figure [ bennett - fig2 ] .the zodiacal background flux is constant , to good approximation , over the range of frequencies from 3,000 to 10,000 cycles / cm , at a level of approximately photon cm s .detector noise performance levels anticipated for deployment on ngst are displayed in the table below .the impact on the performance of the various 3-d imaging systems generated by these background sources is displayed in the next section .the noise equivalent flux density , , at a particular significance level is derived from the equations by solving for the flux which produces the given significance level .the nefd for observations in the k band at 2.2 m ( as one example ) at the 10 level , for a variety of imaging spectrometer options , is displayed in figure [ bennett - fig3 ] as a function of spectral resolution . from these curves , for a particular problem of interest , it is easy to select the optimum instrumental configuration . at the lowest spectral resolution , all of the 3-d instruments converge to the performance of an =5 , band camera . at the highest spectral resolution , the ds has the best performance for spectroscopy , although only for the small number of objects that may be contained `` within the slit '' .this fact is the basis for the current pre - eminence of multi - object spectrometers and integral field units in high resolution astronomical spectroscopy . for the purpose of imaging in a very narrow , single emission line band , the tf provides an performance equivalent to that of the ds , but for every pixel in the field of view . for the purpose of obtaining complete spectra for every pixel in the field of view , the ft instrument substantially outperforms the tf or the `` mapping ds '' ( whose performance becomes essentially equivalent to the tf ) .the point of equivalence between the imaging ft and the ds comes at the point for which the number of settings of the ds is equal to the square of the ratio in performance between the single setting ds and the imaging ft . for any resolution , the imaging ft instrument has the advantage that not only are spectra obtained for every pixel in the field of view , but that very deep k - band imaging ( in this example , but it could be , , , etc . or the entire 0.6 - 5.5 m range ) is simultaneously acquired . in many of the design reference missions for ngst , the data for both deep imaging and spectroscopy may be acquired simultaneously .the fact that such imaging is produced for every resolution setting of the ft instrument is indicated in figure [ bennett - fig3 ] by the lowest curve labeled `` panchromatic ft '' . at the highest spectral resolution , the relatively strong signals required imply that for many fields of interest to ngst , the angular density of observable objects will be small enough that at most one object is expected per field of view .in this situation , it is not helpful to obtain spectra for every pixel in the field of view , and the spatial multiplexing of the imaging ft is not useful . a very interesting hybrid approach ( e.g. , beer 1992 ) is possible , however , which takes advantage of the best features of all of the 3-d imaging approaches .this is the combination of an objective prism with an imaging ft spectrometer .a relatively modest dispersion across one dimension of the image plane serves to reduce the spectral bandpass acceptance that is involved in the noise term for the ft spectrometer .
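as an aside , the sensitivity figures quoted above follow from inverting a significance relation ; a schematic form is significance = s / sqrt ( s + b + r ) , with s the accumulated source charge , b the background charge and r the total read - noise variance . the exact equations behind figure [ bennett - fig3 ] are not reproduced here , so the python sketch below is indicative only , and the conversion of counts to a physical flux density ( collecting area , bandwidth , throughput , exposure time ) is omitted :

import numpy as np

def source_counts_at_significance(background_counts, read_var_total, significance=10.0):
    # solve  significance = S / sqrt(S + B + R)  for the source counts S;
    # this is the positive root of  S**2 - significance**2 * S - significance**2 * (B + R) = 0
    s2 = significance**2
    return 0.5 * (s2 + np.sqrt(s2**2 + 4.0 * s2 * (background_counts + read_var_total)))

dividing the returned counts by the effective collecting power and integration time , channel by channel , is how curves of the kind shown in figure [ bennett - fig3 ] are assembled in spirit .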
with a slit at an image plane, the `` panchromatic '' output of the ft spectrometer would yield the same results as an ordinary prism spectrometer , while the fourier transformed interferograms would enable much higher spectral resolution at much reduced nefd .the curve labeled `` dispersed ft '' in figure [ bennett - fig3 ] corresponds to the assumption that a prism of dispersion equal to that of caf is placed in the collimated space of an imaging ft , and that the slit width is equal to one pixel . for objects which have much higher intensity than their surroundings , slit - less objective prism style measurements are also possible .there are slight displacements of the curves for the various spectrometer , and imaging spectrometer configurations , depending on the choices for the detector performance parameters , and system efficiency values .using the ngst `` goal '' detector performance parameters instead of the `` current '' performance values slightly improves the sensitivity of the tf , the ds , and the dispersed ft , but produce very little change in the imaging ft case . on the other hand , using grating efficiencies closer to those typical of ground based telescopes , lowers the ds curve , but not the tf , the imaging ft , or the dispersed ft curves .the mapping ds curve does not take into account any in - efficiencies with the precision re - pointing between observations . in conclusion , the ability of a single instrument concept , composed of a filter wheel , programmable slit , dispersive prism , and michelson interferometer , to deliver the performance of a wide field camera , the performance of a moderate resolution , full field imaging spectrometer , and the performance of a high resolution, limited field spectrometer seems to make this choice nearly obligatory for ngst .this work was performed under the auspices of the u.s .department of energy under contract no w-7405-eng-48 .i thank my ifirs colleagues for many stimulating discussions and astronomical tutoring : j. r. graham , m. abrams , j. carr , k. cook , a. dey , r. hertel , n. macoy , s. morris , j. najita , a. villemaire , e. wishnow , and r.wurtz .i also thank j. mather for the provocative suggestion to consider the dispersed ft option .graham , m. abrams , c.l .bennett , j. carr , k. cook , a. dey , j. najita and e. wishnow , `` the performance and scientific rationale for an infrared imaging fourier transform spectrograph on a large space telescope '' , pasp , 110 , 1205 , ( 1998 ) .r. griffiths and j. a. de haseth , `` fourier transform infrared spectroscopy '' , j. wiley & sons , n.y . , ( 1986 ) , and many other references , do treat the snr performance of fourier transform spectrometers , but only in terms of a relatively poorly defined resolution parameter .
currently three imaging spectrometer architectures , tunable filter , dispersive , and fourier transform , are viable for imaging the universe in three dimensions . there are domains of greatest utility for each of these architectures . the optimum choice among the various alternative architectures is dependent on the nature of the desired observations , the maturity of the relevant technology , and the character of the backgrounds . the domain appropriate for each of the alternatives is delineated ; both for instruments having ideal performance as well as for instrumentation based on currently available technology . the environment and science objectives for the next generation space telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives .
interacting particle or agent - based techniques are a central method in the physics of complex systems .this methodology heavily relies on the dynamics of the agents or the interactions between the agents , as defined on a microscopic level . in this respect , this approach is orthogonal to the traditional game theoretic framework that is based on the global utility or function of the system , as defined on a macroscopic level .such physics - inspired approaches , where agents are treated as particles in a physical system , have recently led to quantitative predictions in a wide variety of social and economic systems .current areas of interest include the distribution of income and wealth , opinion dynamics , the propagation of innovation and ideas , and the emergence of social hierarchies . in the latter example , most relevant to this study , competition is the mechanism responsible for the emergence of disparate social classes in human and animal communities .a recently introduced competition process is based on two - player competitions where the stronger player wins with a fixed probability and the weaker player wins with a smaller probability .this theory has proved to be useful for understanding major team sports and for analysis of game results data . in this study, we consider multi - player games and address the situation where the outcome of a game is completely deterministic . in our model ,a large number of players participate in the game , and in each competition , the ranked player always wins .the number of wins measures the strength of a player .furthermore , the distribution of the number of wins characterizes the nature of the standings .we address the time - evolution of this distribution using the rate equation approach , and then , solve for the long - time asymptotic behavior using scaling techniques .our main result is that there are three types of standings .when the best player wins , , there is a clear notion of player strength ; the higher the ranking the larger the winning rate .when an intermediate player wins , , the standings have two tiers .players in the lower tier are well separated , but players in the upper - tier are all equally strong .when the weakest player wins , , the lower tier disappears and all of the players are equal in strength . in this sense ,when the best player wins , the environment is most competitive , and when the worst player wins it is the least competitive .the rest of this paper is organized as follows .we introduce the model in section ii . in section iii, we analyze in detail three - player competitions , addressing situations where the best , intermediate , and worst player wins , in order .we then consider games with an arbitrary number of players and pay special attention to the large- limit in section iv .we conclude in section v.our system consists of players that compete against each other . in each competition players are randomly chosen from the total pool of players .the winner is decided based upon the ranking : the ranked player always wins the game [ fig . [ ill ] ] .let be the number of wins of the ranked player in the competition , i.e. , , then tie - breakers are decided by a coin - toss , i.e. , when two or more players are tied , their relative ranking is determined in a completely random fashion .initially , players start with no wins , . and .,scaledwidth=37.0% ]these competition rules are relevant in a wide variety of contexts . in sports competitions ,the strongest player often emerges as the winner . 
in social contexts and especially in politics , being a centrist often pays off , and furthermore , there are auctions where the second highest bidder wins . finally , identifying wins with financial assets , the situation where the weakest player wins mimics a strong welfare system where the rich support the poor .we set the competition rate such that the number of competitions in a unit time equals the total number of players .thence , each player participates in games per unit time , and furthermore , the average number of wins simply equals time at large times , it is natural to analyze the winning rate , that is , the number of wins normalized by time , .similarly , from our definition of the competition rate , the average winning rate equals one our goal is to characterize how the number of wins , or alternatively , the winning rate are distributed in the long time limit .we note that since the players are randomly chosen in each competition , the number of games played by a given player is a fluctuating quantity . nevertheless ,since this process is completely random , fluctuations in the number of games played by a given player scale as the square - root of time , and thus , these fluctuations become irrelevant in the long time limit .also , we consider the thermodynamic limit , .we first analyze the three player case , , because it nicely demonstrates the full spectrum of possibilities .we detail the three scenarios where the best , intermediate , and worst , players win in order .let us first analyze the case where the best player wins .that is , if the number of wins of the three players are , then the game outcome is as follows let be the probability distribution of players with wins at time .this distribution is properly normalized , , and it evolves according to the nonlinear difference - differential equation here , we used the cumulative distributions and of players with fitness smaller than and larger than , respectively .the two cumulative distributions are of course related , .the first pair of terms accounts for games where it is unambiguous who the top player is .the next pair accounts for two - way ties for first , and the last pair for three way ties .each pair of terms contains a gain term and a loss term that differ by a simple index shift. the binomial coefficients account for the number of distinct ways there are to choose the players .for example , there are ways to choose the top player in the first case .this master equation should be solved subject to the initial condition and the boundary condition .one can verify by summing the equations that the total probability is conserved , and that the average fitness evolves as in ( [ kav ] ) , . for theoretical analysis, it is convenient to study the cumulative distribution . summing the rate equations ( [ re - f - a ] ), we obtain closed equations for the cumulative distribution here , we used .this master equation is subject to the initial condition and the boundary condition .we are interested in the long time limit .since the number of wins is expected to grow linearly with time , , we may treat the number of wins as a continuous variable , .asymptotically , since and , etc . , second- and higher - order terms become negligible compared with the first order terms . 
to leading order, the cumulative distribution obeys the following partial differential equation from dimensional analysis of this equation , we anticipate that the cumulative distribution obeys the scaling form with the boundary conditions and .in other words , instead of concentrating on the number of wins , we focus on the winning rate . in the long time limit, the cumulative distribution of winning rates becomes stationary .of course , the actual distribution of winning rates also becomes stationary , and it is related to the distribution of the number of wins by the scaling transformation with . since the average winning rate equals one ( [ xav ] ) , the distribution of winning rates must satisfy substituting the definition ( [ phi - def ] ) into the master equation ( [ f - eq - a ] ) , the stationary distribution satisfies =0.\ ] ] there are two solutions : ( i ) the constant solution , , and ( ii ) the algebraic solution .invoking the boundary condition we find [ fig .[ n3m1-fig ] ] one can verify that this stationary distribution satisfies the constraint ( [ constraint ] ) so that the average winning rate equals one .this result generalizes the linear stationary distribution found for two player games . ) for the case , .,scaledwidth=35.0% ] initially , all the players are identical , but by the random competition process , some players end up at the top of the standings and some at the bottom .this directly follows from the fact that the distribution of winning rates is nontrivial . also , since as , the distribution of winning - rate is nonuniform and there are many more players with very low winning rates . when the number of players is finite , a clear ranking emerges , and every player wins at a different rate .moreover , after a transient regime , the rankings do not change with time [ fig .[ kta ] ] . versus time when the best player wins .shown are results of simulations with 20 players.,scaledwidth=40.0% ] we note that in our scaling analysis , situations where there is a two- or three - way tie for first do not contribute .this is the case because the number of wins grows linearly with time and therefore , the probability of finding two players with the same number of wins can be neglected .such terms do affect how the distribution of the number of wins approaches a stationary form , but they do not affect the final form of the stationary distribution .next , we address the case where the intermediate player wins , now , there are four terms in the master equation the first pair of terms accounts for situations where there are no ties and then the combinatorial prefactor is a product of the number of ways to choose the intermediate player times the number of ways to choose the best player .the next two pairs of terms account for situations where there is a two - way tie for best and worst , respectively .again , the last pair of terms accounts for three - way ties .these equations conserve the total probability , , and they are also consistent with ( [ kav ] ) . summing the rate equations ( [ re - f - b ] ) , we obtain closed equations for the cumulative distribution for clarity , we use both of the cumulative distributions , but note that this equation is definitely closed in because of the relation . taking the continuum limit and keeping only first - order derivatives , the cumulative distribution obeys the following partial differential equation with the boundary conditions and . 
substituting the definition of the stationary distribution of winning rates ( [ phi - def ] ) into this partial differential equation , we arrive at =0,\ ] ] an equation that is subject to the boundary conditions and .there are two solutions : ( i ) the constant solution , , and ( ii ) the root of the second - order polynomial . invoking the boundary conditions , we conclude [ fig .[ n3m2-fig ] ] as the nontrivial solution is bounded , the cumulative distribution must have a discontinuity .we have implicitly assumed that this discontinuity is located at . ) for , .,scaledwidth=35.0% ] the location of this discontinuity is dictated by the average number of wins constraint . substituting the stationary distribution ( [ phi - sol - b ] ) into ( [ constraint ] ) then .\end{aligned}\ ] ] in writing this equality , we utilized the fact that the stationary distribution has a discontinuity at and that the size of this discontinuity is .integrating by parts , we obtain an implicit equation for the location of the discontinuity substituting the stationary solution ( [ phi - sol - b ] ) into this equation and performing the integration , we find after several manipulations that the location of the singularity satisfies the cubic equation .the location of the discontinuity is therefore this completes the solution ( [ phi - sol - b ] ) for the scaling function .the size of the discontinuity follows from . versus time when the intermediate player wins . shownare results of simulations with 20 players.,scaledwidth=40.0% ] there is an alternative way to find the location of the discontinuity .let us transform the integration over into an integration over using the equality this transforms the equation for the location of the discontinuity ( [ x0-eq ] ) into an equation for the size of the jump substituting we arrive at the cubic equation for the variable , . the relevant solution is , from which we conclude . for three - player games , there is no particular advantage for either of the two approaches : both ( [ x0-eq ] ) and ( [ phi0-eq ] ) involve cubic polynomials .however , in general , the latter approach is superior because it does not require an explicit solution for .the scaling function corresponding to the win - number distribution is therefore where denotes the kronecker delta function .the win - number distribution contains two components .the first is a nontrivial distribution of players with winning rate and the second reflects that a finite fraction of the players have the maximal winning rate .thus , the standings have a two - tier structure .players in the lower tier have different strengths and there is a clear differentiation among them [ fig . [ ktb ] ] .players in the upper - tier are essentially equal in strength as they all win with the same rate .a fraction belongs to the lower tier and a complementary fraction belongs to the upper tier .interestingly , the upper - tier has the form of a condensate .we note that a condensate , located at the bottom , rather than at the top as is the case here , was found in the diversity model in ref . 
.last , we address the case where the worst player wins here , the distribution of the number of wins evolves according to this equation is obtained from ( [ re - f - a ] ) simply by replacing the cumulative distribution with .the closed equation for the cumulative distribution is now in the continuum limit , this equation becomes , and consequently , the stationary distribution satisfies =0.\ ] ] now , there is only one solution , the constant , and because of the boundary conditions and , the stationary distribution is a step function : for and for . in other words , .substituting this form into the condition ( [ constraint ] ) , the location of the discontinuity is simply , and therefore [ fig .[ n3m3-fig ] ] where is the heaviside step function .when the worst player wins , the standings no longer contain a lower - tier : they consist only of an upper - tier where all players have the same winning rate , . ) for the case .,scaledwidth=35.0% ]let us now consider the most general case where there are players and the ranked player wins as in ( [ rule ] ) .it is straightforward to generalize the rate equations for the cumulative distribution . repeating the scaling analysis above , eqs .( [ phi - eq - a ] ) and ( [ phi - eq - b ] ) for the stationary distribution ( [ phi - def ] ) generalize as follows : =0.\ ] ] the constant equals the number of ways to choose the ranked player times the number of ways to choose the higher ranked players again , there are two solutions : ( i ) the constant solution , , and ( ii ) the root of the - order polynomial we now analyze the three cases where the best , an intermediate , and the worst player win , in order. * best player wins ( ) : * in this case , the stationary distribution can be calculated analytically , one can verify that this solution is consistent with ( [ xav ] ) .we see that in general , when the best player wins there is no discontinuity and . as for three - player games ,the standings consist of a single tier where some players rank high and some rank low .also , the winning rate of the top players equals the number of players , . in general, the distribution of the number of wins is algebraic . *intermediate player wins ( ) : * based on the behavior for three player games , we expect here , is the solution of ( [ phi - eq - e ] ) .numerical simulations confirm this behavior [ fig .[ n4n10-fig ] ] .thus , we conclude that in general , there are two tiers . in the upper tier ,all players have the same winning rate , while in the lower tier different players win at different rates .generally , a finite fraction belongs to the lower tier and the complementary fraction belongs to the upper tier .our monte carlo simulations are performed by simply mimicking the competition process .the system consists of a large number of players , all starting with no wins . in each elemental step , players are chosen and ranked and the ranked player is awarded a win ( tied players are ranked in a random fashion ) .time is augmented by after each such step .this elemental step is then repeated . for ( top ) and ( bottom )shown are monte carlo simulation results with particles at time .the circles are the theoretical predictions for the maximal winning rate and the size of the lower tier .,title="fig:",scaledwidth=40.0% ] for ( top ) and ( bottom ) . 
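before turning to the general case , the three regimes found above are easy to check numerically . the sketch below ( python , with our own variable names ) implements the elemental competition step of the model of section ii : each step draws a group of players , ranks them by their current number of wins with random tie - breaking , awards a win to the player of the chosen rank , and advances time by the inverse number of players so that the competition rate matches the definition used above :

import numpy as np

def simulate(num_players, n_per_game, m_rank, t_final, seed=0):
    # num_players players start with zero wins; rank 1 denotes the strongest
    # (most wins) of the n_per_game players drawn in a given competition
    rng = np.random.default_rng(seed)
    wins = np.zeros(num_players, dtype=np.int64)
    for _ in range(int(t_final * num_players)):
        picked = rng.choice(num_players, size=n_per_game, replace=False)
        # sort by wins in descending order; the random key breaks ties
        order = picked[np.lexsort((rng.random(n_per_game), -wins[picked]))]
        wins[order[m_rank - 1]] += 1
    # winning rates; their mean equals 1 up to rounding of the step count
    return wins / t_final

# three players per game: m_rank = 1, 2, 3 reproduce the one-tier, two-tier
# and equal-strength standings discussed above
rates = simulate(num_players=2000, n_per_game=3, m_rank=2, t_final=50.0)
print(rates.mean(), np.sort(rates)[-5:])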
shownare monte carlo simulation results with particles at time .the circles are the theoretical predictions for the maximal winning rate and the size of the lower tier .,title="fig:",scaledwidth=40.0% ] the parameters and characterize two important properties : the maximal winning rate and the size of each tier .thus , we focus on the behavior of these two parameters and pay special attention to the large- limit . substituting the stationary distribution ( [ phi - sol - e ] ) into the constraint ( [ constraint ] ), the maximal winning rate follows from the very same eq .( [ x0-eq ] ) .similarly , the size of the lower tier follows from eq .( [ phi0-eq ] ) . in this case , the latter is a polynomial of degree , so numerically , one solves first for and then uses ( [ phi - eq - e ] ) to obtain .we verified these theoretical predictions for the cases and using monte carlo simulations [ fig .[ n4n10-fig ] ] . for completeness , we mention that it is possible to rewrite eq .( [ phi0-eq ] ) in a compact form . using the definition of the beta function we relate the definite integral above with the combinatorial constant in ( [ constant ] ) . substituting the governing equation for the stationary distribution ( [ phi - eq - e ] ) into the equation for the size of the lower - tier ( [ phi0-eq ] )gives using the relation ( [ relation ] ) , we arrive at a convenient equation for the size of the lower tier this is a polynomial of degree .let us consider the limit and with the ratio kept constant .for example , the case corresponds to the situation where the median player is the winner . to solve the governing equation for the stationary distribution in the large- limit, we estimate the combinatorial constant using eq .( [ constant ] ) and the stirling formula .( [ phi - eq - e ] ) becomes taking the power on both sides of this equation , and then the limit , we arrive at the very simple equation , by inspection , the solution is constant , . using and employing the condition yields the location of the condensate this result is consistent with the expected behaviors as and ( see the worst player wins discussion below ) .therefore , the stationary distribution contains two steps when the number of players participating in each game diverges [ fig .[ limit - fig ] ] the stationary distribution corresponding to the number of wins therefore consists of two delta - functions : .thus , as the number of players participating in a game grows , the winning rate of players in the lower tier diminishes , and eventually , they become indistinguishable . for example, for , the quantity is roughly linear in and the maximal winning rate is roughly proportional to [ fig .[ n4n10-fig ] ] .nevertheless , for moderate there are still significant deviations from the limiting asymptotic behavior .a refined asymptotic analysis shows that and that .therefore , the convergence is slow and nonuniform ( i.e. , -dependent ) . despite the slow convergence ,the infinite- limit is very instructive as it shows that the structure of the lower - tier becomes trivial as the number of players in a game becomes very large .it also shows that the size of the jump becomes proportional to the rank of the winning player .limit . from eq .( [ limit ] ) , the points all lie on the curve .,scaledwidth=35.0% ] it is also possible to analytically obtain the stationary distribution in the limit of small winning rates , . since the cumulative distribution is small , , the governing equation ( [ phi - eq - e ] ) can be approximated by . 
as a result , the cumulative distribution vanishes algebraically as .this behavior holds as long as .* worst player wins ( ) : * in this case , the roots of the polynomial ( [ phi - eq - e ] ) are not physical because they correspond to either monotonically increasing solutions or they are larger than unity .thus , the only solution is a constant and following the same reasoning as above we conclude that the stationary distribution is the step function ( [ phi - sol - c ] ) .again , the upper tier disappears and all players have the same winning rate . in other words , there is very strong parity .we note that while the winning rate of all players approaches the same value , there are still small differences between players . based on the behavior for two - player games , we expect that the distribution of the number of wins follows a traveling wave form as .as the differences among the players are small , the ranking continually evolves with time .such analysis is beyond the scope of the approach above .nevertheless , the dependence on the number of players may be quite interesting .let us imagine that wins represent wealth .then , the strong players are the rich and the the weak players are the poor . competitions in which the weakest player wins mimic a strong welfare mechanism where the poor benefits from interactions with the rich . in such a scenario ,social inequalities are small .in conclusion , we have studied multi - player games where the winner is decided deterministically based upon the ranking .we focused on the long time limit where situations with two or more tied players are generally irrelevant .we analyzed the stationary distribution of winning rates using scaling analysis of the nonlinear master equations .the shape of the stationary distribution reflects three qualitatively different types of behavior . when the best player wins , there are clear differences between the players as they advance at different rates .when an intermediate player wins , the standings are organized into two tiers .the upper tier has the form of a condensate with all of the top players winning at the same rate ; in contrast , the lower tier players win at different rates .interestingly , the same qualitative behavior emerges when the second player wins as when the second to last player wins . when the worst player wins , all of the players are equal in strength. the behavior in the limit of an infinite number of players greatly simplifies . in this limit, the change from upper tier only standings to lower tier only standings occurs in a continuous fashion .moreover , the size of the upper tier is simply proportional to the rank of the winner while the maximal winning rate is inversely proportional to this parameter . in the context of sports competitions , these results are consistent with our intuition .we view standings that clearly differentiate the players as a competitive environment .then , having the best player win results in the most competitive environment , while having the worst player win leads to the least competitive environment . as the rank of the winning playeris varied from best to worst , the environment is gradually changed from highly competitive to non - competitive .this is the case because the size of the competitive tier decreases as the strength of the winning player declines . 
in the context of social dynamics , these results have very clear implications : they suggest that a welfare strategy that aims to eliminate social hierarchies must be based on supporting the very poor as all players become equal when the weakest benefits from competitions .our asymptotic analysis focuses on the most basic characteristic , the winning rate .however , there are interesting questions that may be asked when tiers of equal - strength players emerge . for example, the structure of the upper tier can be further explored by characterizing relative fluctuations in the strengths of the top players .similarly , the dynamical evolution of the ranking when all players are equally strong may be interesting as well .
we analyze the dynamics of competitions with a large number of players . in our model , players compete against each other and the winner is decided based on the standings : in each competition , the ranked player wins . we solve for the long time limit of the distribution of the number of wins for all and and find three different scenarios . when the best player wins , the standings are most competitive as there is one - tier with a clear differentiation between strong and weak players . when an intermediate player wins , the standings are two - tier with equally - strong players in the top tier and clearly - separated players in the lower tier . when the worst player wins , the standings are least competitive as there is one tier in which all of the players are equal . this behavior is understood via scaling analysis of the nonlinear evolution equations .
since almost all we know about astronomical objects is inferred from the radiation which they emit , the theory of radiation transport often has a key role in testing our understanding of astrophysics .although there are a variety of competitive approaches used for radiative transfer simulations , monte carlo methods are particularly well - suited for many modern astrophysical applications . in the monte carlo approach , the radiation field is discretized into quanta which represent bundles of photons . by propagating these quanta through a model of an astrophysical object , and simulating their interactions , synthetic spectra and light curves can be obtained .this method has the particular advantage that matter - radiation interactions are always treated locally , meaning that multi - dimensionality , time - dependence and large - scale velocity fields can all be incorporated readily .here we describe a new radiative transfer code ( artis ; kromer & sim 2009 ) which has been developed for application to type ia supernova ( sn ia ) explosion models .the code is based on a monte carlo _ indivisible packet _ scheme described by and was developed from the grey radiative transfer code of .the code is designed to simulate time - dependent , three - dimensional radiation transport in supernova ejecta during the phase of homologous expansion .the optical display of sne ia is powered by the energy released in the radioactive decay of isotopes synthesized during the explosion , predominantly and its daughter nucleus .therefore , the code starts from an initial distribution of in the ejecta and then the subsequent radioactive decays are followed .these decays initially give rise to gamma - ray photons which , at least for early epochs when the ejecta are optically thick , are rapidly down - scattered and absorbed by photoelectric processes .this heats the ejecta .the subsequent re - emission of ultraviolet , optical and infrared radiation by the ejecta is then simulated to obtain spectra and light curves . the code does not assume local thermodynamic equilibrium ( lte ) but includes an approximate non - lte ( nlte ) treatment of ionization and a detailed approach to line scattering and fluorescence .for a complete description of the code see and .to test the artis code we have performed a variety of radiative transfer simulations for the well - known w7 sn ia explosion model .although this one - dimensional model is considerably simpler than modern three - dimensional explosion models ( e.g. röpke & niemeyer 2007 ; röpke et al .2007 ) , it is known to predict spectra and light curves in reasonable agreement with observations ( e.g. jeffery et al . 1992 ; höflich 1995 ; nugent et al .1997 ; lentz et al . 2001 ; baron et al . 2006 ; kasen et al . 2006 ) and therefore provides a realistic test model for our radiative transfer simulations .
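the energy input that the code follows comes from the standard parent - daughter decay chain ( the nickel - cobalt chain for sne ia ) . the following python sketch evaluates the bateman solution for the instantaneous energy release rate ; the lifetimes and decay energies are passed in as parameters , and the numbers quoted in the comment are indicative only , not the values used by artis :

import numpy as np

def decay_energy_rate(t, n_parent0, tau_parent, tau_daughter, q_parent, q_daughter):
    # bateman solution for parent -> daughter -> stable, with the daughter
    # initially absent; tau_* are mean lifetimes, q_* energies per decay
    t = np.asarray(t, dtype=float)
    lam_p, lam_d = 1.0 / tau_parent, 1.0 / tau_daughter
    n_p = n_parent0 * np.exp(-lam_p * t)
    n_d = n_parent0 * lam_p / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))
    return q_parent * lam_p * n_p + q_daughter * lam_d * n_d

# indicative numbers for the nickel -> cobalt chain: mean lifetimes of roughly
# 8.8 and 111 days, and of order 1.7 and 3.7 MeV released per decay
rate = decay_energy_rate(np.linspace(2.0, 80.0, 40), 1.0, 8.8, 111.0, 1.7, 3.7)

in the monte carlo scheme this energy is not deposited smoothly but is carried by discrete packets , whose decay times can be sampled from the same exponential laws .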
for our w7 test simulations , the explosion model properties ( ejecta density , composition and initial distribution of radioactive isotopes ) were mapped to a homologously expanding 50 cartesian grid and the expansion followed for 100 logarithmically - spaced time steps spanning the time interval from 2 to 80 days after explosion .the assumption of homologous expansion in the w7 model has been tested using the stella code in a manner similar to that described by .that test showed that the density structure is affected only slightly during the first weak after explosions and that thereafter homologous expansion becomes a very good approximation .the propagation of a total of five million monte carlo energy packet quanta were simulated from their initial release by the radioactive decay of either or until , after multiple radiation - matter interactions , they escaped from the computational domain as bundles of ultraviolet , optical or infrared ( uvoir ) photons .the escaping packets were then binned by time of escape and photon frequency to construct time - dependent spectra for the model .since the w7 model is one - dimensional , it is unnecessary to bin the escaping quanta based on direction of scape but this can be readily done to obtain viewing - angle dependent spectra for multi - dimensional models .figure 1 shows a sequence of three optical spectral snapshots obtained with the artis code for the w7 model .a convenient property of monte carlo radiative transfer simulations is the ease with which the propagation histories of the monte carlo quanta can be used to understand the manner in which the spectra features are formed . for each escaping quantum, artis records the details of its last radiation - matter interaction with either a bound - bound , bound - free or free - free event .this information can then be used to identify the processes responsible for features in the spectra . in figure 1 , the areas above and below the total spectrum are shaded to indicate the atomic number of the elements which last affected the escaping quanta for each wavelength bin .this makes it easy to understand how the spectral features are formed and one can readily see how the contributions to the spectra evolve in time .for example , in figure 1 it is clear from the shading that the early phase spectra are strongly affected by intermediate - mass elements such as si and s while the later time spectra are dominated by elements of the iron group .this is a well - known consequence of the layered structure of the w7 model . at early times , the outer layers( which are rich in the products of partial nuclear burning ) are optically thick .as time passes , however , the expansion of the ejecta causes these layers to become optically thin such that radiation escapes directly from the inner region which is dominated by iron group material . for comparison , the observed spectra of a fairly normal sn ia ( sn 1994d , patat et al .1996 ) are over - plotted for the same epochs .the overall flux distribution and general properties of the observed spectral features ( e.g. the characteristic si ii line at 6355 and the ca ii infrared triplet at 8549 ) are reasonably well - reproduced by the model , suggesting that the numerical simulations capture much of the necessary physics required for the interpretation of the observations .there are , however , some clear discrepancies between the observational data and the synthetic spectra ( e.g. 
excess emission below in the early - time model spectra and an extra emission feature around in the later time spectrum ) but some disagreement is expected since the w7 model has not been fine - tuned to match observations of any specific sn in detail .multi - dimensional radiative transfer simulations for sne ia require significant computational resources and approximations must currently be made to make the simulations feasible .two particularly important issues are the completeness of the atomic data set used and the sensitivity of the synthetic observables to the treatment of the plasma conditions ( particularly the excitation / ionization state ) .we have therefore used the w7 model as a standard case to investigate some of these effects with artis ; photometric light curves computed from several of our numerical simulations are shown in figure 2 .we also compared our results with those of other sn radiative transfer codes to quantify the extent to which different numerical methods affect the synthetic observables . using the sedona code, showed that realistic atomic data sets containing many millions of spectral lines are required to accurately model radiation transport in sn ia .this is particularly true at nir ( near - infrared ) wavelengths where the spectrum can be significantly affected by fluorescent emission in forests of weak lines of iron - group elements .this is confirmed by our simulations with artis .figure 2 shows light curves computed with our approximate nlte treatment of ionization but adopting different atomic data sets drawn from the line lists computed by kurucz & bell ( 1995 ) and kurucz ( 2006 ) .when we use a reasonably large atomic line list ( lines ) we obtain nir light curves which are significantly brighter around their first maximum than with an atomic data set restricted to only lines .the optical light curves are much less affected although there is a tendency for the u , b and v bands to be slightly fainter owing to the flux redistributed from these bands to the nir .we note that the light curves computed with the larger atomic data set are also in quantitatively better agreement with observations ( see figure 2 ) . in figure 2we also show light curves computed with a pure lte treatment of the excitation / ionization state of the ejecta .these light curves were obtained using the smaller atomic data set ( lines ) mentioned above . for early times( up to around 30 days in the optical bands and 20 days in the nir bands ) , these light curves agree well with those obtained with our nlte implementation .this is expected since lte should be a good approximation when the radiation is strongly trapped . at later times , however , departures from lte become strong and our nlte treatment of ionization predicts significantly higher ionization states throughout much of the ejecta .this directly affects the observables causing the u , b and v band to remain significantly brighter than suggested by lte .this illustrates the sensitivity of the observations to the ejecta properties and highlights the need for a realistic treatment of the plasma conditions if detailed comparisons to observations are to be made .the agreement between the artis light curves and those obtained by sedona and stella is encouragingly good ( figure 2 ) . 
compared to sedona ,the artis light curves computed with the larger atomic data set agree very well in all bands up to several weeks after maximum light .the difference which manifest at later times are most likely attributable to difference in the manner in which the codes treat the plasma conditions ( see kromer & sim 2009 for further discussion ) .the current version of stella adopts an lte treatment of the plasma conditions with photon redistribution modelled using an approximate source function .as expected , its light curves are similar to those obtained with the lte implementation in artis .the stella light curves shown here were computed with an extended atomic data set containing 2.6 10 lines .this atomic data set improves aspects of the comparison with stella relative to that shown in figure 7 of kromer & sim ( 2009 ; the stella curves there used only 1.6 10 lines ) . in particular , the initial rise of the stella light curves is faster .our results obtained from the w7 model suggest that our radiative transfer simulations are able to produce realistic synthetic spectra and light curves as required for the testing of sne ia explosion models .we have already used the artis code to investigate simple aspherical toy models and will in the near future use it to compute synthetic observables for state - of - the - art hydrodynamical explosions models in order that their predictions can be directly tested against observational data .baron e. , bongard s. , branch d. , hauschildt p. h. , 2006 , apj , 645 , 480 bessell m. s. , 1990 , pasp , 102 , 1181 bessell m. s. , brett j. m. , 1988 , pasp , 100 , 1134 blinnikov s. , sorokina e. , 2002 , preprint ( arxiv : astro - ph/0212567 ) blinnikov s. i. , rpke f. k. , sorokina e. i. , gieseler m. , reinecke m. , travaglio c. , hillebrandt w. , stritzinger m. , 2006 , a&a , 453 , 229 hflich p. , 1995 , apj , 443 jeffery d. j. , leibundgut b. , kirshner r. p. , benetti s. , branch d. , sonneborn g. , 1992 , apj , 397 , 304 kasen d. , 2006 , apj , 649 , 939 kasen d. , thomas r. c. , nugent p. , 2006 ,apj , 651 , 366 , m. & sim , s. a. , mnras , 2009 , 398 , 1809 , m. , sim , s. a. & hillebrandt w. , 2009 , in aip conf .1111 , probing stellar populations out to the distant universe : cefalu 2008 , ed .g. giobbi , a. tornambe , g. raimondo , m. limongi , l. a. antonelli , n. menci , & e. brocato ( new york : aip ) , 277 krisciunas k. et al . , 2003 , aj , 125 , 166 kurucz r. , bell b. , 1995 , atomic line data , kurucz cd - rom no .smithsonian astrophysical observatory , cambridge , ma kurucz r. l. , 2006 , in eas publ .18,radiative transfer and applications to very large telescopes , ed .p. stee ( edp science : les ulis ) , p.129 lentz e. j. , baron e. , branch d. , hauschildt p. h. , 2001 , apj , 557 , 266 lucy l. b. , 2002 , a&a , 384 , 725 lucy l. b. , 2003 , a&a , 403 , 261 lucy l. b. , 2005 , a&a , 429 , 19 nomoto k. , thielemann f .- k . , yokoi k. , 1984 , apj , 286 , 644 nugent p. , baron e. , branch d. , fisher a. , hauschildt p. h. , 1997 , apj , 485 , 812 patat f. , benetti s. , cappellaro e. , danziger i. j. , della valle m. , mazzali p. a. , turatto m. , 1996 , mnras , 278 , 111 rpke f. k. , niemeyer j. c. , 2007 , a&a , 464 , 683 rpke f. k. , woosley s. e. , hillebrandt w. , 2007 , apj , 660 , 1344 i. j. , patat f. , turatto m. , 2001 , mnras , 321 , 254 sim s. a. , 2007 , mnras , 375 , 154 thielemann f .- k . , nomoto k. , yokoi k. , 1986 , a&a , 158 , 17 woosley s. e. , kasen d. , blinnikov s. , sorokina e. , 2007 , apj , 662 , 487
the theory of radiative transfer provides the link between the physical conditions in an astrophysical object and the observable radiation which it emits . thus accurately modelling radiative transfer is often a necessary part of testing theoretical models by comparison with observations . we describe a new radiative transfer code which employs monte carlo methods for the numerical simulation of radiation transport in expanding media . we discuss the application of this code to the calculation of synthetic spectra and light curves for a type ia supernova explosion model and describe the sensitivity of the results to certain approximations made in the simulations .
quantitative finance theory involves two related probability measures : a risk - neutral measure and an objective measure .the risk - neutral measure determines the prices of assets and options in a financial market .the risk - neutral measure is distinct from the objective measure , which describes the actual stochastic dynamics of markets .the conventional belief is that one can not determine an objective measure by observing a risk - neutral measure .the best known example capturing this belief is the black - scholes model , which says that the drift of a stock under a risk - neutral measure is independent of the drift of the stock under an objective measure .recently , ross questioned this belief and argued that it is possible to recover an objective measure from a risk - neutral measure under some circumstances .his model assumes that there is an underlying process that drives the entire economy with a finite number of states on discrete time this result can be of great interest to finance researchers and investors , and thus it is highly valuable to extend the ross model to a continuous - time setting , which is practical and useful in finance . in this paper, we investigate the possibility of recovering in a continuous - time setting with a time - homogeneous markov diffusion process with state space in this setting , the risk - neutral measure contains some information about an objective measure . in general, however , the model unfortunately fails to recover an objective measure from a risk - neutral measure .a key idea of recovery theory is that the reciprocal of the pricing kernel is expressed in the form for some constant and positive function for example , in the _ consumption - based capital asset model _ in and , the pricing kernel is expressed in the above form .the basis of recovery theory is finding and thus , we obtain the pricing kernel and the relationship between the objective measure and the risk - neutral measure .we will see that and satisfy the second - order differential equation thus , recovery theory is transformed into a problem of finding a particular solution pair of this particular differential equation with if such a solution pair were unique , then we could successfully recover the objective measure .unfortunately , this approach categorically fails to achieve recovery because such a solution pair is never unique .many authors have extended the ross model to a continuous - time setting and have also confronted the non - uniqueness problem . 
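because the explicit formulas are missing from the extracted text above , the following latex block is a hedged sketch of the transition - independent pricing kernel and of the second - order equation in the form they commonly take in the continuous - time recovery literature ; the symbols ( the state variable x_t , the positive function \phi , the constant \beta , the risk - neutral drift b , the volatility \sigma and the short rate r ) are assumptions introduced here and may differ from the authors ' notation .

% reciprocal of the pricing kernel, assumed transition independent
\[
  \frac{1}{\pi_t} \;=\; e^{\beta t}\,\frac{\phi(X_t)}{\phi(X_0)},
  \qquad \beta \in \mathbb{R},\quad \phi > 0 ,
\]
% with risk-neutral dynamics dX_t = b(X_t)dt + sigma(X_t)dW_t and short rate r(x),
% the pair (phi, beta) must solve a second-order (Sturm-Liouville type) equation
\[
  \tfrac{1}{2}\,\sigma^2(x)\,\phi''(x) \;+\; b(x)\,\phi'(x) \;-\; r(x)\,\phi(x)
  \;=\; -\,\beta\,\phi(x).
\]

recovery then amounts to selecting the correct positive solution pair of this equation , which is exactly the non - uniqueness problem referred to above .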
to overcome the non - uniqueness problem, all authors assumed more conditions onto their models so that the differential equation has a unique solution pair satisfying the conditions .carr and yu introduced the notion of long s discovery of the numeraire portfolio to extend the ross model to a continuous - time setting .they assumed long s portfolio depends on time and the underlying process and then they derived the above differential equation .carr and yu also assumed that the process is a time - homogeneous markov diffusion on a _ bounded _ interval with regular boundaries at both endpoints .they also implicitly assumed that is in for some measure to apply the regular sturm - liouville theory , thereby obtaining a unique solution pair satisfying these conditions .dubynskiy and goldstein explored markov diffusion models with reflecting boundary conditions .walden extended the results of carr and yu to the case that is an _ unbounded _ process .walden proved that recovery is possible if the process is _ recurrent _ under the objective measure .in addition , he showed that when recovery is possible in the unbounded case , approximate recovery is possible from observing option prices on a bounded subinterval .qin and linetsky proved that recovery is possible if is recurrent and the pricing kernel admits a hansen - scheinkman decomposition .they also showed that the ross recovery has a close connection with roger s potential approach to the pricing kernel .borovicka , hansen and scheinkman showed that the recovery is possible if the process is _ stochastically stable _ under the objective measure .they also discussed applications of the recovery theory to finance and economics .the papers of borovicka , hansen and scheinkman , qin and linetsky and walden assumed a common condition on .specifically , is _ recurrent _ under the objective measure .the mathematical rationale for this condition is to overcome the non - uniqueness problem of the differential equation . indeed ,if existent , there is a unique solution pair of the equation satisfying this condition and we will review this condition in section [ sec : recurrent_recovery ] . in this article, we investigate the possibility of recovery when the process is _ transient _ under the objective measure .we explore in this case what information is sufficient to recover .one of the main contributions is that if is known and if is _ non - attracted _ to the left ( or right ) boundary under the objective measure , then recovery is possible . to achieve this, we establish a graphical understanding of recovery theory .this topic is discussed in section [ sec : recurrence_and_transience ] and [ sec : recovery_theory ] . in section [ sec : applications ] , two examples of recovery theory are explored : the cox - ingersoll - ross ( cir ) interest rate model and the black - scholes stock model .section [ sec : conclusion ] summarizes this article .a financial market is defined as a probability space having a one - dimensional brownian motion with the filtration generated by . all the processes in this articleare assumed to be adapted to the filtration . is the objective measure of this market .we assume that there are a state variable and a positive numeraire in the market .let be an equivalent measure on the market such that each risky asset discounted by the numeraire is a martingale under measure it is customary that this measure is referred to as a risk - neutral measure when is a money market account . 
in this article , however ,for any given positive numeraire we say is a risk - neutral measure with respect to set the radon - nikodym derivative by which is known to be a martingale process on for using the martingale representation theorem , we can write in the stochastic differential equation form for some .it is well - known that defined by is a brownian motion under we define _ the reciprocal of the pricing kernel _ by [ assume : x ] the state variable is a time - homogeneous markov diffusion process satisfying the range of is an open interval with and are continuously differentiable on and that for it is implicitly assumed that both endpoints are unattainable because the range of the process is an open interval .[ assume : interest_rate ] the dynamics of the numeraire is determined by more precisely , follows assume that and are continuously differentiable on and is a martingale .we assume that we can extract theses four functions and from market prices data , thus they are assumed to be known ex ante .the above martingale assumption is to define a new measure by using the girsanov theorem , for example , in the proof of theorem [ thm : criterion_martingality ] .it is noteworthy that if there is a money market account with interest rate , denoted by in the market , then is equal to because is a martingale under [ assume : transition_indep ] assume that ( the reciprocal of ) the pricing kernel is _ transition independent _ in the sense that there are a positive function and a real number such that in this case , we say is a _ principal pair _ of the market .the basis of recovery theory is finding the principal pair and then obtaining the objective measure by setting the radon - nikodym derivative one important aspect in implementing the recovery approach is to decide how to choose state variables many processes can serve as a state variable and the choice of a state variable depends on the purpose of use . one way is the short interest rate investors interested in the price of bonds want to find the dynamics of under an objective measure .plenty of examples with interest rate state variables can be found in .another way is a stock market index process such as the dow jones industrial average and standard poor s ( s ) 500 .refer to for an empirical analysis of recovery theory with the state variable s 500 .we investigate how recovery theory is transformed into a problem of a differential equation . applying the ito formula to the definition of we know from, we also have by comparing these two equations , we obtain for convenience , set using notation defined by we have the following theorem .let be the principal pair of the market .then satisfies in other words , if is a solution pair of with then is a candidate pair for the principal pair of we are interested in a solution pair of with positive function there are two possibilities .* there is no positive solution for any , or * there exists a number such that it has two linearly independent positive solutions for has no positive solution for and has one or two linearly independent solutions for refer to page 146 and 149 in . 
in this article , we implicitly assumed the second case by assumption [ assume : transition_indep ] .it is easily checked that is a local martingale under when this is a martingale , one can _ attempt _ to recover the objective measure by setting this as a radon - nikodym derivative .let be a solution pair of with positive function suppose that is a martingale .a measure obtained from the risk - neutral measure by the radon - nikodym derivative is called _ the transformed measure _ with respect to the pair clearly , the transformed measure with respect to the principal pair is the objective measure we have the following proposition by and . [ prop : dynamics_under_p ] a process defined by is a brownian motion under the transformed measure with respect to furthermore , follows occasionally , we use the notation instead of without ambiguity . even when is not a martingale , we can consider the diffusion process corresponding to .the diffusion process defined by is called the diffusion process _ induced by _we establish the mathematical preliminaries for recurrent and transient processes .the contents of this section are indebted to , and .consider the diffusion process induced by where a measure defined by is called _ the scale measure _ of the process with respect to the pair the left boundary is _ attracting _ if )<\infty ] and is finite .( otherwise , is the unique positive solution of , so in which case we have nothing to prove ) .we may assume )<\infty. ] the general ( normalized to ) solution is expressed by )\right)\;,\ ] ] which is a positive function for and only for using we have that furthermore , can be any value in . ] by the proof of proposition [ prop : slice ] , we know that is in this is a contradiction . we now show that for with the diffusion process induced by tuple is attracted to the left boundary .let and be the functions corresponding to tuple and respectively .recall in proposition [ prop : ode ] .write and it can be easily checked that and set by direct calculation , we have because and is an equilibrium point , we know that for all by differentiating we have thus , which yields we obtain that therefore , the diffusion process induced by is attracted to the equation recall that that is , is the maximum value among all the s of the solution pair with and in this section , we explore an example such that has two linearly independent positive solutions .let be such that first , has two linearly independent positive solutions : and where it is enough to show that that is , for any fixed the equation has no positive solutions .suppose there exists such a positive solution define a sequence of functions by for by direct calculation , satisfies the following equation : by the harnack inequality stated below , we have that is equicontinuous on each compact set on thus we can obtain a subsequence such that the subsequence converges on say the limit function since is positive , the limit function is nonnegative and is a nonzero function because on the other hand , it can be easily shown that the limit function satisfies by taking limit in equation [ eqn : g_n ] . 
clearly there does not exist a nonzero nonnegative solution of this equation when this is a contradiction .the author appreciates srinivasa varadhan for this example .( harnack inequality ) let be a positive solution of assume that is bounded away from zero ; that is , there is a positive number such that suppose that and are bounded by a constant then for any there exists a positive number ( depending on and but on neither nor ) such that whenever this section , we focus on the function rather than the value we assume that we roughly know the behavior of for example , we know a function such that is bounded below and above or such that $ ] converges to a nonzero constant . knowing means that we have information about near the area where the process lies with high probability under the objective measure .such a function is called a _reference function _ of more generally and more formally , we define a reference function in the following way .suppose we know a reference function of from = \mathbb{e}^{\mathbb{p}}_{\xi}\left[(\phi^{-1}f)(x_{t})\right]\phi(\xi)\,e^{-\beta t } \ ; , \end{aligned}\ ] ] and by the definition of the reference function , we have that = -\beta\;. \end{aligned}\ ] ] hence , we know the value conversely , suppose we know the value we show that for any admissible pair of is a reference function .we have =e^{-\beta t}f(\xi)\ ] ] and thus =\mathbb{e}^{\mathbb{q}}_{\xi } [ g_t^{-1}f(x_{t})]\,e^{\beta t } \,\phi^{-1}(\xi)=(\phi^{-1}f)(\xi).\ ] ] this completes the proof .
recently , ross argued that it is possible to recover an objective measure from a risk - neutral measure . his model assumes that there is a finite - state markov process that drives the economy in discrete time . many authors extended his model to a continuous - time setting with a markov diffusion process with state space . unfortunately , the continuous - time model fails to recover an objective measure from a risk - neutral measure in general . we determine what information makes recovery possible in the continuous - time model . it was proven that if is _ recurrent _ under the objective measure , then recovery is possible . in this article , when is _ transient _ under the objective measure , we investigate what information is sufficient to recover . keywords : ross recovery , markovian pricing operators , recurrence , transience
parameter sensitivity analysis is one of the most important tools available for modelling biochemical networks .such analysis is particularly crucial in systems biology , where models may have hundreds of parameters whose values are uncertain .sensitivity analysis allows one to rank parameters in order of their influence on network behaviour , and hence to target experimental measurements towards biologically relevant parameters and to identify possible drug targets . for deterministic models, the adjunct ode method provides an efficient way to compute the local sensitivity of a model to small changes in parameters . for stochastic models , however , parameter sensitivity analysis can be computationally intensive , requiring repeated simulations for perturbed values of the parameters . here ,we demonstrate a method , based on trajectory reweighting , for computing local parameter sensitivity coefficients in stochastic kinetic monte - carlo simulations without the need for repeated simulations . sensitivity analysis of biochemical network models may take a number of forms .one may wish to determine how a model s behaviour changes as a parameter is varied systematically within some range ( a parameter sweep ) , its dependence on the initial conditions of a simulation , or its sensitivity to changes in the structure of the model itself ( alternate mode - of - action hypotheses ) . in this paper , we focus on the computation of local parameter sensitivity coefficients .these coefficients describe how a particular output of the model varies when the -th parameter of the model , , is varied by an infinitesimal amount , : where is the output of the model computed in a system with changed to . for deterministic models , where the dynamics of the variables can be described by a set of deterministic ordinary differential equations ( odes ) , differentiation of the odes with respect to shows that the sensitivity coefficients obey an _set of odes , these adjunct odes can be integrated alongside the original odes to compute the sensitivity coefficients `` on the fly '' in a deterministic simulation of a biochemical network .stochastic models of biochemical networks are ( generally ) continuous - time markov processes which are solved numerically by kinetic monte - carlo simulation , using standard methods such as the gillespie or gibson - bruck algorithms .replicate simulations will produce different trajectories ; we wish to compute how the _ average _ value of some function of the model changes with the parameter : where the averages are taken across replicate simulation runs . if one is interested in steady - state ( _ i.e. _ time - independent ) parameter sensitivities , the averages in eq .may instead be time averages taken over a single simulation run .nave evaluation of parameter sensitivities via eq .is very inefficient , since one is likely to be looking for a small difference between two fluctuating quantities .there are several existing approaches that get around this problem : spectral methods , a method based on the girsanov measure transform , and methods which re - use the random number streams . 
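for concreteness , the naive estimator implied by eq . can be sketched as follows ( python ; the simulate function is a placeholder for any stochastic simulator of the network , and is not part of the method developed below ) : two independent ensembles are generated at perturbed parameter values and the small difference of two noisy averages is divided by the perturbation .

import numpy as np

def finite_difference_sensitivity(simulate, theta, j, delta, n_runs, rng):
    """Naive estimator of d<f>/d theta_j by central differences.

    `simulate(theta, rng)` is assumed to return the observable f from one
    stochastic run with parameter vector `theta`.  Two full ensembles are
    needed, and the statistical noise blows up as `delta` is made small.
    """
    theta_plus, theta_minus = theta.copy(), theta.copy()
    theta_plus[j] += delta
    theta_minus[j] -= delta
    f_plus = np.mean([simulate(theta_plus, rng) for _ in range(n_runs)])
    f_minus = np.mean([simulate(theta_minus, rng) for _ in range(n_runs)])
    return (f_plus - f_minus) / (2.0 * delta)

re - using the same random number stream for the two ensembles ( one of the existing approaches mentioned above ) reduces , but does not remove , the variance of this estimator .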
in this paper , we develop a method based on trajectory reweighting , which is simple to implement in existing kinetic monte carlo codes and provides a way to compute steady - state parameter sensitivity coefficients `` on - the - fly '' in stochastic simulations of biochemical networks .the method provides an accessible alternative to the girsanov measure transform pioneered by plyasunov and arkin .indeed several of our equations in section [ sec : tw ] are equivalent to those ref . .however , we go beyond previous work by showing in practical terms how the method can be implemented in standard stochastic simulation algorithms , extending the method to the computation of parameter sensitivities in the steady state , and showing how time - step preaveraging can be used to improve the efficiency of the calculations .the basic idea behind trajectory reweighting is as follows . in a kinetic monte - carlo simulation , for a given set of parametersany given trajectory has a statistical weight which measures the probability that it will be generated by the algorithm ; this weight can be expressed as an analytical function of the states of the system along the trajectory and of the parameter set .this analytical function also allows us to compute the statistical weight for this _ same _ trajectory , in a system with a _ different _ set of parameters : _i.e. _ its weight in the ensemble of trajectories with perturbed parameters .this allows us in principle to compute the average in eq . for the perturbed parameter set ,using only a set of trajectories generated with the unperturbed parameter set . for most applications this is inefficient , because the weight of a trajectory in the perturbed ensemble is typically very low , resulting in poor sampling .however , it turns out that trajectory reweighting does provide an effective way to compute local parameter sensitivity coefficients .more specifically , let us consider a typical implementation of the gillespie algorithm ( similar arguments apply to more recent algorithms , such as gibson - bruck ) . here, the state of the system is characterised by a set of discrete quantities , typically representing the number of molecules of chemical species .transitions between states are governed by propensity functions where labels the possible reaction channels and the quantities are parameters in the problem , typically reaction rates ( represents the -th such parameter ) .a kinetic monte - carlo trajectory is generated by stepping through the space of states in the following way .we first compute the propensity functions for all the possible transitions out of the current state .we then choose a time step ( _ i.e. _ waiting time ) from an exponential distribution , where the state - dependent mean timestep ( the mean waiting time before exiting the current state ) is .we choose a reaction channel with probability .we advance time by and update the values of according to the chosen reaction channel .we are now in a new state , and the above steps are repeated .now let us consider the statistical weight of a given trajectory generated by this algorithm . 
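before turning to the statistical weight of a trajectory , the stepping loop just described can be written down compactly . the sketch below ( python , with hypothetical function and variable names ) implements the direct method for an arbitrary set of propensity functions and stoichiometric state changes ; the trajectory weight discussed next is a product of per - step factors generated by exactly this loop .

import numpy as np

def gillespie(n0, propensities, stoichiometry, t_end, rng):
    """Direct-method kinetic Monte Carlo (Gillespie) simulation.

    n0            : initial copy numbers
    propensities  : function n -> array of propensities a_mu(n)
    stoichiometry : stoichiometry[mu] is the change in n for channel mu
    """
    t, n = 0.0, np.array(n0, dtype=float)
    times, states = [t], [n.copy()]
    while t < t_end:
        a = propensities(n)
        a_tot = a.sum()
        if a_tot == 0.0:                      # no reaction can fire
            break
        dt = rng.exponential(1.0 / a_tot)     # waiting time with mean 1/a_tot
        mu = rng.choice(len(a), p=a / a_tot)  # channel chosen with prob a_mu/a_tot
        t += dt
        n += stoichiometry[mu]
        times.append(t)
        states.append(n.copy())
    return np.array(times), np.array(states)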
in each step, the probability of choosing the value of that we actually chose is proportional to , and the probability of choosing the reaction channel that we actually chose is equal to .we can therefore associate a weight with the whole trajectory , which is proportional to the probability of generating the sequence of steps which we actually generated : & = \textstyle{\prod_{\mathrm{steps}}}\ , a_\mu \,e^{- ( \sum a_\mu ) \delta t}\ , .\end{array } \label{eq : pdef}\ ] ] the second line follows by eliminating ( note that because eq .is not normalized , is a _weight _ rather than a true probability ) . in a typical kinetic monte - carlo simulation ,we generate multiple independent trajectories of length , for a given parameter set .the probability of generating any given trajectory in this sample will be proportional to its weight , defined in eq . .we then compute the average of some function of the state of the system by summing over the values of , at time , for these trajectories .having generated this set of trajectories , let us now suppose we wish to re - use them to compute the average which we would have obtained had we repeated our simulations for some _ other _ parameter set .it turns out that we can compute this average by summing over the same set of trajectories , multiplied by the ratio of their statistical weights for the perturbed and unperturbed parameter sets . to see this, we first recall that an average , _ e.g. _ , can be written as a sum over all _ possible _ trajectories of length , multiplied by their statistical weights : . writing the perturbed average in this way , we obtain where and are the trajectory weights ( calculated using eq . ) for the original and perturbed models respectively . in another context ,eq.([eq : pppp ] ) has been used to reweight trajectory statistics in order to sample rare events in biochemical networks ; it also forms the basis of umbrella sampling methods for particle - based monte carlo simulations .whilst in principle eq.([eq : pppp ] ) provides a completely general way to transform between trajectory ensembles with different parameter sets , in practice it is useless for any significant deviation of the parameter set from the original values , for two reasons .first , the statistical errors in the computation of grow catastrophically with the size of the perturbation , because the original trajectories become increasingly unrepresentative of the perturbed model .second , the computational cost of determining the trajectory weights for the perturbed and unperturbed parameter sets via eq .is only marginally less than the cost of computing directly by generating a new set of trajectories for the perturbed parameter set .it turns out , however , that eq .is useful for the computation of parameter sensitivity coefficients , where the deviation between the original and perturbed parameter sets is infinitesimal .let us suppose that the perturbed problem corresponds to a small change in a single parameter , such as ; the corresponding sensitivity coefficient is defined by eq . .as we show in supplementary material section [ sec : rat ] , differentiating eq . with respect to leads to the following expression for the sensitivity coefficient : where supplementary material section [ sec : rat ] also shows how to generalize this approach to higher - order derivatives . combining eq . with eq .shows that the `` weight function '' can be expressed as a sum over all steps in the trajectory : where eqs . 
are the key results of this paper , since they point to a practical way to compute parameter sensitivity coefficients in kinetic monte - carlo simulations . to evaluate the ( time - dependent ) parameter sensitivity , one tracks a weight function , which evolves according to eqs . andone also tracks the function of interest .the covariance between and , at the time of interest , computed over multiple simulations , then gives the sensitivity of to the parameter in question ( as in eq . ) .tracking should be a straightforward addition to standard kinetic monte - carlo schemes .moreover we note that could be any function of the variables of the system for example , if one were interested in the parameter sensitivity of the noise in particle number , one could choose .more complex functions of the particle numbers , involving multiple chemical species , could also be used ( see examples below ) .this prescription for computing parameter sensitivities presents , however , some difficulties in terms of statistical sampling .the two terms in eq .are statistically independent quantities with the same expectation value , .hence they cancel on average but the variances add .thus we expect that is a stochastic process with a zero mean , , and a variance that should grow approximately linearly with time as shown for a simple example case in supplementary material section [ sec : linear]in effect behaves as a random walk ( _ i.e. _ a wiener process ) . in terms of controlling the sampling error, this means that the number of trajectories over which the covariance is evaluated should increase in proportion to the trajectory length , since the standard error in the mean is expected to go as the square root of the variance divided by the number of trajectories . in section [ sec : ss ] , we discuss a way to avoid this problem , when computing steady - state parameter sensitivities . without loss of generalitywe can presume that the parameter will appear in only one of the propensity functions , which we call . with this presumption , eq .becomes eq . makes a direct link with the girsanov measure transform method introduced by plyasunov and arkin , being essentially the same as eq .( 31b ) in ref . .a further simplification occurs if is the rate coefficient of the -th reaction , so that is linearly proportional to .one then has and eq .becomes \label{eq : key2c}\ ] ] where counts the number of times that the -th reaction is visited .this is essentially the same as eq .( 9b ) of plyasunov and arkin s work , ref . .. suggests a very simple way to implement parameter sensitivity computations in existing kinetic monte - carlo codes .one simply modifies the chemical reaction scheme such that each reaction whose rate constant is of interest generates a `` ghost '' particle in addition to its other reaction products ( this is similar to the clock trick in ref . ) .there should be a different flavour of ghost particle for each reaction of interest , and ghost particles should not participate in any other reactions . is then simply given by the number of ghost particles associated with the -th reaction which are present at time . in section [ sec : egs ] , we use this approach to compute sensitivity coefficients using the _ unmodified _ copasi simulation package . 
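a minimal sketch of the corresponding bookkeeping is given below ( python ) . it assumes , as in the counting form above , that the propensity of the reaction of interest is linear in its rate constant , so that each step increments the weight by the ghost - particle indicator minus the propensity times the time step , all divided by the rate constant ; the time - dependent sensitivity is then the covariance of this weight with the observable over replicate runs .

import numpy as np

def sensitivity_by_reweighting(n0, propensities, stoichiometry, alpha, k_alpha,
                               f, t_end, n_runs, rng):
    """Estimate d<f>/d k_alpha at time t_end via trajectory reweighting.

    For a propensity a_alpha = k_alpha * h_alpha(n), the per-step weight
    increment is (1[mu == alpha] - a_alpha * dt) / k_alpha, i.e. the
    'ghost particle' indicator minus the time integral of a_alpha,
    divided by k_alpha.
    """
    f_vals, w_vals = [], []
    for _ in range(n_runs):
        t, n, w = 0.0, np.array(n0, dtype=float), 0.0
        while t < t_end:
            a = propensities(n)
            a_tot = a.sum()
            if a_tot == 0.0:
                break
            dt = rng.exponential(1.0 / a_tot)
            if t + dt > t_end:                 # censor the last interval at t_end
                w += -a[alpha] * (t_end - t) / k_alpha
                break
            mu = rng.choice(len(a), p=a / a_tot)
            w += ((1.0 if mu == alpha else 0.0) - a[alpha] * dt) / k_alpha
            t += dt
            n += stoichiometry[mu]
        f_vals.append(f(n))
        w_vals.append(w)
    f_vals, w_vals = np.array(f_vals), np.array(w_vals)
    # covariance <f W> - <f><W> estimates the sensitivity coefficient
    return np.mean(f_vals * w_vals) - np.mean(f_vals) * np.mean(w_vals)

the observable f can be any function of the copy numbers , for example the protein number , or its square if the sensitivity of the noise is of interest .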
in supplementary material section [ sec : linear ]we also exploit this trick to obtain some exact results for linear propensity functions .so far , we have discussed the computation of time - dependent parameter sensitivity coefficients , by evaluating the covariance of the weight function with the function over multiple simulation runs .often , however , one is interested in the parameter sensitivity of the _ steady - state _ properties of the system ; this is a time - independent quantity .we now discuss the computation of steady - state parameter sensitivities using trajectory reweighting .we show that in this case , first , the problem of poor sampling of for long times can be circumvented , second , one can obtain sensitivity coefficients from a single simulation run , and third , one can improve efficiency by a procedure which we call time - step pre - averaging . to compute steady - state parameter sensitivities, one might imagine that we could simply apply the method discussed in section [ sec : tw ] , taking the limit of long times , when the system should have relaxed to its steady state . however , this does not work , because the variance between trajectories of the weight function increases in time , making it impossible to obtain good statistics at long times . to circumvent this problem, we note that the right hand side of eq . is unaltered if is offset by a constant .thus we may write the parameter sensitivity in the form of a two - point time - correlation function : { } \hspace{8em}- { \langle f(n_i , t)\rangle}\,{\langle \delta w_{k_\alpha}(t , t_0)\rangle } \end{array } \label{eq : cdef}\ ] ] where and is some arbitrary reference time such that .this relation has the advantage that we may choose sufficiently small to make the variance of manageable .importantly , in steady - state conditions , we expect that the correlation function depends only on the time difference and not separately on the two times , so that , with and as .thus to calculate the sensitivity coefficient under steady state conditions , all we need to do is compute the steady - state correlation function defined in eq ., choosing a suitable `` reference '' time when the system is already in the steady - state , then take the asymptotic ( large ) value of this correlation function .we expect the correlation function to approach its asymptotic value on a timescale governed by the ( likely short ) relaxation time spectrum in the steady state , so that for most problems large values of should not be required .noting that in this method , as in section [ sec : tw ] , the averages in eq . are computed over multiple independent simulation runs , we term this approach the _ ensemble - averaged correlation function method_. from a practical point of view, this method involves the following set of steps or ` recipe ' : 1 .choose two time points and such that the system has already reached its steady state at time and where is greater than the typical relaxation time of the quantity of interest ( typically this is the same as the longest relaxation time in the system as a whole ) .2 . compute at times and and at time .3 . 
calculate the difference .compute also the product .repeat steps 1 - 3 for many independent simulation runs and compute the averages , and over the replicate simulations .calculate the correlation function .as long as is large enough this provides a measurement of .it turns out , however , that for steady - state parameter sensitivities , we do not need to average over multiple simulation runs we can instead compute time - averages over a single simulation run .this amounts to replacing the steady - state ensemble averaged sensitivity by the _ time averaged _ version , where ( recalling that in kinetic monte carlo , the timestep varies between steps ) . in principle , could be obtained by computing the time - averaged version of eq . : and taking the limit of large .. requires one to keep track of a precise time in the past ; since the time step is not constant in kinetic monte carlo , this is rather inconvenient to implement .fortunately , however , tracking the weight function at a precise time in the past turns out to be unnecessary .as becomes large , the stochastic differences between individual time steps cancel out and it becomes equivalent simply to compute the average where and to use the fact that as .one can quite easily keep track of , for instance by maintaining a circular history array storing over the last steps .this approach , which we denote the _ time - averaged correlation function method _, has the important advantage that one can obtain the steady state parameter sensitivity from a single simulation run .the recipe for using the time - averaged correlation function method is then : 1 .choose a time interval which is greater than the typical relaxation time of the quantity of interest .estimate the typical number of steps taken in time : .2 . for a simulation of the system in the steady state , record and every steps ( we denote each of these recordings a ` timeslice ' ) .3 . for each timeslice , compute the difference between and its value in the previous timeslice : .compute also .4 . compute the averages over all timeslices of , and .5 . calculate the correlation function .as long as is large enough this provides a measurement of .the time - averaged correlation function method is a convenient way to compute parameter sensitivities in a standard kinetic monte carlo scheme , in which both a new timestep and a new reaction channel are chosen stochastically at every step .however , choosing a new time step at every iteration is computationally expensive since it requires a random number , and is not strictly necessary for the computation of steady - state parameter sensitivity coefficients .improved efficiency can be achieved by choosing only the new reaction channel stochastically at each iteration , and replacing by the mean timestep corresponding to the current state ( note that this is state dependent since it depends on the propensity functions ) .this amounts to _ pre - averaging _ over the distribution of possible time steps for a given state of the system .it can be proved formally that if we run our simulations for a sufficiently long time , eq . is equivalent to intuitively , this relation arises because a sufficiently long trajectory , under steady state conditions , will visit each state an arbitrarily large number of times and thus thoroughly sample the distribution of waiting times in each state .one can not , however , compute the parameter sensitivity simply by evaluating the time averages in eq . or using the new definition , eq . 
.this is because itself depends on the parameter . instead ,a slightly more complicated expression for is required ; this is given in supplementary material section [ sec : preav ] .thus , time - step pre - averaging provides a more efficient way to compute the steady - state parameter sensitivity ( since the time does not need to be updated in the monte carlo algorithm ) , at the cost of a slight increase in mathematical complexity .we now apply the methods described above to three case studies : a model for constitutive gene expression for which we can compare our results to analytical theory , a simple model for a signaling pathway with stochastic focussing , and a model for a bistable genetic switch .the second and third examples are chosen because they exhibit the kind of non - trivial behaviour found in real biochemical networks , yet the state space is sufficiently compact that the parameter sensitivities can be checked using finite - state projection ( fsp ) methods . our implementation of the fsp methods is described more fully in supplementary material section [ sec : fsp ] . for the sensitivity of the average protein number ( top ) and the average mrna number to the model parameters , for the constitutive gene expression model in section [ sec : santmac ] .points with error bars are simulations using the ensemble - average correlation function method ; error bars are estimated by block - averaging ( 100 blocks of trajectories ; total time steps ) .open circles are simulations using the time - average correlation function method with time step pre - averaging ( plotted as a function of ) ; results are averages over 10 trajectories each of length steps ( in this case the error bars are smaller than symbols ) .solid lines are theoretical predictions from eq . .parameters are , , , , corresponding to the _ cro _ gene in a recent model of phage lambda . for these parameters and .[fig : corrn ] ] we first consider a simple stochastic model for the expression of a constitutive ( unregulated ) gene , represented by the following chemical reactions : here , m represents messenger rna ( synthesis rate , degradation rate ) and n represents protein ( synthesis rate , degradation rate ) .this model has linear propensities ( as defined in supplementary material section [ sec : linear ] ) , which implies that the mean copy numbers and of mrna and protein respectively obey the chemical rate equations from which follow the steady state mean copy numbers : for this problem , steady - state sensitivity coefficients can be computed analytically by taking derivatives of eqs . with respect to the parameters of interest .moreover , as shown in supplementary material section [ sec : linear ] , explicit expressions can also be found for the components of the correlation functions defined by eqs . and : { \langle m \delta w_{\ln\rho}\rangle}_{{\mathrm{ss}}}={\langle m \delta w_{\ln\mu}\rangle}_{{\mathrm{ss}}}=0,\\[6pt ] { \langlen \delta w_{\ln k}\rangle}_{{\mathrm{ss}}}=-{\langle n \delta w_{\ln\lambda}\rangle}_{{\mathrm{ss}}}\\ { } \hspace{1em}={\langle n\rangle}_{{\mathrm{ss}}}[{\lambda(1-e^{-\mu\delta t})-\mu(1-e^{-\lambda\delta t})}]/ ( { \lambda-\mu}),\\[6pt ] { \langle n \delta w_{\ln\rho}\rangle}_{{\mathrm{ss}}}=-{\langle n \delta w_{\ln\mu}\rangle}_{{\mathrm{ss}}}={\langle n\rangle}_{{\mathrm{ss}}}(1-e^{-\mu\delta t } ) , \end{array } \label{eq : santmaccf}\ ] ] where for notational convenience we consider the sensitivity with respect to the logarithm of the parameter value ( _ e.g. _ ) . 
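as a check that is easy to reproduce : assuming the standard rate equations dm/dt = k - \lambda m and dn/dt = \rho m - \mu n implied by the reaction scheme above , the steady - state means are m_ss = k/\lambda and n_ss = k\rho/(\lambda\mu ) , and the logarithmic sensitivities follow by differentiation . the short python fragment below records these reference values ; the numerical parameters are purely illustrative and are not the cro - gene values quoted in the figure caption .

# analytic steady-state means and logarithmic sensitivities for the
# constitutive expression model, assuming dm/dt = k - lam*m, dn/dt = rho*m - mu*n
k, lam, rho, mu = 0.1, 0.05, 0.2, 0.01   # illustrative values only

m_ss = k / lam                   # mean mRNA copy number
n_ss = k * rho / (lam * mu)      # mean protein copy number

# d<m>/d(ln theta) and d<n>/d(ln theta) for theta in {k, lambda, rho, mu};
# these are the long-time limits that the simulated correlation
# functions should converge to
sens_m = {"k": m_ss, "lambda": -m_ss, "rho": 0.0, "mu": 0.0}
sens_n = {"k": n_ss, "lambda": -n_ss, "rho": n_ss, "mu": -n_ss}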
figure[ fig : corrn ] shows the time correlation functions of eqs . and , computed over multiple stochastic simulation runs using the ensemble - averaged correlation function method , together with the analytical results of eq .( solid lines ) .the agreement between the analytic theory and simulation results is excellent .the time correlation functions converge to the expected steady state sensitivity coefficients ( horizontal lines in fig .[ fig : corrn ] ) . for the protein correlation functions ( ), this occurs on a timescale governed by the relaxation rate of protein number fluctuations , while the mrna correlation functions ( ) reach their asymptotic values on a timescale governed by the mrna decay rate .figure [ fig : corrn ] ( open circles ) also shows the same correlation functions , computed instead from a single stochastic simulation run , using the time - averaged correlation function method , with time - step pre - averaging .although this method gives correlation functions ( eqs . and ) in terms of the number of steps in the history array , rather than the time difference , these can be converted to time correlation functions by multiplying by the expected _ global _ mean time step ( the average over states of the state - dependent mean time step ) . comparing the results of the ensemble - averaged and time - averaged correlation function methods in fig .[ fig : corrn ] we see that the two methods give essentially the same results , but the time - averaged method produces greater accuracy ( smaller error bars ) , for the same total number of simulation steps .moreover , because we have used time - step pre - averaging with the time - averaged correlation function method , each simulation step is computed approximately twice as fast as in the original kinetic monte carlo algorithm , since one does not need to generate random numbers for the time steps .we now turn to a more sophisticated case study , based on the stochastic focusing model of paulsson _ et al ._ . in this biochemical network ,a input signal molecule s downregulates the production of an output signal ( or response ) molecule r. stochastic fluctuations play a crucial role , making the output much more sensitive to changes in the input than would be predicted by a deterministic ( mean - field ) model . our reaction scheme , given in eq ., contains just two chemical species , s and r. the production and degradation of s ( the input signal ) are straightforward poisson processes with rates and respectively . the production of r ( the output signal )is negatively regulated by s , and its degradation rate is set to unity to fix the time scale .thus we have where we use a michaelis - menten - like form to represent the negative regulation : , with being the copy number of the input signal molecule . taking a mean - field approach, we might suppose that the average copy numbers and should obey the chemical rate equations and that therefore the steady state copy numbers should be given by in reality , while eq . gives the correct result for the mean input signal , it is manifestly _ incorrect _ for the mean output signal .for example for we find from kinetic monte carlo simulations , as predicted by eq . , but , whereas eq .predicts .this failure of the mean - field prediction arises because of the non - linearity of the michaelis - menten - like form of the production propensity for r. , where is varied to control , with other parameters as in eq . 
.open circles are gillespie simulations using the time - average correlation function method with time step pre - averaging ; error bars are estimated by averaging over 10 trajectories of length steps .the history array length was .the filled circle ( blue ) is from a gibson - bruck simulation in copasi using the ensemble - average correlation function method ; error bars are from block averaging ( 10 blocks of samples ) .the thick solid line ( red ) is the numerical result from the finite state projection ( fsp ) algorithm .the dashed line is the mean - field theory ( mft ) prediction.[fig : stochf ] ] our aim is to compute the _ differential gain _ , which describes the local steepness of the signal - response relation ( ) .the gain measures the sensitivity of the system s output to its input ; this can be computed by measuring the sensitivities of and to the production and degradation rates of the signal molecule .let us suppose that the signal is varied by changing its production rate infinitesimally at fixed degradation rate ( we could have chosen instead to vary or , in principle , both and ) .the gain is then where the second equality follows from the fact that , since .we use the methods described in section [ sec : ss ] to compute the steady - state sensitivity , and hence the gain .figure [ fig : stochf ] shows the absolute differential gain computed using the time - averaged correlation function method , with time - step pre - averaging , as a function of the signal strength , as is varied ( note that in this region the actual gain is negative so ) .the results are in excellent agreement with the finite state projection method ( fsp , see supplementary material section [ sec : fsp ] ) .[ fig : stochf ] ( dashed line ) also shows the mean - field theory prediction derived from the second of eqs ., namely .stochastic focusing , as predicted by paulsson _et al _ , is clearly evident : the gain is much greater in magnitude for the stochastic model than the mean - field theory predicts , implying that fluctuations greatly increase the sensitivity of the output signal to the input signal .in this example , the parameter of interest ( ) is the rate constant of a single reaction ( production of s ) . as discussed in section [ sec : tw ] , this implies that the parameter sensitivity can be computed simply by counting the number of times this reaction is visited , which can be achieved by modifying the reaction scheme to then computing the weight function \ , .\label{eq : qtrick}\ ] ] ( which is the analogue of eq . ) , and using this to obtain the relevant time - correlation functions .this requires no changes to the simulation algorithm , making it easy to use with existing software packages . as a demonstration, we computed the differential gain for the parameters in eq ., using the open source simulation package copasi . to achieve this , we used the gibson - bruck algorithm ( as implemented in copasi ) to generate samples of and at equi - spaced time points with a spacing time units ( chosen to be longer than the expected relaxation time of the output signal , set by the decay constant for r ) . by taking the difference between successive time pointswe compute and hence the correlation function defined in eq . . the result , shown in blue in fig .[ fig : stochf ] , is in good agreement with our other calculations . 
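the post - processing used with the unmodified simulator output can be reproduced in a few lines . the python sketch below assumes equally spaced steady - state samples of the response copy number and of the cumulative ghost - particle count of the s - production reaction ; because that reaction has the constant propensity k_s , the weight increment over one sampling interval reduces to the change in the ghost count minus k_s times the interval , and the covariance with the response estimates the steady - state sensitivity and hence the absolute differential gain .

import numpy as np

def differential_gain(r_samples, q_samples, k_s, dt_sample):
    """Estimate |d ln<R> / d ln<S>| from equally spaced steady-state samples.

    r_samples : copy number of the response R at times 0, dt, 2*dt, ...
    q_samples : cumulative ghost-particle count of the S-production reaction
    k_s       : production rate of the signal S (the perturbed parameter)
    dt_sample : sample spacing, longer than the relaxation time of R
    """
    r = np.asarray(r_samples, dtype=float)[1:]
    # weight increment over one sampling interval: dW = dQ - k_s * dt
    dw = np.diff(np.asarray(q_samples, dtype=float)) - k_s * dt_sample
    # covariance <R dW> - <R><dW> estimates d<R>/d ln(k_s) = d<R>/d ln<S>
    sens = np.mean(r * dw) - np.mean(r) * np.mean(dw)
    return abs(sens / np.mean(r))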
and in the gardner _ et al ._ genetic switch model ( section [ sec : gardsw ] ) .parameters are , , , and .[fig : gardsw_cps ] ] in the gardner __ switch , and the sensitivity to .points ( with small error bars in the case of the sensitivity ) are from gillespie simulations using the time - average correlation function method with time step pre - averaging ; error bars are estimated by averaging over 10 trajectories of steps .the history array length was .the solid lines ( red ) are the results of fsp applied to this problem .model parameters are as in fig .[ fig : gardsw_cps].[fig : gardsw_op ] ] as a final example , we consider a model for a bistable genetic switch , of the type constructed experimentally by gardner _ et al . _ , in which two proteins u and v mutually repress each other s production . we suppose that transcription factor binding to the operator is cooperative , and can be described by a hill function ; the rate of production of protein u is then given by while the rate of production of v is given by ( here and describe the maximal production rates while and are the hill exponents , describing the degree of cooperativity ) .the units of time are fixed by setting the degradation rates of u and v to unity . for a suitable choice of parameter values , stochastic simulations of this model show switching between a u - rich state and a v - rich state , as illustrated in fig .[ fig : gardsw_cps ] ; the steady - state probability distribution for the quantity is bimodal , as shown in fig .[ fig : gardsw_op ] .we use this example to illustrate the computation of parameter sensitivities for more complicated scenarios where the system property of interest is not simply a mean copy number and the parameter of interest is not a simple rate constant .in particular we compute the sensitivity of the steady - state probability distribution to the hill exponent .our model consists of the following reaction scheme : in which proteins u and v are created and destroyed with propensities given by : let us first suppose we wish to compute using the time - averaged correlation function method , without time step pre - averaging .we use the propensity functions in eqs . to run a standard kinetic monte carlo ( gillespie ) simulation , choosing at each step a next reaction and a time step . at each simulation step, we also compute the quantity and update the weight function according to eq . , _i.e. _ if reaction 1 is chosen as the next reaction , we increment by , otherwise , we increment by ( note that it is correctly that features in this , irrespective of the chosen next reaction ) .we keep track not only of the current value of , but also of its value a fixed number steps ago . at the same time , we keep track of the function of interest ( denoted in sections [ sec : tw ] and [ sec : ss ] ) . because we are computing the parameter sensitivity of the _ distribution _ , we have a function , and a time - correlation function , for _ each _ value of . at each simulation step, we check the current value of .the function is unity if and zero otherwise ( _ i.e. _ ) . for each value of , we then compute the time correlation function as prescribed in eq . .as long as ( where is the global average time step ) is longer than the typical relaxation time of the system , should give a good estimate for .if we are instead using time step pre - averaging , we employ a slight modification of the above procedure . at each simulation step , we choose a next reaction , but we do not choose a time step . 
in our update rules for , we replace by the state - dependent mean timestep where . as well as keeping track of and we also need to compute at each step this quantity is then used to compute and hence using the modified algorithm given in supplementary material section [ sec : preav ] .an important technical point here concerns the relaxation time of the system , or the number of steps over which we need to remember the system s history in order that the correlation function gives a good estimate of the steady - state parameter sensitivity . for the previous examples studied , this timescale was given by the slowest decay rate ( typically that of the protein molecules ) .the genetic switch , however , shows dynamical switching behaviour on a timescale that is much longer than the protein decay rate ( see for example fig .[ fig : gardsw_cps ] ) .we therefore need to choose a value of such that is longer than the typical switching time .kinetic monte - carlo simulations ( like those in fig .[ fig : gardsw_cps ] ) show that for our model , the typical time between switching events is approximately 160 time units , while the global average time step .the typical number of steps per switching event is therefore .our chosen value of should be at least this large . in practicewe find that the correlation functions are fully converged ( to within a reasonable accuracy ) by steps ( switching events ) , but not quite converged by steps ( switching events ) .these lengthy convergence times mean that much longer simulations are needed to obtain good statistical estimates for the parameter sensitivity in this model than in the previous examples .figure [ fig : gardsw_op ] shows the steady state probability distribution together with its sensitivity , computed using the time - averaged correlation function method with time step pre - averaging , for the same parameters as in fig . [ fig : gardsw_cps ] .this method gives results in excellent agreement with fsp . has the bimodal shape typical of a stochastic genetic switch , with a large peak at and a much broader peak around , with a minimum around .the sensitivity coefficient measures how the behaviour of the switch depends on the cooperativity of binding of the transcription factor v. we see that increasing leads to an increased peak at , and a decreased peak at , in other words the switch spends more time in the v - rich state .also the minimum around decreases , suggesting that the switching frequency decreases as increases .this is confirmed by further study using the ensemble - averaged correlation function method of the sensitivity coefficient of the switching frequency to changes in ; the details of this will be presented elsewhere .in this paper , we have shown how trajectory reweighting can be used to compute parameter sensitivity coefficients in stochastic simulations without the need for repeated simulations with perturbed values of the parameters .the methods presented here are simple to implement in standard kinetic monte carlo ( gillespie ) simulation algorithms and in some cases can be used without any changes to the simulation code , making them compatible with packages such as copasi . for computation of time - dependent sensitivity coefficients ,the method involves tracking a weight function ( which depends on the derivative of the propensities with respect to the parameter of interest ) and computing its covariance with the system property of interest , at the time of interest , across multiple simulations . 
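a minimal sketch of the plain time - averaged correlation - function estimator described above is given below ( our illustration , written in python ) . for clarity it samples the time steps explicitly rather than using time step pre - averaging , which would additionally require the correction terms derived in the supplementary material . the hill - type propensities , the observable n = n_u - n_v , the history length , the bin range and all parameter values are placeholder assumptions , not the values used to produce the figures .
\begin{verbatim}
# sketch: sensitivity of the steady-state distribution p(n) to the hill
# exponent beta, via the time-averaged correlation-function method (no
# time-step pre-averaging).  propensities, parameters and the observable
# n = u - v are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
alpha1, alpha2, beta, gamma = 50.0, 50.0, 2.5, 2.5   # placeholder parameters
M = 100_000            # history length: must exceed the switching time in steps
nsteps = 1_000_000
stoich = [(+1, 0), (0, +1), (-1, 0), (0, -1)]        # U+, V+, U-, V-

def propensities(u, v):
    a1 = alpha1 / (1.0 + v**beta)     # reaction 1: production of U (contains beta)
    a2 = alpha2 / (1.0 + u**gamma)    # production of V
    return np.array([a1, a2, float(u), float(v)])

def da1_dbeta(v):                     # derivative of a1 with respect to beta
    if v == 0:
        return 0.0
    return -alpha1 * v**beta * np.log(v) / (1.0 + v**beta) ** 2

u, v, w = 10, 10, 0.0
hist_w = np.zeros(M)                  # ring buffer of past weight values
nmax = 60
sum_f  = np.zeros(2 * nmax + 1)       # time integrals of the indicator f_n
sum_fw = np.zeros(2 * nmax + 1)       # time integrals of f_n * (w - w_past)
sum_dw = total_t = 0.0

for step in range(nsteps):
    a = propensities(u, v)
    atot = a.sum()
    tau = rng.exponential(1.0 / atot)            # sampled time step
    mu = rng.choice(4, p=a / atot)               # next reaction

    if step >= M:                                # skip the warm-up window
        w_past = hist_w[step % M]                # weight value M steps ago
        idx = int(np.clip(u - v, -nmax, nmax)) + nmax
        sum_f[idx]  += tau
        sum_fw[idx] += tau * (w - w_past)
        sum_dw      += tau * (w - w_past)
        total_t     += tau
    hist_w[step % M] = w                         # store w before updating it

    dw = -tau * da1_dbeta(v)                     # term present at every step
    if mu == 0:
        dw += da1_dbeta(v) / a[0]                # d ln a1/d beta, only if reaction 1 fires
    w += dw
    u, v = u + stoich[mu][0], v + stoich[mu][1]

p_n = sum_f / total_t                            # steady-state histogram p(n)
dp_dbeta = sum_fw / total_t - p_n * (sum_dw / total_t)
\end{verbatim}
the essential bookkeeping is the ring buffer of past weight values : the estimator correlates the indicator function of the current state with the weight accumulated over the last m steps , and m must be large enough that this window exceeds the typical switching time of the switch .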
for computing time - independent steady - state parameter sensitivities ,we show that the sensitivity coefficient can be obtained as the long - time limit of a time correlation function , which can be computed either across multiple simulations ( ensemble - averaged correlation function method ) , or as a time average in a single simulation run ( time - averaged correlation function method ) .we further show that time step pre - averaging removes the need to choose a new time step at each simulation step , significantly improving computational efficiency . in either the time - dependent or the time - independent case ,it is a trivial matter to compute multiple sensitivity coefficients ( e.g. with respect to different parameters ) at the same time one simply tracks each of the corresponding weight functions simultaneously . in deterministic models ,parameter sensitivity coefficients can be computed by simultaneous integration of a set of _ adjunct _ odes , alongside the set of odes describing the model ( see eq . ) .we consider the trajectory reweighting approach described here to be the exact stochastic analogue of the adjunct ode method ; the integration of the adjunct odes alongside the original odes is directly analogous to the procedure of generating a trajectory weight alongside the normal trajectory in a kinetic monte - carlo scheme . indeed, one can derive an _adjunct chemical master equation _ by taking the derivative of the chemical master equation with respect to the parameter of interest ; it turns out that the trajectory reweighting scheme is essentially a stochastic solution method for the adjunct master equation . in section [ sec : stochf ] , we demonstrated the use of trajectory reweighting to compute parameter sensitivities , and hence the differential gain , for a model of a stochastic signaling network .we believe that this approach has widespread potential application to signaling pathways , because it can be implemented for any existing model without any modifications to the underlying kinetic monte - carlo simulation code . as long as a stochastic input signal is generated by a process , one can use the ghost particle trick to compute the sensitivity of any quantity of interest to the input signal ( controlled by varying the rate of the signal production reaction ) by modifying the production step to , computing the weight function from ( see eq . ) and computing the appropriate correlation function for its covariance with the system property of interest . as proof - of - principle we calculated the differential gain for the model in section [ sec : stochf ] ( see fig .[ fig : stochf ] ) , using copasi to generate the simulation data , and a standard spreadsheet package to compute the correlation function . in section [ sec : gardsw ], we used the methodology to compute the sensitivity of the probability distribution function for a bistable genetic switch , to the degree of cooperativity ( hill exponent ) of binding of one of its transcription factors .this example demonstrates that trajectory reweighting is not a panacea for all problems .the bistable genetic switch has a long relaxation time , which requires the correlation function of the weight to the computed over long times , with a corresponding need for large sample sizes to obtain good statistical sampling . 
while trajectory reweighting works for this example , preliminary attempts to compute the parameter dependence of the switching rate show that finite differencing may be more efficient . in fact plyasunov and arkin already discuss in which cases it may be more efficient to use finite differencing . because of their long relaxation times , genetic switches are notoriously difficult to study in stochastic simulations . a plethora of sophisticated schemes have been developed to address this problem , some of which could perhaps be extended to incorporate trajectory reweighting .

the present study considers how to compute parameter sensitivity coefficients , i.e. first derivatives of system properties with respect to the parameters . the same approach can , however , easily be used to compute higher derivatives , such as the hessian matrix , as discussed in supplementary material section [ sec : rat ] . this raises the possibility of combining the present methods with gradient - based search algorithms , to make a sophisticated _ parameter estimation _ algorithm for stochastic modeling . this would offer a novel approach to a major class of problems in systems biology .

to summarise , we believe the trajectory reweighting schemes presented here are an important and useful addition to the stochastic simulation toolbox . further research should address in detail their performance with respect to existing methods , their application to challenging models such as those with long relaxation times , and their potential for use in more sophisticated parameter search algorithms .

the authors thank mustafa khammash for detailed advice about the fsp method and assistance with its implementation . rja was supported by a royal society university research fellowship . the collaboration leading to this work was facilitated by the stomp research network under bbsrc grant bb / f00379x/1 and by the e - science institute under theme 14 : `` modelling and microbiology '' .

[ footnote : for this model we find that it does not make any detectable difference whether is computed at fixed ( as here ) or fixed . this is almost certainly not generally true , and in the present case is likely connected to the fact that the noise in s is uncorrelated with the noise in p . ]

here , we present a convenient way to compute the derivatives of average quantities with respect to the parameters of the model that are required to arrive at eqs . and in the main text . we also show that this method generalizes easily to higher derivatives . noting that in the perturbed system the parameter has been changed to , we use eq . in the main text to write the average of the function in the perturbed system as where the function has the property that where . we then have taking the limit ( for an infinitesimal perturbation ) , and noting thereby that and , yields eqs .
and in the main text .taking this procedure further allows the computation of higher derivatives ; one can show for instance that the hessian is { } \hspace{10em } - { \langle f\rangle}{\langle w_{k_\alpha}w_{k_\beta}\rangle } - { \langle f\rangle}{\langle w_{k_\alpha k_\beta}\rangle } + 2{\langle f\rangle}{\langle w_{k_\alpha}\rangle}{\langle w_{k_\beta}\rangle}\ , .\end{array } \label{eq : fab}\ ] ] where eq . is potentially useful for gradient search algorithms .this expression is likely to simplify in many cases for instance we expect that often vanishes for .one might also use the fact that , but it may improve the statistical sampling to retain these terms ( see discussion in main text ) .in this section we describe some exact results that can be obtained for models with linear propensity functions , in particular for the correlation functions defined in eqs . and in the main text .the analysis draws heavily on established literature results ( which we summarize below ) .more details and links to earlier literature can be found in the appendix to supplementary ref . . to fix notation ,let us suppose that the -th propensity function depends linearly on the copy numbers , namely where and are constants which we assume to be proportional to the rate consant .our aim is to compute the sensitivity coefficients .it is well known that for linear propensity functions the moment equations close successively .thus , the mean copy numbers obey where is the stoichiometry matrix ( describing the change in the copy number of the -th species due to the firing of the -th reaction ) , and .note that is usually asymmetric .for the second moments , the variance - covariance matrix , where , obeys where note that is symmetric . finally the time - ordered two - point correlation functions with obey a regression theorem this concludes our survey of the established literature results . in the main text.[fig :dwsq ] ] we now employ the ghost particle trick of section [ sec : tw ] in the main text , and suppose that reaction now creates a noninteracting species , in addition to its usual products .following eq . , the mean copy number obeys for the second moments , eq .becomes where .the second term in this can be rewritten as the stoichiometry matrix entry for consists of a ` ' for the -th reaction , and zero elsewhere , so that eq .becomes .finally we have by similar argumentation we also obtain where .armed with these results , let us now turn to the problem of computing the weight function .we write the continuous - time analogue of eq . in the main text : ( note that by substituting eq . into eq .we recover our previous observation that ) . again writing , it follows from eq .that thus , now noting explicitly the time - dependence of the various terms , we obtain exploiting the linearity of the propensity functions , the regression theorem implies hence eliminating the time integrals between this and eq . , we get eliminating between this and eq . gives finally this is an ode which give the evolution of in terms of known quantities .it can be compared with the adjunct ode that is obtained by differentiating eq . with respect to .the two odes are identical and share the same initial conditions . for this specific case , this is a direct proof of the general result in the main text , namely that .whilst this is interesting , it is not quite what we are after , which is a theory for the correlation functions defined in the main text . 
to find this we first note a generalisation of the regression theorem is subtracting this from eq .generates a set of coupled odes for the correlation functions , which should be solved with the initial conditions at . in steady statethese odes are the initial conditions are .this is the key result of this section , as in principle it allows for explicit calculation of the correlation functions . comparing to the adjunct ode obtained by differentiating eq . with respect to , we see that for this case we also have a direct proof of the general result claimed in the main text , that as . for the constitutive gene expression model in section[ sec : santmac ] in the main text , we solved eq . to obtain the results given in eq . in the main text .to complete the general discussion here , let us derive an expression for the variance of . from the definition in eq .we have thus the last two terms in this cancel , on account of the regression theorem .further cancellations occur when eq . for inserted , giving finally . since , and and uncorrelated , it follows that .integrating and inserting in this last expression gives .as a particular case , in steady state , .thus we do indeed see that in steady state has a zero mean and a variance that grows linearly in time , justifying our claim that it behaves essentially like a random walk . some results confirming this analysisare shown in fig .[ fig : dwsq ] .in the time step pre - averaging approach , we do not select time steps as part of our kinetic monte carlo algorithm , but instead use the state - dependent average time step in our expression for the time average of quantity , as in eq . in the main text .for the purposes of this section , we define a new notation : in which the sum is over the values of the system function multiplied by the ( state - dependent ) mean time step , computed at each step along a kinetic monte - carlo trajectory of length .note that can be computed using an algorithm that does not keep track of time but only of the choice of reaction channel .we can then rewrite eq . in the main text as when using time step pre - averaging , the correlation function eq .in the main text must be modified because the relative probability of generating a given sequence of states ( eq . in the main text )takes a different form when the algorithm does not keep track of time , and because the average time step in eq .itself usually depends on the parameter in question . in a kinetic monte carlo scheme in which the next reaction is selected as normal , but time is not tracked ,the probability of generating a given trajectory is proportional to ( i.e. the part of eq . in the main text concerning the time step distributionis discarded ) . in analogy to eq .in the main text , one can write the average for the perturbed problem in terms of averages over unperturbed trajectories : taking derivatives of eq . with respect to the parameter as described in section [ sec : rat ] above , it follows that where the fact that depends on leads to an extra term ( the first term ) in comparison to eq . in the main text .computing using eq . , it turns out that has the same form as before , but with replaced by , since the weight function in eq. behaves like a random walk , steady - state parameter sensitivities should be computed using the correlation function trick ( as in section [ sec : ss ] in the main text ) . from eq .we have : where with given by eq . in the main text , using the present eq . 
to generate , the parameter sensitivity coefficient itself is given by differentiating eq ., the quantity in the second term is given by setting in eqs . and . when using time step pre - averaging in combination with the time - averaged correlation function method , one computes the parameter sensitivity coefficient using eq . rather than simply taking the limit of the correlation function as note that while eq .looks formidable , its actual computation is fairly straightforward . to obtain both and ,one computes trajectory averages of the set of quantities defined by , together with and .these averages are calculated by summing the respective quantities along the trajectory and dividing by the number of steps .the master equation describes the evolution of the probability that a system is in the state at time . for the sake of compactnesswe will adopt the notation and for the states and respectively .the master equation is \label{eq : master}\ ] ] where is the transition rate from to , given by the finite state projection ( fsp ) algorithm is a numerical solution scheme for the master equation based on the idea of truncating the state space . for full details of the original fsp algorithmwe refer to the work of munksy and khammash . herewe outline the basic principles of the scheme and the small changes needed to adapt it to the computation of steady - state sensitivity coefficients .the starting point is to note that eq .is a _ linear _ode for , and may be written in the matrix form where is an infinite - dimensional sparse matrix . to make this into a tractable numerical proposition , the fsp algorithm truncates the state space to a finite size .the truncation is chosen so as to contain almost all of the probability under the conditions of interest .for the problems encountered here , a ( hyper-)rectangular truncation scheme works , , for which where .the question then is how to handle the states _ not _ included in the truncation scheme . in the original fsp algorithmthe extra states are lumped together into a single meta - state .all the transitions leaving the truncated state space are connected to this new meta - state , and all the transitions entering the truncated state space are discarded . with this approximation a sparse matrix , and one can use standard numerical methods to exponentiate the matrix and advance the probability distribution , _ i.e._ .the advantage of introducing the meta - state is that munsky and khammash can prove some sophisticated truncation theorems which provide a certificate of accuracy for the scheme . for the present problem we are interested in the steady state probability distribution .however the meta - state is an absorbing state , which frustrates the direct computation of . to avoid this , we discard _ all _ transitions which leave or enter the truncated state space whilst , obviously , retaining all the transitions contained entirely within the truncated state spacethe meta - state is then no longer needed and becomes a sparse matrix .the steady state distribution is found by solving , in other words is the right - eigenvector of belonging to eigenvalue zero . that such an eigenvector exists is a textbook argument : conservation of probability , , implies and hence where is a row - vector with entries all equal to unity . 
sincetherefore is a _left_-eigenvector of with eigenvalue zero , it follows under mild and non - restrictive conditions that there is a corresponding _ right_-eigenvector of with the same eigenvalue .this is the desired steady state probability distribution .well - established numerical methods exist to obtain the eigenvectors of a sparse matrix . for the present problems we have used the functionality provided in mathworks matlab . for an open - source solution , we have also had good success with the octave interface to arpack which implements an implicitly restarted arnoldi method . from a practical point of view , we find we are limited to truncated state spaces of size for matlab , and somewhat smaller for the octave interface to arpack .this effectively limits consideration to problems involving at most two state variables ( ) and motivates the choice of examples in the main text .once is found we can calculate .sensitivity coefficients like are then found by solving the fsp problem at and and using eq . in the main text , with typically being a few percent of .note that although the master equation describes a stochastic process , it is itself a _ deterministic _ ode .hence this method of computating sensitivity coefficients by finite differencing is appropriate . in the absence of truncation theorems, convergence is verified empirically .
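as a rough illustration of the truncation scheme , the sketch below ( ours ) builds the truncated rate matrix for a model with two copy - number variables , discards all transitions that leave the box , and computes the steady state . instead of calling an arnoldi eigensolver as described above , it pins the null right - eigenvector by replacing one row of the matrix with the normalisation condition and solving the resulting sparse linear system , a common alternative for small truncations . the example propensities ( a gardner - type switch ) , the box size and the finite - difference step are placeholder choices .
\begin{verbatim}
# sketch: finite state projection (fsp) steady state on a rectangular
# truncation, plus a finite-difference sensitivity.  the example propensities,
# the truncation size and the linear-solve trick are our choices.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fsp_steady_state(propensities, stoich, nmax):
    """steady-state distribution on the box 0..nmax (for each of two species)."""
    N = nmax + 1
    idx = lambda u, v: u * N + v                       # flatten (u, v) -> index
    rows, cols, vals = [], [], []
    for u in range(N):
        for v in range(N):
            i = idx(u, v)
            for a, (du, dv) in zip(propensities(u, v), stoich):
                u2, v2 = u + du, v + dv
                if 0 <= u2 <= nmax and 0 <= v2 <= nmax:  # keep in-box transitions only
                    rows += [idx(u2, v2), i]             # gain ...
                    cols += [i, i]                       # ... and loss terms
                    vals += [a, -a]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(N * N, N * N)).tolil()
    A[-1, :] = np.ones(N * N)                # replace last equation by sum(p) = 1
    b = np.zeros(N * N); b[-1] = 1.0
    return spla.spsolve(A.tocsc(), b).reshape(N, N)

def switch_propensities(alpha1, alpha2, beta, gamma):
    return lambda u, v: [alpha1 / (1.0 + v**beta),     # production of U
                         alpha2 / (1.0 + u**gamma),    # production of V
                         u, v]                          # unit-rate degradation

stoich = [(+1, 0), (0, +1), (-1, 0), (0, -1)]
beta, db = 2.5, 0.02
p0 = fsp_steady_state(switch_propensities(50, 50, beta,      2.5), stoich, 80)
p1 = fsp_steady_state(switch_propensities(50, 50, beta + db, 2.5), stoich, 80)
dp_dbeta = (p1 - p0) / db                    # finite-difference sensitivity
\end{verbatim}
the sensitivity with respect to a rate constant is obtained in exactly the same way by perturbing that constant inside the propensity functions .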
parameter sensitivity analysis is a powerful tool in the building and analysis of biochemical network models . for stochastic simulations , parameter sensitivity analysis can be computationally expensive , requiring multiple simulations for perturbed values of the parameters . here , we use trajectory reweighting to derive a method for computing sensitivity coefficients in stochastic simulations without explicitly perturbing the parameter values , avoiding the need for repeated simulations . the method allows the simultaneous computation of multiple sensitivity coefficients . our approach recovers results originally obtained by application of the girsanov measure transform in the general theory of stochastic processes [ a. plyasunov and a. p. arkin , j. comp . phys . * 221 * , 724 ( 2007 ) ] . we build on these results to show how the method can be used to compute steady - state sensitivity coefficients from a single simulation run , and we present various efficiency improvements . for models of biochemical signaling networks the method has a particularly simple implementation . we demonstrate its application to a signaling network showing stochastic focussing and to a bistable genetic switch , and present exact results for models with linear propensity functions .
the inverse problem consisting of mapping given noisy time - series to compatible reaction networks is of importance when the possible biological mechanisms underlying the time - series are of interest .reaction networks compatible with given noisy time - series may be induced from the deterministic kinetic ordinary - differential equations ( odes ) which are compatible with the time - series . however , in order to match suitable deterministic kinetic equations with given stochastic time - series , it is important to determine the type of the deterministic stable invariant sets which are ` hidden ' in the time - series .this may be a challenging task , especially when cycles ( oscillations ) are observed in the time - series .the observed cycles can be classified as mixed ( also known at the stochastic level as _ noisy deterministic cycles _ ) , which are present in both the deterministic and stochastic models , or as purely stochastic ( also known as _ quasi - cycles _ , or noise - induced oscillations ) , present only in the stochastic model .noisy deterministic cycles may arise directly from the autonomous kinetic odes , or via the time - periodic terms present in the nonautonomous kinetic odes .quasi - cycles may arise from the intrinsic or extrinsic noise , and have been shown to exist near deterministic stable foci , and even near deterministic stable nodes . for two - species reaction systems, quasi - cycles can be further classified into those that are unconditionally noise - dependent ( but dependent on the reaction rate coefficients ) , and those that are conditionally noise - dependent .thus , a cycle detected in a noisy time - series may at the deterministic level generally correspond to a stable limit cycle , a stable focus , or a stable node . 
in order to detect and classify cycles in noisy time - series , several statistical methodshave been suggested .for example , in , analysis of the covariance as a function of the time - delay , spectral analysis ( the fourier transform of the covariance function ) , and analysis of the shape of the stationary probability mass function , have been suggested , first two of which rely on how long the stochastic state spends near the suspected cycle , which can be a limitation if stochastic switching is present .let us note that reaction networks of the lotka - volterra type are used as test models in , and that conditionally noise - dependent quasi - cycles , which can arise near a stable node , and which can induce oscillations in only a subset of species , have not been discussed .in addition to the aforementioned statistical methods developed for analysing noisy time - series , analytical methods for locally studying the underlying stochastic processes near the deterministic critical points and limit cycles , for a fixed reaction network , have also been developed .statistical and analytical methods for studying cycles in stochastic reaction kinetics have been often focused on deterministically monostable systems which undergo a local bifurcation near a critical point , known as the supercritical hopf bifurcation .we suspect this is partially due to the simplicity of the bifurcation , and partially due to the fact that it is difficult to find two - species reaction systems , which are more amenable to mathematical analysis , undergoing more complicated bifurcations and displaying bistability involving stable limit cycles .nevertheless , more complicated bifurcations and structures in the phase space of the kinetic odes arising in biology can be found ( see e.g. ) , and it is , thus , of importance to test the available methods on simpler test models that display some of the complexities found in the applications . in this paper , we construct two reaction systems that are two - dimensional ( i.e. they only include two chemical species ) and induce cubic kinetic equations , first of which undergoes a global bifurcation known as a convex supercritical homoclinic bifurcation , and which displays bistability involving a critical point and a limit cycle ( which we call mixed bistability ) .the second system undergoes a local bifurcation known as a multiple limit cycle bifurcation , and displays bistability involving two limit cycles ( which we call bicyclicity ) . 
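as a simple illustration of the covariance and spectral criteria mentioned earlier in this section , the following sketch ( ours , not taken from the cited works ) estimates the autocovariance of a copy - number time series as a function of the time delay , together with its power spectrum . a slowly decaying , oscillating autocovariance , or equivalently a spectral peak at a nonzero frequency , is the usual signature of noisy deterministic cycles or quasi - cycles , while a monotonically decaying autocovariance is consistent with a stable node ; the sampling interval dt is assumed fixed .
\begin{verbatim}
# sketch: cycle detection in a noisy time series x sampled every dt time units,
# via the autocovariance as a function of time delay and its periodogram.
import numpy as np

def autocovariance(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])

def dominant_period(x, dt):
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(power[1:])          # skip the zero-frequency component
    return 1.0 / freqs[k], freqs, power
\end{verbatim}
as noted above , both criteria can be misleading when stochastic switching between coexisting attractors is frequent .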
aside from finding an application as test models for statistical inference and analysis in biology , to our knowledge , the constructed systems are also the first examples of two - dimensional reaction systems displaying the aforementioned types of bifurcations and bistabilities . the reaction network corresponding to the first system is given by
\begin{aligned}
& r_1 : \varnothing \xrightarrow{k_1} s_1 , & & r_7 : \varnothing \xrightarrow{k_7} s_2 , \\
& r_2 : s_1 \xrightarrow{k_2} 2 s_1 , & & r_8 : s_2 \xrightarrow{k_8} \varnothing , \\
& r_3 : 2 s_1 \xrightarrow{k_3} 3 s_1 , & & r_9 : s_1 + s_2 \xrightarrow{k_9} s_1 + 2 s_2 , \\
& r_4 : s_1 + s_2 \xrightarrow{k_4} s_2 , & & r_{10} : 2 s_2 \xrightarrow{k_{10}} 3 s_2 , \\
& r_5 : 2 s_1 + s_2 \xrightarrow{k_5} s_1 + s_2 , & & r_{11} : 3 s_2 \xrightarrow{k_{11}} 2 s_2 , \\
& r_6 : s_1 + 2 s_2 \xrightarrow{k_6} 2 s_1 + 2 s_2 ,
\end{aligned} \label{eq : homoclinic1net}
where the two species s_1 and s_2 react according to the eleven reactions r_1 , \dots , r_{11} under the mass - action kinetics , with the reaction rate coefficients denoted k_1 , \dots , k_{11} , and with \varnothing being the zero - species .
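for reference , a gillespie stochastic simulation of this network is sketched below ( our illustration ) . the rate constants in the sketch are placeholders rather than the values used for the figures , combinatorial and volume factors are absorbed into them for brevity , and in practice they should be chosen in line with the conditions ( [ eq : homoclinic1coefficients ] ) and ( [ eq : homoclinic1parameters ] ) discussed below .
\begin{verbatim}
# sketch: gillespie (ssa) simulation of the eleven-reaction network r1..r11.
# the rate constants are placeholders; combinatorial/volume factors are
# absorbed into them for brevity.
import numpy as np

k = np.ones(11)                                  # placeholder k1..k11

# change in (s1, s2) caused by reactions r1..r11
stoich = np.array([(1, 0), (1, 0), (1, 0), (-1, 0), (-1, 0), (1, 0),
                   (0, 1), (0, -1), (0, 1), (0, 1), (0, -1)])

def propensities(s1, s2):
    return k * np.array([
        1.0,                        # r1 : 0 -> s1
        s1,                         # r2 : s1 -> 2 s1
        s1 * (s1 - 1),              # r3 : 2 s1 -> 3 s1
        s1 * s2,                    # r4 : s1 + s2 -> s2
        s1 * (s1 - 1) * s2,         # r5 : 2 s1 + s2 -> s1 + s2
        s1 * s2 * (s2 - 1),         # r6 : s1 + 2 s2 -> 2 s1 + 2 s2
        1.0,                        # r7 : 0 -> s2
        s2,                         # r8 : s2 -> 0
        s1 * s2,                    # r9 : s1 + s2 -> s1 + 2 s2
        s2 * (s2 - 1),              # r10: 2 s2 -> 3 s2
        s2 * (s2 - 1) * (s2 - 2),   # r11: 3 s2 -> 2 s2
    ], dtype=float)

def ssa(s1, s2, t_end, max_steps=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    t, out = 0.0, [(0.0, s1, s2)]
    for _ in range(max_steps):
        a = propensities(s1, s2)
        atot = a.sum()
        if atot == 0.0 or t >= t_end:
            break
        t += rng.exponential(1.0 / atot)
        mu = rng.choice(len(a), p=a / atot)
        s1, s2 = s1 + stoich[mu, 0], s2 + stoich[mu, 1]
        out.append((t, s1, s2))
    return np.array(out)
\end{verbatim}
the deterministic counterpart is obtained by integrating the mass - action kinetic equations induced by the same propensities in the large - volume limit .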
a particular choice of the ( dimension - less ) reaction rate coefficients is given by equation ( [ eq : homoclinic1example ] ) , while more general conditions on these parameters are derived later as equations ( [ eq : homoclinic1coefficients ] ) and ( [ eq : homoclinic1parameters ] ) . the reaction network corresponding to the second system includes two species s_1 and s_2 which are subject to the following fourteen chemical reactions :
\begin{aligned}
& r_1 : \varnothing \xrightarrow{k_1} s_1 , & & r_8 : \varnothing \xrightarrow{k_8} s_2 , \\
& r_2 : s_1 \xrightarrow{k_2} \varnothing , & & r_9 : s_2 \xrightarrow{k_9} 2 s_2 , \\
& r_3 : 2 s_1 \xrightarrow{k_3} 3 s_1 , & & r_{10} : s_1 + s_2 \xrightarrow{k_{10}} s_1 , \\
& r_4 : s_1 + s_2 \xrightarrow{k_4} 2 s_1 + s_2 , & & r_{11} : 2 s_2 \xrightarrow{k_{11}} 3 s_2 , \\
& r_5 : 3 s_1 \xrightarrow{k_5} 4 s_1 , & & r_{12} : 2 s_1 + s_2 \xrightarrow{k_{12}} 2 s_1 + 2 s_2 , \\
& r_6 : 2 s_1 + s_2 \xrightarrow{k_6} s_1 + s_2 , & & r_{13} : s_1 + 2 s_2 \xrightarrow{k_{13}} s_1 + s_2 , \\
& r_7 : s_1 + 2 s_2 \xrightarrow{k_7} 2 s_2 , & & r_{14} : 3 s_2 \xrightarrow{k_{14}} 2 s_2 ,
\end{aligned} \label{eq : bicyclicxt2net}
where k_1 , \dots , k_{14} are the corresponding reaction rate coefficients . a particular choice of the ( dimension - less ) reaction coefficients is given by equation ( [ eq : bicyclicexample ] ) , while the general conditions on these parameters are given later as equations ( [ eq : bicyclicxt2coefficients ] ) and ( [ eq : bicyclicxt2parameters ] ) .

in figure [ fig : introduction ] , we display a representative noisy time - series generated using the gillespie stochastic algorithm : in figure [ fig : introduction](a ) for the one - dimensional cubic schlögl system , which deterministically displays two stable critical points ( bistationarity ) ; in figure [ fig : introduction](b ) for the reaction network ( [ eq : homoclinic1net ] ) with coefficients ( [ eq : homoclinic1example ] ) , which deterministically displays a stable critical point and a stable limit cycle ( mixed bistability ) ; and in figure [ fig : introduction](c ) for the reaction network ( [ eq : bicyclicxt2net ] ) with coefficients ( [ eq : bicyclicexample ] ) , which deterministically displays two stable limit cycles ( bicyclicity ) . several statistical challenges arise . for example , is it possible to infer that the upper stable set in figure [ fig : introduction](b ) is a deterministic critical point , while the lower a noisy limit cycle ? is it possible to detect one or both noisy limit cycles in figure [ fig : introduction](c ) ? the answer to the second question is complicated by the fact that
the two deterministic limit cycles in figure [ fig : introduction](c ) are relatively close to each other .

[ figure caption ( fig : introduction ) : ... with coefficients , and reaction network with coefficients , respectively . at the deterministic level , the phase planes of and are shown in figure , while deterministic and stochastic time - series in figures and . at the deterministic level , a critical point and a limit cycle are ` hidden ' in ( b ) , while two limit cycles are ` hidden ' in ( c ) . ]

the rest of the paper is organized as follows . in section [ sec : properties ] , we outline properties of the planar quadratic ode systems , focusing on multistability , cycles and cycle bifurcations . there are two reasons for focusing on the planar quadratic systems : firstly , the phase plane theory for such systems is complete , with a variety of concrete examples with interesting phase plane configurations . secondly , an arbitrary planar quadratic ode system can always be mapped to a kinetic one using only an affine transformation , a special property not shared with cubic ( nor even linear ) planar systems . this , together with the available nonlinear kinetic transformations which increase the polynomial degree of an ode system by one , implies that we may map a general planar quadratic system to an at most cubic planar kinetic system , which may still be biologically or chemically relevant . in section [ sec : constructions ] , we present the two planar cubic test models which induce reaction networks ( [ eq : homoclinic1net ] ) and ( [ eq : bicyclicxt2net ] ) , and which are constructed starting from suitable planar quadratic ode systems . we also briefly compare the deterministic and stochastic solutions of the two constructed systems .

let us consider the two - dimensional second - degree autonomous polynomial odes where , are the second - degree two - variable polynomial functions , and is the vector of the corresponding coefficients . we assume that and are relatively prime and at least one is of second - degree . we allow coefficients to be parameter - dependent , , with , . let us assume that system ( [ eq : polynomial ] ) satisfies two additional properties : 1 .
and are so - called kinetic functions ( for a rigorous definition see ) .the species concentrations and are uniformly bounded in time for in the nonnegative orthant , except possibly for initial conditions located on a finite number of one - dimensional subspaces of , where infinite - time blow - ups are allowed .the subset of equations ( [ eq : polynomial ] ) satisfying properties ( i)(ii ) are referred to as the _ deterministic kinetic equations _ bounded in , and denoted we now provide definitions and summarize a set of results regarding multistability , cycles and cycle bifurcations of ( [ eq : polynomial ] ) and ( [ eq : kinetic ] ) , which are referred to as the so - called exotic phenomena in the biological context . _multistability_. system ( [ eq : polynomial ] ) is said to display multistability if multiple stable invariant sets coexist in the phase plane , for a fixed .biologically , multistability corresponds to biological switches , which may be classified into reversible or irreversible , where the former play an important role in reversible biological processes ( e.g. metabolic pathways dynamics , and reversible differentiation ) , while the latter in irreversible biological processes ( e.g. developmental transitions , and apoptosis ) .multistability can be mathematically classified into_ pure multistability _ , involving stable invariant sets of only the same type ( either only stable critical points , or only stable cycles ) , and _mixed multistability _ , involving at least one stable critical point , and at least one stable cycle .pure multistability involving only critical points is called _ multistationarity _ , while we call pure multistability involving only cycles _ multicyclicity_. mixed bistability , and bicyclicity , can be further classified into concentric and nonconcentric .concentric mixed bistability ( resp .bicyclicity ) occurs when the stable limit cycle encloses the stable critical point ( resp .when the first stable limit cycle encloses the second stable limit cycle ) , while nonconcentric when this is not the case .we now prove that ( [ eq : polynomial ] ) can have at most three coexisting stable critical points , i.e. ( [ eq : polynomial ] ) can be at most _tristationary_. on the other hand , we conjecture that ( [ eq : kinetic ] ) can be maximally bistationary , and can not display nonconcentric mixed bistability .bistationarity has been shown to exist in ( [ eq : polynomial ] ) ( even in one - dimensional cubic case , e.g. the schlgl model , see the time - series shown in figure [ fig : introduction](a ) ) .[ lemma : tristationarity ] _ the maximum number of coexisting stable critical points in two - dimensional relatively prime second - degree polynomial ode systems , with fixed coefficients , is three . _let us assume system ( [ eq : polynomial ] ) has four , the maximum number , of real finite critical points . 
then, using an appropriate centroaffine ( linear ) transformation , system ( [ eq : polynomial ] ) can be mapped to which is topologically equivalent to ( [ eq : polynomial ] ) , with the critical points located at , , and , with , , , and the coefficients given by the trace and determinant of the jacobian matrix of ( [ eq : tristationary ] ) , denoted and , respectively , evaluated at the four critical points , , are given by : system ( [ eq : tristationary ] ) may have three stable critical points if and only if the quadrilateral , formed by the critical points , is nonconvex , and the only saddle critical point is the one located at the interior vertex of the quadrilateral .this is the case when , , , and , in which case and are nonsaddle critical points , while is a saddle . imposing also the conditions , , , ensuring that and are stable , a solution of the resulting system of algebraic inequalities is given by , , , , .let us note that if ( [ eq : tristationary ] ) is kinetic , then it can not have three stable critical points .more precisely , requiring , , and and in ( [ eq : tristationaryjacob ] ) , implies and , which further implies , so that is unstable . _limit cycles_. cycles of ( [ eq : polynomial ] ) are closed orbits in the phase plane , and they can be isolated ( limit cycles , and separatrix cycles ) or nonisolated ( a one - parameter continuous family of cycles ) .the separatrix cycles can be generally classified into the homoclinic cycles , heteroclinic cycles , and compound separatrix cycles , consisting of a finite union of separatrix cycles that are appropriately oriented .limit cycles correspond to biological clocks , which play an important role in fundamental biological processes , such as the cell cycle , the glycolytic cycle and circadian rhythms .the maximum number of stable limit cycles in ( [ eq : polynomial ] ) is two , i.e. ( [ eq : polynomial ] ) can be at most _ bicyclic_.this follows from the fact that the maximum number of limit cycles in ( [ eq : polynomial ] ) is four , in the unique configuration , a fact only recently proved in , solving the second part of hilbert s 16th problem for the quadratic case .it also follows from that ( [ eq : polynomial ] ) may display _ mixed tristability _ , involving one stable critical point , and two stable limit cycles .we conjecture that ( [ eq : kinetic ] ) has at most three limit cycles .let us note that it was conjectured in , partially based on , that ( [ eq : polynomial ] ) bounded in the whole can have at most two limit cycles ._ cycle bifurcations_. variations of coefficients of ( [ eq : polynomial ] ) may lead to changes in the topology of the phase plane ( e.g. 
a change may occur in the number of invariant sets or their stability , shape of their region of attraction or their relative position ) .the variation of in ( [ eq : kinetic ] ) may be interpreted as a variation of the reaction rate coefficients due to changes in the reactor ( environment ) parameters , such as the pressure or temperature .if the variation causes the system to become topologically nonequivalent , such a parameter is called a bifurcation parameter , and at the parameter value where the topological nonequivalence occurs , a bifurcation is said to take place .bifurcations in deterministic kinetic equations occur in applications .bifurcations of cycles in the phase plane of ( [ eq : polynomial ] ) can be classified into three categories : ( i ) the andronov - hopf bifurcation , where a cycle is created from a critical point of focus or center type , ( ii ) the separatrix cycle bifurcation , where a limit cycle is created from a ( compound ) separatrix cycle , and ( iii ) the multiple limit cycle bifurcation , where a limit cycle is created from a limit cycle of multiplicity greater than one .bifurcations ( i ) and ( iii ) are examples of local bifurcations , occurring in a neighbourhood of a critical point or a cycle , while bifurcations ( ii ) are examples of global bifurcations .let us note that the maximum multiplicity of a multiple focus of ( [ eq : polynomial ] ) is three , so that at most three local limit cycles can be created under appropriate perturbations .convex homoclinic bifurcations ( defined in e.g. ) , as well as saddle - saddle bifurcations , and the saddle - node bifurcations on an invariant cycle , can occur in ( [ eq : polynomial ] ) .however , concave homoclinic bifurcations , as well as double convex , and double concave homoclinic bifurcations , can not occur in ( [ eq : polynomial ] ) as a consequence of basic properties of planar quadratic odes .a necessary condition for the existence of a limit cycle in the phase plane of ( [ eq : kinetic ] ) is that or .this implies that the induced reaction network must contain at least one autocatalytic reaction with at least three products ( i.e. three copy - numbers on the right - hand side of a reaction ) .let us note that , for a fixed kinetic ode system , multistationarity at some parameter values , is neither necessary , nor sufficient , for cycles at some ( possibly other ) parameter values . in the literature ,system ( [ eq : kinetic ] ) has been shown to display the following cycle bifurcations : andronov - hopf bifurcations , saddle - node on invariant cycle , and multiple limit cycle bifurcations leading to mixed bistability .while reaction systems displaying double andronov - hopf bifurcation , a saddle - saddle heteroclinic bifucation , and displaying a multiple limit cycle bifurcation leading to concentric bicyclicity , have been constructed , the constructed systems are not bounded in .in this section , we construct two planar cubic ode systems displaying nonconcentric bistability .the first system displays a convex homoclinic bifurcation , and mixed bistability , and is obtained by modifying the system from using the results from appendix [ app : xfactorable ] . the second system displays a multiple limit cycle bifurcation , and bicyclicity . 
to construct the second system , we use an existing system of the form ( [ eq : polynomial ] ) , which forms a one - parameter family of uniformly rotated vector fields , and which displays bicyclicity and a multiple limit cycle bifurcation . we use kinetic transformations from to map this system , which is of the form ( [ eq : polynomial ] ) , to a kinetic one , which is of the form ( [ eq : kinetic ] ) , and we fine - tune the coefficients in such a way that the sizes of the stable limit cycles differ by maximally one order of magnitude ( a task that can pose challenges ) . as differences may be observed between the deterministic and stochastic solutions for parameters at which a deterministic bifurcation occurs , we briefly investigate the constructed models for such observations . let us note that an alternative static ( i.e. not dynamic ) approach for reaction system construction , using the chemical reaction network theory or kinetic logic , provides only conditions for stability of critical points , but no information about the phase plane structures , and is , thus , insufficient for construction of the systems presented in this paper .

[ figure caption ( fig : phaseplanes ) : before and after the homoclinic bifurcation . the stable node , saddle , and unstable focus are represented as the green , blue and red dots , respectively , the vector field as gray arrows , numerically approximated saddle manifolds as blue trajectories , and the purple curve in panel ( b ) is the stable limit cycle . the parameters appearing in , and satisfying , are fixed to , , , the reactor volume is set to , and the bifurcation parameter is as shown in the panels . ]

consider the following deterministic kinetic equations with the coefficients given by where denotes the absolute value , and with parameters , , , , and satisfying . the canonical reaction network induced by system ( [ eq : homoclinic1 ] ) is given by ( [ eq : homoclinic1net ] ) . system ( [ eq : homoclinic1 ] ) is obtained from system ( * ? ? ? * eq .
(32 ) ) , which is known to display a mixed bistability and a convex supercritical homoclinic bifurcation when , .we have modified ( * ? ? ?* eq . ( 32 ) ) by adding to its right - hand side the -term from definition [ def : xft ] , thus preventing the long - term dynamics to be trapped on the phase plane axes .it can be shown , using theorem [ theorem : xfact2d ] , that choosing a sufficiently small in ( [ eq : homoclinic1coefficients ] ) does not introduce additional critical points in first quadrant of the phase space of ( [ eq : homoclinic1 ] ) . in figures [ fig : phaseplanes](a ) and [ fig : phaseplanes](b ) , we show phase plane diagrams of ( [ eq : homoclinic1 ] ) before and after the bifurcation , respectively , where the critical points of the system are shown as the coloured dots ( the stable node , saddle , and unstable focus are shown as the green , blue and red dots , respectively ) , the blue curves are numerically approximated saddle manifolds ( which at , form a homoclinic loop ) , and the purple curve in figure [ fig : phaseplanes](b ) is the stable limit cycle that is created from the homoclinic loop .let us note that the parameter , appearing in ( [ eq : homoclinic1coefficients ] ) , controls the bifurcation , while the parameter controls the saddle - node separation . in figure[ fig : homoclinic ] , we show numerical solutions of the initial value problem for ( [ eq : homoclinic1 ] ) in red , with one initial condition in the region of attraction of the node , while the other near the unstable focus .the blue sample paths are generated by using the gillespie stochastic simulation algorithm on the induced reaction network ( [ eq : homoclinic1net ] ) , initiated near the unstable focus .more precisely , in figures [ fig : homoclinic](a ) and [ fig : homoclinic](c ) we show the dynamics before the deterministic bifurcation , when the node is the globally stable critical point for the deterministic model , while in figures [ fig : homoclinic](b ) and [ fig : homoclinic](d ) we show the dynamics after the bifurcation , when the deterministic model displays mixed bistability . on the other hand ,the stochastic model displays relatively frequent stochastic switching in figures [ fig : homoclinic](a ) and [ fig : homoclinic](b ) , when the saddle - node separation is relatively small .let us emphasize that the stochastic switching is observed even before the deterministic bifurcation . in figures [ fig : homoclinic](c ) and [ fig : homoclinic](d ) , when the saddle - node separation is relatively large , the stochastic switching is less common , and the stochastic system in the phase space is more likely located near the stable node .thus , in figures [ fig : homoclinic](c ) and [ fig : homoclinic](d ) , the stochastic system is less affected by the bifurcation than the deterministic system , and , in fact , behaves more like the deterministic system before the bifurcation . in ,an algorithm is presented which structurally modifies a given reaction network under the mass - action kinetics , in such a way that the deterministic dynamics is preserved , while the stochastic dynamics is modified in a state - dependent manner . 
applying the algorithm to the reaction network ( [ eq : homoclinic1net ] ) , we preserve the deterministic kinetic equations ( [ eq : homoclinic1 ] ) , while decreasing the chance of finding the stochastic state near the stable node in figures [ fig : homoclinic](c ) and [ fig : homoclinic](d ) , and restoring the stochastic switching .

[ figure caption ( fig : homoclinic ) : ... are shown in red , while representative sample paths generated by the gillespie stochastic simulation algorithm applied on the corresponding reaction network are shown in blue . ( a ) ( b ) the cases before and after the homoclinic bifurcation , respectively , for smaller values of , when the limit cycle and the stable node are closer together . ( c ) ( d ) the cases before and after the homoclinic bifurcation , respectively , for larger values of . one of the deterministic solutions is initiated in the region of attraction of the node , while the other near the focus . the parameters are fixed to , , the reactor volume is set to , with and as shown in the panels . ]

consider the following deterministic kinetic equations with coefficients given by
\begin{aligned}
& \ldots \sin(\theta)| , \\
k_{3} & = |a \mathcal{t}_2 \cos(\theta) - [ d \mathcal{t}_2 + b ( 2 \mathcal{t}_1 + x_1^{*} + 1 ) ] \sin(\theta)| , \\
k_{4} & = |a \mathcal{t}_1 \cos(\theta) - [ d ( \mathcal{t}_1 + 1 ) + 2 c \mathcal{t}_2 ] \sin(\theta)| , \\
k_{5} & = |b \sin(\theta)| , \\
k_{6} & = |- a \cos(\theta) + d \sin(\theta)| , \\
k_{7} & = |c \sin(\theta)| ,
\end{aligned} \label{eq : bicyclicxt2coefficients}
and if , then , , and with parameters and satisfying
\begin{aligned}
\ldots \mathcal{t}_2 + b ( \mathcal{t}_1 + 1 ) ( \mathcal{t}_1 + x_1^{*} ) < 0 .
\end{aligned} \label{eq : bicyclicxt2parameters}
the canonical reaction network induced by system ( [ eq : bicyclicxt ] ) is given by ( [ eq : bicyclicxt2net ] ) . in this section , we show that systems ( [ eq : bicyclicxt ] ) and ( [ eq : bicyclic ] ) ( see below ) , the latter of which is known to display bicyclicity and a multiple limit cycle bifurcation , are topologically equivalent near the corresponding critical points , provided conditions ( [ eq : bicyclicxt2parameters ] ) are satisfied . in figures [ fig : phaseplanes](c ) and [ fig : phaseplanes](d ) , we show the phase plane diagram of ( [ eq : bicyclicxt ] ) for a particular choice of the parameters satisfying ( [ eq : bicyclicxt2parameters ] ) , and it can be seen that the system also displays bicyclicity and a multiple limit cycle bifurcation , with figures [ fig : phaseplanes](c ) and [ fig : phaseplanes](d ) showing the cases before and after the bifurcation , respectively . in figure [ fig : phaseplanes](c ) , the only stable invariant set is the limit cycle shown in red , while in figure [ fig : phaseplanes](d ) there are two additional limit cycles : a stable one , shown in purple , and an unstable one , shown in black . the black , purple , and red limit cycles are denoted in the rest of the paper by , and , respectively . at the bifurcation point , and intersect . in order to construct ( [ eq : bicyclicxt ] ) , let us consider the planar quadratic ode system given by where with [ lemma : bicyclicity ] the statement of the lemma follows from , and the theory of one - parameter families of uniformly rotated vector fields . in order to map the stable limit cycles of system ( [ eq : bicyclic ] ) into the first quadrant , and then map the resulting system to a kinetic one , having no boundary critical points , let us apply a translation transformation , , followed by a perturbed x - factorable transformation , as defined in definition [ def : xft ] , on system ( [ eq : bicyclic ] ) , which results in system ( [ eq : bicyclicxt ] ) with the coefficients ( [ eq : bicyclicxt2coefficients ] ) . [ lemma : bicyclicityxt ] _ consider the ode systems and , and assume conditions are satisfied .
then and are locally topologically equivalent in the neighborhood of the corresponding critical points. furthermore, for sufficiently small , system has exactly one additional critical point in , which is a saddle located in the neighbourhood of ._ consider the critical point of system ([eq:bicyclic]), which corresponds to the critical point of system ([eq:bicyclicxt]) when . the jacobian matrices of ([eq:bicyclic]) and ([eq:bicyclicxt]) with , evaluated at and , are respectively given by . condition (ii) of theorem 3.3 in is satisfied, so that the stability of the critical point is preserved under the -factorable transformation, but condition (iii) is not satisfied. in order for to remain a focus under the -factorable transformation, the discriminant of must be negative: let us set in ([eq:templhs]), leading to . conditions ([eq:templhs]) and ([eq:templhs2]) are equivalent when , since the sign of the function on the lhs of ([eq:templhs]) is a continuous function of . from conditions ([eq:bicyclicxt2parameters]) it follows that , , and , so that ([eq:templhs2]) is satisfied. similar arguments show that the second critical point of ([eq:bicyclic]), located at , is mapped to an unstable focus of ([eq:bicyclicxt]), if , and if is bounded as given in ([eq:bicyclicxt2parameters]). consider ([eq:bicyclicxt]) with . the boundary critical points are located at , , and , with conditions ([eq:bicyclicxt2parameters]) imply that the critical point satisfies , and \mathcal{t}_2 - b (1 + \mathcal{t}_1)(\mathcal{t}_1 + x_1^{*}) > 0 , \nonumber\end{aligned}\] when . when , it then follows from condition (iv) of theorem 3.3 in that the critical point is a saddle, and from theorem [theorem:xfact2d], condition ([eq:boundarycondition2]), that it is mapped outside of when . similar arguments show that, assuming conditions ([eq:bicyclicxt2parameters]) are true, is a saddle that is mapped to when , and that critical points are real, , and that is a saddle that is mapped outside when . finally, if conditions ([eq:bicyclicxt2parameters]) are satisfied, so are conditions ([eq:bicyclicconditions]). we now consider the kinetic odes ([eq:bicyclicxt]) and the induced reaction network ([fig:nonconcentricbicyclic]) for a particular set of coefficients ([eq:bicyclicxt2coefficients]). we also rescale the time according to , i.e. we multiply all the coefficients appearing in ([eq:bicyclicxt]) by . in figures [fig:nonconcentricbicyclic](a) and [fig:nonconcentricbicyclic](b) we show the numerically approximated solutions of the initial value problem for ([eq:bicyclicxt]) before and after the bifurcation, respectively. in figure [fig:nonconcentricbicyclic](a), the solution is initiated near the unstable focus outside the limit cycle, and it can be seen that the solution spends some time near the unstable focus, followed by an excursion that leads it to the stable limit cycle, where it then stays forever. in figure [fig:nonconcentricbicyclic](b), the solutions tend to the limit cycle or , depending on the initial condition. let us note that the critical value at which the limit cycles and intersect, at the deterministic level, is numerically found to be .
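The deterministic trajectories discussed above are obtained by numerically integrating the kinetic ODEs from different initial conditions. The following sketch shows one way to do this with a standard ODE integrator; the Brusselator-type vector field and the rate constants are placeholders standing in for the right-hand side of ([eq:bicyclicxt]), whose actual coefficients are given by ([eq:bicyclicxt2coefficients]).

```python
import numpy as np
from scipy.integrate import solve_ivp

def kinetic_rhs(t, x, k):
    """Placeholder planar kinetic vector field (a Brusselator-type system),
    standing in for the right-hand side of the kinetic ODEs of the paper."""
    x1, x2 = x
    dx1 = k[0] - (k[1] + 1.0) * x1 + k[2] * x1 * x1 * x2
    dx2 = k[1] * x1 - k[2] * x1 * x1 * x2
    return [dx1, dx2]

k = [1.0, 3.0, 1.0]                      # placeholder rate constants
for x0 in ([1.0, 1.0], [0.2, 4.0]):      # two initial conditions, as in the figures
    sol = solve_ivp(kinetic_rhs, (0.0, 50.0), x0, args=(k,),
                    rtol=1e-8, atol=1e-10, dense_output=True)
    # Plotting sol.y[0] against sol.y[1] shows whether the trajectory settles
    # onto a limit cycle or is attracted to a critical point.
```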
in figures [fig:nonconcentricbicyclic](c) and [fig:nonconcentricbicyclic](d) we show representative sample paths generated by applying the gillespie stochastic simulation algorithm on the reaction network ([eq:bicyclicxt2net]), before and after the bifurcation, respectively. one can notice that the stochastic dynamics does not appear to be significantly influenced by the bifurcation, as opposed to the deterministic dynamics. in figures [fig:nonconcentricbicyclic](c) and [fig:nonconcentricbicyclic](d), one can notice pulses similar to those in figure [fig:nonconcentricbicyclic](a), which are now induced by the intrinsic noise present in the system.

[figure [fig:nonconcentricbicyclic] caption: numerical solutions before and after the bifurcation, where in (b) the trajectory initiated near the stable limit cycle is shown in purple, while the one initiated near is shown in red. sample paths generated by the gillespie stochastic simulation algorithm applied to the induced reaction network before and after the bifurcation. the parameters appearing in are fixed to the values indicated, with the reactor volume and as indicated in the plots; the coefficients are multiplied by a constant factor (time-rescaling).]

* acknowledgments: * the authors would like to thank the isaac newton institute for mathematical sciences, cambridge, for support and hospitality during the programme ``stochastic dynamical systems in biology: numerical methods and applications'' where work on this paper was undertaken. this work was supported by epsrc grant no. ep/k032208/1. this work was partially supported by a grant from the simons foundation. tom vejchodský would like to acknowledge the institutional support rvo 67985840. radek erban would also like to thank the royal society for a university research fellowship.

[def:xft] consider applying an -factorable transformation, as defined in , on ([eq:polynomial]), and then adding to the resulting right-hand side a zero-degree term, with and
vector , resulting in then, mapping to , is called a _perturbed -factorable transformation_ if . if , the transformation reduces to an (unperturbed) -factorable transformation, , defined in . [theorem:xfact2d] _ consider the ode system with positive critical points . let us assume that is hyperbolic, and is not the degenerate case between a node and a focus, i.e. it satisfies the condition as well as conditions _(ii)_ and _(iii)_ of _theorem_ in . then positivity, stability and type of the critical point are invariant under the perturbed -factorable transformations, for sufficiently small . assume does not have boundary critical points . consider the two-dimensional ode system with , and with boundary critical points denoted , , . assume that for and that for some then, the critical point of the two-dimensional ode system with becomes the critical point of system for sufficiently small ._ the critical points of ([eqn:xft]) are solutions of the following regularly perturbed algebraic equation let us assume can be written as the power series where are the critical points of ([eqn:xft]) with . substituting the power series ([eq:expansion]) into ([eq:criticalpoints]), and using the taylor series theorem on , so that , as well as that , and equating terms of equal powers in , the following system of polynomial equations is obtained: _order equation_. the positive critical points satisfy . since has no boundary critical points by assumption, critical points with , , , , satisfy , . _order equation_. vector satisfies system which can be solved provided is a hyperbolic critical point . vector is given by from which conditions ([eq:boundarycondition1]) and ([eq:boundarycondition2]) follow .

plesa, t., vejchodský, t., and erban, r., 2015. chemical reaction systems with a homoclinic bifurcation: an inverse problem. submitted to _journal of mathematical chemistry_, available as http://arxiv.org/abs/1510.07205.

erban, r., chapman, s. j., kevrekidis, i. and vejchodský, t., 2009. analysis of a stochastic chemical system close to a sniper bifurcation of its mean-field model. _siam journal on applied mathematics_, *70*(3): 984-1016.

guidi, g. m., goldbeter, a., 1997. bistability without hysteresis in chemical reaction systems: a theoretical analysis of irreversible transitions between multiple steady states. _journal of physical chemistry_, *101*: 9367-9376.

guidi, g. m., goldbeter, a., 1998. bistability without hysteresis in chemical reaction systems: the case of nonconnected branches of coexisting steady states. _journal of physical chemistry_, *102*: 7813-7820.

vilar, j. m. g., kueh, h. y., barkai, n. and leibler, s., 2002. mechanisms of noise-resistance in genetic oscillators. _proceedings of the national academy of sciences of the united states of america_, *99*(9): 5988-5992.

dublanche, y., michalodimitrakis, k., kummerer, n., foglierini, m. and serrano, l., 2006. noise in transcription negative feedback loops: simulation and experimental analysis. _molecular systems biology_, *2*(41): e1-e12.
theoretical results regarding two - dimensional ordinary - differential equations ( odes ) with second - degree polynomial right - hand sides are summarized , with a focus on multistability , limit cycles and limit cycle bifurcations . the results are then used for construction of two reaction systems , which are at the deterministic level described by two - dimensional third - degree kinetic odes . the first system displays a homoclinic bifurcation , and a coexistence of a stable critical point and a stable limit cycle in the phase plane . the second system displays a multiple limit cycle bifurcation , and a coexistence of two stable limit cycles . the deterministic solutions ( obtained by solving the kinetic odes ) and stochastic solutions ( obtained by generating noisy time - series using the gillespie algorithm ) of the constructed systems are compared , and the observed differences highlighted . the constructed systems are proposed as test problems for statistical methods , which are designed to detect and classify properties of given noisy time - series arising from biological applications .
in order to understand the organization and function of the human brain , it is essential to study its fiber architecture , i.e. the spatial organization of the short- and long - range nerve fibers .mapping this highly complex fiber architecture requires specific imaging techniques that resolve the orientations of the fibers not only on a high spatial resolution but also on a large field of view of up to several centimeters .the microscopy technique _3d - polarized light imaging ( 3d - pli ) _ introduced by axer et al . meets these specific requirements .it reveals the three - dimensional architecture of nerve fibers in sections of whole post - mortem brains with a resolution of a few micrometers .the orientations of the fibers are obtained by measuring the birefringence ( axes of optical anisotropy ) of unstained histological brain sections with a polarimeter .the measurement provides strong contrasts between different fiber structures and allows a label - free microscopy and reconstruction of densely packed myelinated fibers in human brains and those of other species .birefringence of brain tissue is mainly caused by the regular arrangement of lipids and proteins in the myelin sheaths .the optical anisotropy that causes birefringence ( anisotropy of refraction ) also leads to diattenuation ( anisotropy of attenuation ) . in diattenuating materials , the intensity of the transmitted light depends on the orientation of polarization of the incident light .if the diattenuation is solely caused by anisotropic absorption , it is typically called _ dichroism _ . in the literature , diattenuation and dichroismare sometimes used as synonyms . here , the term _ diattenuation _ is used to describe the overall anisotropic attenuation of light that is caused not only by absorption but also by scattering . as diattenuation leads to polarization - dependent attenuation of light, it might have an impact on the polarimetric measurement of 3d - pli and consequentially affect the derived nerve fiber orientations . in this study , we investigated the diattenuation of brain tissue and its impact on the measured 3d - pli signal for the first time .diattenuation as well as birefringence can be measured by conventional mller - matrix polarimetry or by polarization - sensitive optical coherence tomography ( ps - oct ) . while ps - oct uses the interference of the backscattered light to provide a depth profile of the sample , mller polarimetry measures the intensity of the transmitted light under a certain angle .often , incomplete mller polarimeters are used that measure only the linear birefringence and diattenuation of a sample . in the present study , a combined measurement of ( linear ) birefringence and diattenuationwas performed with an in - house developed polarimeter that analyzes the light transmitted through the sample .previous measurements that study the diattenuation of a sample were performed on non - biological phantoms ( polarizing filters , siemens star ) as well as on collagen , tendon , muscle , heart , skin , eye , and biopsy tissue of animals or humans .several studies investigated the diattenuation of the retinal nerve fiber layer ( rnfl ) which only contains unmyelinated nerve fibers . 
to our knowledge, the diattenuation of myelinated nerve fibers and the diattenuation of brain tissue have not been addressed before and would need to be quantified. the diattenuation of tissue reported in the above studies was much smaller than the birefringence of the investigated samples and mostly of secondary interest. as the diattenuation might influence the measured birefringence values, a couple of studies have been performed to estimate the error induced by diattenuation. for the 3d-pli measurement, the question arises to what extent diattenuation influences the outcome of the measurement and what the consequences are for the interpretation of the measured signal. in other studies, diattenuation has been used to quantify tissue properties (e.g. thickness, concentration of glucose) and to distinguish between healthy and pathological tissue (cancerous tissue, burned/injured tissue, tissue from eye diseases). hence, diattenuation might also provide interesting structural information about the brain tissue, and _diattenuation imaging (di)_ could be a useful extension to 3d-pli. the present study was therefore designed (a) to quantify the diattenuation of brain tissue, (b) to quantify the impact of diattenuation on the measured 3d-pli signal, and (c) to investigate whether the diattenuation signal contains useful information about the brain tissue structure. the study design is reflected in the structure of this paper (see [fig:outline]), which is composed of a numerical study ([sec:numerical_study]) and an experimental study ([sec:experimental_study_on_brain_tissue]). the numerical study was performed because the above literature suggests that the diattenuation signal is small and could also be caused by non-ideal optical components of the employed polarimeter. the numerical study estimates the impact of the non-ideal system parameters and the tissue diattenuation on the reconstructed fiber orientations and the measured diattenuation. in the experimental study, the determined error estimates were taken into account to quantify the diattenuation of brain tissue and its impact on 3d-pli. the experimental study was performed exemplarily on five sagittal rat brain sections. the numerical and experimental studies are presented as separate studies, each divided into methods, results, and discussion. the analytical model used for the analysis of these studies is developed in [sec:measurement_setups_analysis]. the model considers not only the birefringence but also the diattenuation of brain tissue as well as non-ideal system components. the non-ideal polarization properties of the polarimeter used for the numerical study and the polarization-independent inhomogeneities used for calibrating the experimental measurements were characterized in a preliminary study presented in [sec:characterization_lap]. in an overall discussion at the end of this paper ([sec:overall-discussion]), the results of the experimental study are compared to the predictions of the numerical study to validate the developed model. a list of all symbols and abbreviations used throughout this paper can be found in [sec:symbols]. this section introduces the physical principles and the mathematical notation used in this study. apart from birefringence and diattenuation, the mueller-stokes calculus is described, which will be used in the following to derive analytical expressions for the measured light intensities.
in optically anisotropic media, the refractive index depends on the direction of propagation and on the polarization state of the incident light. this anisotropic refraction, known as birefringence, can be caused by regular molecular structures, but also by orderly arranged units far larger than molecules. light that travels through a birefringent medium experiences a phase difference (_retardance_) between two orthogonal polarization components (ordinary and extraordinary wave with refractive indices and ), which changes the state of polarization of the light. for example, a quarter-wave retarder with transforms linearly polarized light into (right-/left-handed) circularly polarized light when its fast axis is oriented at an angle of (+/-) to the direction of polarization of the incident light. previous studies have shown that myelinated nerve fiber bundles exhibit uniaxial negative birefringence ( ) and that the optic axis (direction of optical anisotropy) is oriented in the direction of the fiber bundle. as in all biological tissues, the birefringence of the nerve fibers is assumed to be small compared to the refractive index of the fibers. in this case, the induced phase shift can be approximated as: where is the wavelength of the light, the thickness of the medium, the birefringence, and the out-of-plane angle of the optic axis (i.e. the inclination angle of the nerve fibers, cf. [fig:setups]h). diattenuation refers to anisotropic attenuation of light, which can be caused by absorption (dichroism) as well as by scattering. in diattenuating materials, the transmitted light intensity depends on the polarization state of the incident light: the transmitted light intensity is maximal for light polarized in a particular direction and minimal for light polarized in the corresponding orthogonal direction. the diattenuation is defined as: the average transmittance, i.e. the fraction of unpolarized light that is transmitted through a sample, is given by: with being the intensity of the incident light. optical elements with high diattenuation are used to create linearly polarized light. an ideal linear diattenuator (polarizer) fulfills , i.e. the intensity of unpolarized light is reduced by one half. as diattenuation and birefringence are usually caused by the same anisotropic structure, the principal axes of diattenuation are assumed to be coincident with the principal axes of birefringence. in this case, dichroism (anisotropic absorption) and birefringence (anisotropic refraction) can be described by the imaginary and real parts of a complex retardance. thus, diattenuation caused by dichroism (no scattering) is approximately proportional to . the mueller-stokes calculus allows a complete mathematical description of polarized light. it is also suitable for partially polarized and incoherent light. the polarization state of light is described by a stokes vector and the optical elements of the polarimetric setup are described by mueller matrices. stokes vectors: the stokes vector is defined in spherical coordinates as: where is the total intensity of the light beam, ] and ] ) will be denoted by and all pixels belonging to the red highlighted area ( ] as can be seen in [fig:deltadir-xpvsdia]a, the direction angle is broadly distributed around the actual fiber direction described by . the direction angle is less broadly distributed (see [fig:deltadir-xpvsdia]b).
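As a rough illustration of how the diattenuation and the orientation of the axis of maximum transmittance could be extracted from a polarimetric measurement, the sketch below assumes the idealized sinusoidal intensity profile I(rho) = I_T (1 + D cos(2(rho - phi))) for a rotating polarizer and recovers D and phi from discrete Fourier coefficients. The number of rotation angles, the noise level, and the sample values are invented for the example; the actual measurement and calibration procedure used in this study may differ.

```python
import numpy as np

def diattenuation_from_profile(rho, intensity):
    """Recover (D, phi) from an intensity profile measured at rotation angles rho,
    assuming the idealized model I(rho) = I_T * (1 + D * cos(2*(rho - phi)))."""
    n = len(rho)
    a0 = intensity.mean()
    a2 = 2.0 / n * np.sum(intensity * np.cos(2 * rho))
    b2 = 2.0 / n * np.sum(intensity * np.sin(2 * rho))
    D = np.hypot(a2, b2) / a0
    phi = 0.5 * np.arctan2(b2, a2)
    return D, np.degrees(phi) % 180

# Synthetic example: 18 equidistant rotation angles over 180 degrees
rho = np.linspace(0, np.pi, 18, endpoint=False)
I_T, D_true, phi_true = 1.0, 0.03, np.radians(55.0)     # assumed sample values
noise = 0.002 * np.random.default_rng(0).standard_normal(rho.size)
profile = I_T * (1 + D_true * np.cos(2 * (rho - phi_true))) + noise
print(diattenuation_from_profile(rho, profile))          # roughly (0.03, 55)
```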
for of the values, ( ) lies within ] for regions with and within ]. the distributions of and are almost uniform for (marked by the vertical dashed lines). this behavior can be explained by the low signal-to-noise ratio of for small retardation values. due to the -orientation of the polarizers in the xp measurement, the transmitted light intensity approaches zero for small retardations (cf. [fig:setups]f). for larger retardation values, the signal-to-noise ratio of increases and the distribution of appears to be mostly independent of the retardation (it depends more on the diattenuation, see [fig:deltadir-xpvsdia]a). this also agrees with the observation that retardation and diattenuation are not correlated for (see [fig:diavsret]c). the distribution of is much narrower than for (in regions with ) and also mostly independent of the retardation. based on these observations, the direction angles and were only evaluated in regions with retardation values to avoid misinterpretation.

list of symbols and abbreviations:

fourier coefficients associated with sines
ac: anterior commissure
aci: anterior commissure intrabulbar part
out-of-plane inclination angle of the fibers
modified inclination angle corrected by the maximum measurable retardation
inclination angle obtained from 3d-pli
fourier coefficients associated with cosines
c: index denoting the camera
cb: cerebellum
cc: corpus callosum
ccd: charge-coupled device
cg: cingulum
cpu: caudate putamen
cu: cuneate fasciculus
image contrast
sample/section thickness
d: index denoting the di measurement
di: diattenuation imaging
diattenuation (of brain tissue)
diattenuation obtained from di
diattenuation of the polarizer
diattenuation of the analyzer
diattenuation for which the axis of maximum transmittance is parallel to the fibers
diattenuation for which the axis of maximum transmittance is perpendicular to the fibers
measured diattenuation ( )
phase retardation ( )
sum of squared differences
phase; in-plane direction angle of the fibers
direction angle from the di measurement
direction angle from the 3d-pli measurement
direction angle from the xp measurement
retardance of the (quarter-)wave retarder
(total) intensity of light
transmittance (average transmitted intensity)
intensity of the incident light
maximum transmitted light intensity; maximum image intensity
minimum transmitted light intensity; minimum image intensity
transmitted intensity in the di measurement
transmitted intensity in the 3d-pli measurement
transmitted intensity in the xp measurement
l: index denoting the light source
lap: large-area polarimeter
led: light emitting diode
mueller matrix of the (quarter-)wave retarder
wavelength
mueller matrix of the brain tissue
general mueller matrix
mean of a gaussian distribution
oct: optical coherence tomography
opt: optic tract
p: index denoting the 3d-pli measurement
pli: polarized light imaging
mueller matrix of the polarizer
mueller matrix of the analyzer
degree of polarization
px: pixel
spherical angle of the stokes vector
extraordinary refractive index
ordinary refractive index
birefringence ( )
rnfl: retinal nerve fiber layer
mueller matrix describing a rotation
retardation ( ) obtained from 3d-pli
maximum measurable retardation
rotation angle of the polarizing filters
stokes vector
stokes vector for unpolarized light
standard deviation of a gaussian distribution
average transmittance (of the brain tissue)
average transmittance of the polarizer
average transmittance of the analyzer
average transmittance of the retarder
vhc: ventral hippocampal commissure
wm: selected white matter regions
x: index denoting the xp measurement
xp: crossed polars
3d - polarized light imaging ( 3d - pli ) reconstructs nerve fibers in histological brain sections by measuring their birefringence . this study investigates another effect caused by the optical anisotropy of brain tissue diattenuation . based on numerical and experimental studies and a complete analytical description of the optical system , the diattenuation was determined to be below 4% in rat brain tissue and to have negligible impact on the fiber orientations derived by 3d - pli . furthermore , the axis of maximum transmittance was observed to be parallel to the fibers in specific brain regions and orthogonal in others . this suggests diattenuation imaging to be a promising extension to 3d - pli .
exploring the relationship between structure and function of real systems has advanced markedly in recent years, as it has become clear that the impressive function of real systems is closely related to their particular structures. examples include the high risk of epidemic outbreak in social entities with small-world friendship structure, the low threshold of particle condensation in transportation networks with heterogeneous structure, and the pathological brain states accompanied by abnormal anatomical connectivity. signal transmission over long distances is one of the most essential functions in nature, ranging from cell signaling in the nervous system up to human telecommunication in engineering, but which architecture supports an efficient and robust transmission is still not fully understood. early attempts at exploring the structure-function relationship of signal transmission focused on one-way chains. in these classical chain models, a node at one side, called the source node, is responsible for receiving input signals, and then the source node propagates the signals to its nearest node in a single direction, and so on. it has been reported that a weak signal can be transmitted along the one-way chain without amplitude damping if the chain is embedded in noisy environments. such noise-improved signal transmission is further observed in complex networks. however, the noise-improved signal transmission relies heavily on a proper intensity of noise, which is hard to tune in practice. it is therefore quite important to seek a specific structure by which the transmission can be efficiently improved, instead of by the well-tuned noise. in this paper, we propose a modified one-way chain model with a y-shaped structure and study how such a structure affects signal transmission in the chain. unlike the classical one-way chain with a single source node, the y-shaped one-way chain has two disconnected source nodes that receive the same input signal. we find that the y-shaped one-way chain can maintain long-distance signal transmission without amplitude attenuation, no matter whether the input signal is periodic or aperiodic. we also find that the enhanced signal transmission in the y-shaped one-way chain is much more effective than the noise-improved signal transmission in the classical one-way chain. these findings imply that even a small change in the structure might permit a hugely different performance in signal transmission, offering a good illustration of the relationship between structure and function.

[figure [fig:1] caption: a y-shaped one-way chain with two source nodes ( ) to receive an input signal in (a) and a classical one-way chain with one source node ( ) to receive the same input signal in (b); represents the coupling strength.]

a y-shaped one-way chain of coupled bistable systems is shown in fig. [fig:1](a), whose dynamics is described as follows: where governs the local dynamics of node , which has two stable fixed points and one unstable fixed point, denotes the coupling strength, and represents the input signal received by the source nodes ( ). to model weak signal transmission, is set as a subthreshold signal, namely, under such a signal, each source node cannot jump between the two stable fixed points but only oscillates around one of them. when , the y-shaped one-way chain of eq. ([eq:model]) can be viewed as a classical one-way chain with one source node, see fig. [fig:1](b).
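To make the setup concrete, the sketch below integrates a chain of overdamped bistable units with one-way coupling and computes, for each node, the Fourier amplitude at the driving frequency that anticipates the transmission measure introduced in the next paragraph. The specific element dynamics (dx/dt = x - x^3), the diffusive coupling form, and all parameter values are assumptions chosen for illustration; they are not necessarily the exact coefficients of eq. ([eq:model]).

```python
import numpy as np

def simulate_chain(n=100, eps=0.6, A=0.3, omega=np.pi/50, y_shaped=True,
                   dt=0.05, t_max=2000.0, seed=0):
    """Euler integration of a one-way chain of overdamped bistable units.

    Assumed form (a sketch, not the paper's exact model):
      source nodes 1, 2:  dx/dt = x - x**3 + A*sin(omega*t)
      node 3:             dx/dt = x - x**3 + eps*(x1 - x) + eps*(x2 - x)
      node j >= 4:        dx/dt = x - x**3 + eps*(x_{j-1} - x)
    For the classical chain (y_shaped=False) both sources start in the same well."""
    steps = int(t_max / dt)
    rng = np.random.default_rng(seed)
    x = np.sign(rng.standard_normal(n))          # random wells, +1 or -1
    x[0], x[1] = 1.0, (-1.0 if y_shaped else 1.0)
    record = np.empty((steps, n))
    for k in range(steps):
        s = A * np.sin(omega * k * dt)
        dx = x - x**3
        dx[0] += s
        dx[1] += s
        dx[2] += eps * (x[0] - x[2]) + eps * (x[1] - x[2])
        dx[3:] += eps * (x[2:-1] - x[3:])
        x = x + dt * dx
        record[k] = x
    return record

def output_at_input_frequency(xj, omega, dt):
    """Fourier amplitude of a node's time series at the driving frequency
    (discarding the initial transient would give a cleaner estimate)."""
    t = np.arange(len(xj)) * dt
    q_sin = 2.0 * np.mean(xj * np.sin(omega * t))
    q_cos = 2.0 * np.mean(xj * np.cos(omega * t))
    return np.hypot(q_sin, q_cos)

traj = simulate_chain()
Q = [output_at_input_frequency(traj[:, j], np.pi / 50, 0.05) for j in range(100)]
```

Running the same sketch with y_shaped=False reproduces the damped response of the classical single-source chain, while the Y-shaped initialization with the two sources in opposite wells illustrates the enhanced transmission discussed below.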
to characterize signal transmission along the chain, we calculate the output of node at the frequency of the input signal by where parameter determines the length of the integration interval. to achieve a stable result of , a large value of is considered. besides, when the input signal is aperiodic or in a noisy environment, is averaged over realizations. from eq. ([eq:indicator]), the signal transmission along the chain is damped if for ; otherwise, the transmission is enhanced if for . in our discussions, the chain size is used, and the initial condition of each node is randomly selected from the two stable fixed points. obviously, the two source nodes display the same dynamical behavior if their initial conditions are identical, while showing different dynamical behaviors if their initial conditions are nonidentical. in this regard, eq. ([eq:model]) with and with represents the classical one-way chain and the y-shaped one-way chain, respectively.

[figure [fig:2] caption: output with (black and red squares) and with (green and blue circles) at and , respectively. dashed lines denote the analytical predictions of eqs. ([eq:output-j-1]) and ([eq:output-j-2]).]

a subthreshold periodic signal with and is firstly considered. figure [fig:2] shows the transmissions of such a signal for two coupling strengths, obtained from randomly setting the initial conditions of all the nodes. it can be observed that always takes only two distinct responses at each coupling strength: damped transmission and enhanced transmission. our numerical results reveal that the former is achieved at while the latter is obtained at , irrespective of the initial conditions of the other nodes. meanwhile, fig. [fig:2] shows that the enhanced signal transmission obtained at is very sensitive to the value of the coupling strength. when , increases fast and saturates from . in contrast, when , increases slowly but attains a higher saturated output after . hence, the y-shaped one-way chain (at ) has a function of enhancing signal transmission, and such a function is purely generated by the simple y-shaped structure. the above observations raise two questions: (i) how does the coupling strength impact the enhanced output, and (ii) which node has the best efficiency of enhancing signal transmission in the y-shaped one-way chain? to answer these questions, we compare the dependencies of on between three nodes, see fig. [fig:3](a). a common feature in this figure is the same critical coupling strength below ( ) or far beyond ( ) which the output . in between, the enhanced output emerges and a maximum output appears at an optimal coupling strength. moreover, the intermediate region of with enhanced is expanded as increases. during this process, the values of and are changed accordingly. as shown in fig. [fig:3](b), is an increasing function of which satisfies . in fig. [fig:3](c), seems to be a constant ( ) before , and then grows with obeying a linear relationship. based on these quantities, we define to measure the signal transmission efficiency of node as the results of for three coupling strengths are given in fig. [fig:3](d). it can be observed that displays a bell-shaped curve at each coupling strength.
in particular , when , the curve of has a peak at , suggesting that node has the best efficiency of signal transmission .interestingly , when , the best transmission efficiency is gained by node since the peak height at is higher than at .however , when , the peak of is shifted to , accompanied by a decline in the peak height .the variations of indicate that the coupling strength regulates the efficiency of signal transmission and an intermediate coupling strength enables some node to have a higher transmission efficiency . ) with .( a ) versus for node ( square ) , ( circle ) and ( triangle ) .dashed lines denote the analytical results of eqs .( [ eq : output - j-1 ] ) and ( [ eq : output - j-2 ] ) .( b ) the maximum output versus with a fit line .( c ) optimal versus with a fit line .( d ) transmission efficiency versus for ( square ) , ( circle ) , and ( triangle ) . ] . left panels with : ( a ) , ( b ) , and ( c ) ; right panels with : ( d ) , ( e ) , and ( f ) .initial condition is used . ] to give a deep insight of the enhanced signal transmission , fig .[ fig:4 ] shows the spectra of for nodes , , and .when , can be seen as a delta function of which is zero everywhere except at the input frequency , where it is a sharp peak , see fig .[ fig:4](a ) . except for the peak at , also shows a lower peak at the harmonic frequency , see fig .[ fig:4](b ) .such multiple peaks can be found for , see fig .[ fig:4](c ) .in addition , when , the spectra of are similar to that of , see figs .[ fig:4](d)-(f ) .the main difference is that , there are more peaks at other harmonic frequencies emerge for .the emergence of lower peaks at harmonic frequencies means that the output signal is not a pure sine ( cosine ) wave but a sum of a set of sine ( cosine ) waves .however , as the peaks at harmonic frequencies are relatively lower than the peaks at , the output at the input frequency gives a reliable measurement of signal transmission . ) with ( black and red square ) and with ( green and blue circles ) for and , respectively .parameter is considered . ] since noise is ubiquitous in nature , we examine the robustness of the enhanced signal transmission in the y - shaped one - way chain to external noise perturbation . hence ,each bistable system in eq .( [ eq : model ] ) becomes noisy , i.e. , , where denotes the noise perturbation . we here consider as the white and spatially uncorrelated noise with and , where parameter controls the intensity of noise . for a given coupling strength , fig .[ fig:5 ] shows the transmissions of the input signal in two noisy environments . in the case of , also displays two distinct responses : damped transmission at and enhanced transmission at . in the case of ,the transmission at is not damped but slightly enhanced now , which is consistent with the noise - improved signal transmission as observed in .moreover , such noise - improved transmission at displays the same behavior to the transmission of , implying that the enhanced signal transmission by the y - shaped structure is reduced for large . the phenomenon shown in fig .[ fig:5 ] can be understood as follows . for small ,the two source nodes approximate if their initial conditions are identical .accordingly , eq . ( [ eq : model ] ) consisted of noisy bistable systems can be treated as the classical one - way chain so that it displays a similar transmission to the case of . 
for large ,the noise perturbation is sufficient that it can trigger the source nodes jump between their two stable fixed points .therefore , the signal transmission is independent of the initial conditions of the source nodes , which results in the same transmission between and . .left panels with : ( a ) , ( b ) , and ( c ) ; right panels with : ( d ) , ( e ) , and ( f ) .parameter and initial condition are used . ]fixed , we explore the dependency of on for three nodes chosen from fig .[ fig:5 ] .the results are displayed in fig .[ fig:6 ] . for ,the curve of can be viewed as a delta function with a sharp peak at the input frequency , see figs .[ fig:6](a)-(c ) . for , also resembles a delta function except small at which , see figs .[ fig:6](d)-(f ) .the common peak at shown in fig .[ fig:6 ] suggests that the input frequency is the main frequency of the output signals and thus the output at is the dominant output . ) at ( square ) , ( circle ) , and ( triangle ) , respectively .upper panels for : ( a ) , ( b ) , ( c ) and ; middle panels for : ( d ) , ( e ) , ( f ) and ; lower panels for : ( d ) , ( e ) , ( f ) and .insets are the enlarged views of signal transmissions . ] in addition , the same transmission at large shown in fig .[ fig:5 ] motivates us to figure out the critical noise intensity at which the signal transmission is irrelevant to the initial conditions of the source nodes .to this end , we compare the evolutions of with between and for several values of and , see fig .[ fig:7 ] .when , decays with except a slight rise around , see figs .[ fig:7](a)-(c ) .when , suddenly increases from until attaining a local maximum at , exhibiting the same performance to the case of for large , see figs .[ fig:7](d)-(f ) .when or varies , the value of remains constant , which indicates that is the critical noise intensity at which the signal transmission in the y - shaped one - way chain is not sensitive to the initial conditions of the source nodes . besides , figs .[ fig:7](d)-(f ) ( insets ) also show that may exhibit two resonant peaks for suitable , forming double resonant - like phenomena .further , figs .[ fig:7](g)-(i ) ( insets ) depict the evolutions of for the classical one - way chain , by setting and in eq .( [ eq : model ] ) . in these figures, shows a resonant - like dependency on for each pair of and , where the resonant peak is at .when , exhibits a similar evolution to the cases of and .this implies that is another critical noise intensity , above which the difference in signal transmission between the y - shaped one - way chain and classical one - way chain is small . making use of these two critical intensities, we may divide the signal transmission in the y - shaped one - way chain into three regions : region i ( ) , region ii ( ) , and region iii ( ) [ see figs . [fig:7](a)-(c ) ] .specifically , region i corresponds to the y - shaped structure - improved transmission , region ii corresponds the structure - noise - improved transmission , and region iii corresponds the noise - improved transmission , respectively . among them , the y - shaped structure - improved transmission ( region i ) is robust to noise perturbation , especially at large since the decay rate is slow .in addition , the y - shaped structure - improved transmission is much more effective than the noise - improved transmission . .left panels with : ( a ) , ( b ) , ( c ) , and ( d ) ; right panels with : ( e ) , ( f ) , ( g ) , and ( h ) .parameter and are set . 
]the actual signals are usually irregular ones , it is necessary to check the robustness of the y - shaped structure - improved transmission to input signal irregularity . herethe irregular input signal is generated by setting the periodic signal with a time - varying initial phase , i.e. , .for simplicity , the initial phase is set to be varied as a wiener process .thus , is a gaussian white noise with and .when , the periodic signal becomes an aperiodic signal and its regularity decreases with . to illustrate it ,we show the output spectrum of the aperiodic signal at [ see fig .[ fig:8](a ) ] and at [ see fig .[ fig:8](e ) ] , respectively . in both spectra, there is a highest peak at , where the peak height is lower and peak width is wider at , demonstrating that the regularity of the aperiodic signal is decreased with .we next investigate whether these two aperiodic signals can be effectively transmitted in the y - shaped one - way chain .fixing , figs .[ fig:8](b)-(d ) depict the output spectra for three nodes , , and at .it is obvious that each output spectrum can be considered as an enlarged version of fig .[ fig:8](a ) , where the output spectra of and show larger enlarged ratios than that of .similarly , such enlarged versions can also be observed in figs . [ fig:8](e)-(h ) for the case of . comparing with that of ,the enlarged ratio and fidelity are reduced at . from these observations, it can be concluded that the y - shaped structure - improved transmission works well for irregular signals .we now analyze the underlying mechanism of the y - shaped structure - improved signal transmission . to avoid the effect of noise , we only discuss eq .( [ eq : model ] ) subjected to a periodic input signal ( ) in absence of noise ( ) .because the input signal is subthreshold , the source nodes oscillate with small amplitudes around the stable fixed points , their solutions can be approximately obtained as where depending on the initial condition , , and denotes some phase shift .when , the dynamical equation of node becomes without the periodic signal , has three fixed points for : and in which and are stable fixed points while is unstable ; for , has one stable fixed point . when is not great , the signal is subthreshold , the solutions of the node approximate and where is some phase shift . when , the latter solution indicates a larger oscillation around than the former around .however , the stability of the fixed point decreases as approaches , the large oscillation is thus unsustainable and it will move to the vicinity of , leading to a small oscillation governed by the former solution . inserting into the equation of node , we can obtain the stable fixed points of the node as well as the subsequent nodes by repeatedly using the same method .we find that these nodes possess the same stable fixed point or , depending on or . in this way ,the dynamical equation of node can be written as where denotes the signal from node and represents some phase shift . when the signal is subthreshold , the solution of eq .( [ eq : eq - j-1 ] ) approximately satisfies with some phase shift . inserting this solution into eq .( [ eq : indicator ] ) , the output is given by eq . 
([eq:output-j-1]) satisfies the condition for , thereby supporting the damped transmission of eq. ([eq:model]) at . on the other hand, the damped transmission at can be explained by the overdamped motion of a particle in a potential and periodic forcing. for this reason, the potential in eq. ([eq:eq-j-1]) is and the periodic forcing is . when , is an asymmetrical potential and its asymmetry is determined by the value of . for illustration, figs. [fig:9](a)-(c) display the potential for , and . when , has two wells, where the well located at ( or ) is deeper than the other one at ( or ), see fig. [fig:9](a). this indicates that the large oscillations around ( or ) are more stable. when is increased to , turns into a v-shaped potential with a single well at ( or ), see fig. [fig:9](b). as shown in fig. [fig:9](c), further increasing to will result in a steeper v-shaped potential. clearly, under the same forcing of ), the asymmetrical potential of allows the particle to generate a relatively large oscillation inside it, in contrast to the potentials of and . however, as ) is weak and the motion is overdamped, the oscillation around ( or ) gets even smaller ( ). altogether, the transmission of eq. ([eq:model]) decreases with and when .

[figure [fig:9] caption: potentials for different . upper panels for : (a), (b), and (c). solid lines correspond to and the dashed lines correspond to . lower panels for : (d), (e), and (f).]

when , eq. ([eq:second layer-1]) can be rewritten as without the periodic signal, has two stable fixed points for and has one stable fixed point for . for a subthreshold signal, the solutions of approximate and where is some phase shift. based on the solutions of , we can acquire the stable fixed points of the subsequent nodes. we find that the stable fixed points of these nodes are for and for . in the former case, the dynamical equation of node can be rewritten as where is some phase shift. ([eq:eq-j-2-a]) has the same form as eq. ([eq:eq-j-1]), so their solutions and the corresponding outputs are similar. this means the signal transmission is damped for no matter whether the initial condition is or . in the latter case, i.e., , the dynamical equation of node is its solution is where is some phase shift. inserting eq. ([eq:solution-j-2]) into eq. ([eq:indicator]), the output is . eq. ([eq:output-j-2]) satisfies the condition for , which coincides with the enhanced signal transmissions at . in fig. [fig:2] and fig. [fig:3](a), we compare the analytical results of eqs. ([eq:output-j-1]) and ([eq:output-j-2]) with the numerical results and find a good agreement between them for small . the reason is that the above analyses are based on perturbation theory, i.e., assuming oscillates around the stable fixed point with a small amplitude. because the oscillation of is weak for small , the theory gives a better approximation to as well as . in addition, from eq. ([eq:output-j-2]), the optimal can be derived as , which fits well with the numerical results ( ) shown in fig. [fig:3](b). analogously, the enhanced signal transmission at and can also be understood by the overdamped motion of a particle in a potential and periodic forcing. as shown in eq. ([eq:eq-j-2-b]), the periodic forcing is , and the potential is which is a symmetrical function with a minimum at .
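A quick way to visualize the potentials discussed in this section is sketched below. It assumes the same generic bistable element and diffusive coupling as in the earlier simulation sketch, so the resulting potential shapes (asymmetric double well when the two source nodes sit in the same well, flat-bottomed symmetric well when they sit in opposite wells) are illustrative rather than the exact expressions of the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def node3_potential(x, eps, x1, x2):
    """Effective potential of the node coupled to both source nodes, assuming
    dx/dt = x - x**3 + eps*(x1 - x) + eps*(x2 - x), which gives (up to a constant)
    U(x) = -x**2/2 + x**4/4 + eps*x**2 - eps*(x1 + x2)*x."""
    return -0.5 * x**2 + 0.25 * x**4 + eps * x**2 - eps * (x1 + x2) * x

x = np.linspace(-1.8, 1.8, 400)
for eps in (0.1, 0.5, 0.9):
    # sources pinned in the same well (solid) vs. opposite wells (dashed)
    plt.plot(x, node3_potential(x, eps, 1.0, 1.0), label=f"same wells, eps={eps}")
    plt.plot(x, node3_potential(x, eps, 1.0, -1.0), "--",
             label=f"opposite wells, eps={eps}")
plt.xlabel("x"); plt.ylabel("U(x)"); plt.legend(); plt.show()
```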
in fig .[ fig:9](d ) , the potential for is plotted .it is a u - shaped curve with a flat bottom , which is quite different from the v - shaped well shown in fig . [fig:9](c ) .in addition , fig . [ fig:9](e ) plots the potential for . it can be seen that the bottom of the u - shaped becomes narrow and such narrow u - shaped potential transforms into a v - shaped curve as , see fig . [ fig:9](f ) .in contrast , the u - shaped potential can permit the particle to gain a wider oscillation inside it than the v - shaped potentials .this explains why the signal transmission is largely enhanced at and .we finally analyze the mechanism of the resonant - like phenomena shown in fig .[ fig:7 ] .firstly , we explain the single resonant - like dependency for the classical one - way chain with one source node , i.e. , and are set in eq .( [ eq : model ] ) . when , the oscillation of the source node is small , restricting in one of the two stable fixed points .when is increased to , the oscillation of the source node can jump to the other stable fixed point by noise perturbation , see fig . [fig:10](a ) . because the perturbations are not sufficient , the jumping rate is small and the oscillation may stay there for a long time until the next jumping .thus the oscillation of the source node is still small at .continue increasing to , the jumping rate between the two stable fixed points is obviously improved , which increases the oscillation amplitude , see fig .[ fig:10](b ) .when , the jumping rate is sharply improved , so the oscillation is no longer centered on the stable fixed points but on , see fig .[ fig:10](c ) .however , further increase in will increase the randomness of the oscillation ( not shown here ) . considering all of these factors ,the source node can only generate a large output at , showing a resonant peak over there . through one - way coupling, the output of the source node will propagate to the subsequent nodes ( ) , which results in the stochastic resonance phenomena as observed in figs . [fig:7](g)-(i ) .secondly , we explain the double resonant - like dependency for the y - shaped one - way chain with .as mentioned above , when , there is a small probability that a single source node may jump to the other stable fixed point , remaining there for a long time until it jumps back to the initial fixed point . in this way , the two source nodes in the y - shaped one - way chain may occasionally oscillate in different stable fixed points for long time intervals , although given the same initial condition , see fig .[ fig:10](d ) .considering that the signal transmission is largely enhanced if the two source nodes oscillate in different stable fixed points [ see sec .iv b ] , the transmission in the y - shaped one - way chain will be sometimes largely enhanced at . by increasing to ,the time intervals for the two source nodes simultaneously oscillating at different fixed points reduce dramatically [ see fig . [ fig:10](e ) ] , indicating a decrease in signal transmission .these are the reasons why shows the first local peak at .when is increased to , there is no obvious interval between two continuous jumps , see fig .[ fig:10](f ) . during this process ,the collective behavior of the two source nodes is analogous to the individual or , i.e. , the two source nodes can be seen as a single one .this analogy in dynamics implies that the y - shaped one - way chain shows a similar signal transmission to that of the classical one - way chain for large . 
as a result, the signal transmission in the y-shaped one-way chain is also largely enhanced at , resulting in the second peak over there. obviously, both the single and double resonant-like dependencies are stochastic resonance phenomena, since the signal transmissions are improved by noise. however, as the specific y-shaped structure allows the two source nodes to oscillate in distinct fixed points for small noise, we thus refer to the enhanced signal transmission in the region as structure-noise-improved transmission [see figs. [fig:7](d)-(f)].

[figure [fig:10] caption: time series of the source node(s). left panels for one source node: (a), (b), and (c); right panels for two source nodes given the same initial condition: (d), (e), and (f). red and green lines denote the two source nodes, blue lines denote their collective dynamics.]

in conclusion, we have studied the signal transmission in a y-shaped one-way chain and found an extraordinary ability of such a specific structure to improve signal transmission. we have also studied the robustness of the y-shaped structure-improved transmission to noise perturbation and input signal irregularity. we hope our findings may contribute to understanding the structure-function relationship of real systems and be useful for designing highly efficient artificial devices, such as switches and amplifiers. x.l. was supported by the nnsf of china under grant no. 11305078, the research fund of jiangsu normal university under grant no. 12xlr028, and the priority academic program development of jiangsu higher education institutions (papd). m.t. was supported by the nnsf of china under grant no. . h.l. was supported by the nnsf of china under grant no. .
it has been found that noise plays a key role in improving signal transmission in a one-way chain of bistable systems [zhang _et al._, phys. rev. e 58, 2952 (1998)]. we here show that the signal transmission can be sharply improved without the aid of noise if the one-way chain with a single source node is changed to one with two source nodes, becoming a y-shaped one-way chain. we further reveal that the enhanced signal transmission in the y-shaped one-way chain is regulated by the coupling strength, and that it is robust to noise perturbation and input signal irregularity. we finally analyze the mechanism of the enhanced signal transmission by the y-shaped structure. * the realization of transmitting weak signals over a long range is essential in engineering. stochastic resonance has been proposed as an important mechanism to support such a function, where a weak signal can be transmitted far away without amplitude attenuation by embedding the nonlinear system that is responsible for transporting the signal in a noisy environment. subsequently, nonlinear systems with complex structures were found to have a greater capacity for utilizing stochastic resonance to transmit signals, as compared to nonlinear systems with simple and regular structures. however, the intensity of noise is not easy to control in practice, which limits the implementation of stochastic resonance. it is an important question to ask whether there exists a specific structure by which the signal transmission can be enhanced without the help of noise. for this reason, we here propose a one-way chain with a y-shaped structure by modifying the classical one-way chain model from having a single source node to having two disconnected source nodes. our results show that such a slight change in the structure may enable a largely enhanced signal transmission in the one-way chain. besides this, the enhanced signal transmission by the y-shaped structure is much more effective than that by stochastic resonance. these findings may contribute to the design of highly efficient artificial devices. *
let us consider a stationary random process with zero mean and the background trend ( some known mean function with unknown regression parameters , where ) then and where ( ) is a given vector and ( ) is a given matrix . the unbiasedness constraint on the estimation statistics produces the system of equations in the unknowns for white noise ^ 2\ } = \sigma^2+\sigma^2 \omega^i_j \rho_{ii } \omega^i_j \quad \left ( e\{[\hat{\epsilon}-\epsilon]^2\}=\sigma^2+\sigma^2 \omega ' \lambda \omega \right ) \ , \ ] ] where ( ) is the identity auto - correlation matrix , the minimization constraint ^ 2\ } } { \partial \omega^i_j } = 2\sigma^2\rho_{ii}\omega^i_j + 2\sigma^2 f_{ik}\mu^k_j = 0 \ , \ ] ] where ^ 2\ } = \sigma^2 + \sigma^2 \omega^i_j \rho_{ii } \omega^i_j + 2\sigma^2\underbrace{(\omega^i_j f_{ik } - f_{jk})}_0 \mu^k_j \ , \ ] ] let us add the equations in the unknowns to the system equivalent to substituting this term into the unbiased system we get and the kriging weights now , we can write the kriging estimator where the least - squares estimator is the best linear unbiased estimator for ^ 2\ } = \lim_{n \rightarrow \infty } e\{[\hat{v}_j - f_{jk}\beta^k]^2\ } = 0 \ , \ ] ] where ^ 2\ } = \sigma^2 \omega^i_j \rho_{ii } \omega^i_j = - \sigma^2 \omega^i_j f_{ik } \mu^k_j = - \sigma^2 f_{jk}\mu^k_j = \sigma^2 f_{jk } ( f_{ki } \rho^{ii } f_{ik})^{-1 } f_{kj } \ . % ~~ % \left(e\{[\hat{v}- f'\beta]^2\ } % = % \sigma^2 f ' ( f ' \lambda^{-1 } f)^{-1 } f \right ) \ .\ ] ]since for constant bias - noise mean ( ) and the precession of the estimation statistics can not be compared to zero value ^ 2\ } = \frac{\sigma^2}{n}\ ] ] let us introduce the bias - noise mean with non - zero slope ( ) and to find the best linear unbiased estimator for any ^ 2\ } = e\{[\hat{v}_j - f_{jk}\beta^k]^2\ } = 0\ ] ] we have to fulfill \left [ \begin{array}{cc } n & n\overline{i } \\ & \\n\overline{i } & n\overline{i^2 } \\\end{array } \right]^{-1 } \left [ \begin{array}{c } 1 \\ j \\\end{array } \right ] = \frac { j^2 - 2m_nj+m_{sn } } { n\sigma^2_n } = 0\ ] ] at in where : , , , ; and we get simple mean and variance where charged by the imaginary error. 000 f. tulajter , _ mean squared errors of prediction by kriging in linear models with ar(1 ) errors _ , acta math . univ .comenianae vol .lxiii , 2(1994 ) 247254 .t. suso , _ modern statistics by kriging _ , arxiv : cs.na/0609079 .
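Since several of the displayed formulas above are garbled, the sketch below simply restates the standard generalized least-squares (BLUE) computation that underlies the kriging estimator: given a trend design matrix F, an autocorrelation matrix Lambda, and observations v, the trend coefficients and a noise-variance estimate follow from the usual normal equations. The linear trend, the AR(1)-type correlation, and the synthetic data are placeholders, not the bias-noise model analyzed in this paper.

```python
import numpy as np

def gls_blue(F, Lam, v):
    """Generalized least-squares (best linear unbiased) estimate of the trend
    coefficients beta in v = F beta + e, with Cov(e) = sigma^2 * Lam."""
    Lam_inv = np.linalg.inv(Lam)
    beta = np.linalg.solve(F.T @ Lam_inv @ F, F.T @ Lam_inv @ v)
    resid = v - F @ beta
    dof = len(v) - F.shape[1]
    sigma2 = resid @ Lam_inv @ resid / dof      # unbiased noise-variance estimate
    return beta, sigma2

# Placeholder example: linear trend observed with AR(1)-correlated noise
rng = np.random.default_rng(1)
n, rho = 50, 0.4
i = np.arange(n, dtype=float)
F = np.column_stack([np.ones(n), i])
Lam = rho ** np.abs(np.subtract.outer(i, i))    # AR(1) autocorrelation matrix
L = np.linalg.cholesky(Lam)
v = F @ np.array([2.0, 0.3]) + L @ rng.standard_normal(n)
print(gls_blue(F, Lam, v))
```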
the aim of the paper is to derive the complex - valued least - squares estimator for bias - noise mean and variance .
the main goal of this paper is to provide a mathematical survey of various pde models for fluttering plates and associated results, directed at an applied math and engineering readership. the focus here is on _modeling concerns_, and we provide a presentation and exposition of mathematical results, when available, and their relationship with known numerical and experimental results concerning flutter. we do not focus on proofs here, but try to give descriptions of results and a general intuition about why they hold. in addition we mention a handful of mathematical models which are of recent interest in aeroelasticity _and also_ represent fertile ground from the point of view of mathematical analysis. the _flutter phenomenon_ is of great interest across many fields. flutter is a sustained, systemic instability which occurs as a feedback coupling between a thin structure and an inviscid fluid when the natural modes of the structure ``couple'' with the fluid's dynamic loading. when a structure is immersed in a fluid flow, certain flow velocities may bring about a bifurcation in the dynamics of the coupled _flow-plate_ system; at this point stable dynamics may become oscillatory (limit cycle oscillations, or lco) or even chaotic. a static bifurcation may also occur, known as _divergence_ or buckling. the above phenomena can occur in a multitude of applications: buildings and bridges in strong winds, panel and flap structures on vehicles, and in the human respiratory system (snoring and sleep apnea). recently, flutter resulting from _axial flow_ (which can be achieved for _low flow velocities_) has been studied from the point of view of energy harvesting. flutter considerations are paramount in the supersonic and transonic regimes, with the renewed interest in supersonic flight. from a design point of view, flutter cannot be overlooked, owing to its potentially disastrous effects on the structure due to sustained fatigue or large amplitude response. the field of _aeroelasticity_ is concerned with (i) producing models which describe the flutter phenomenon, (ii) gaining insight into the mechanisms of flow-structure coupling, (iii) predicting the behavior of a flow-structure system based on its configuration, and (iv) determining appropriate control mechanisms and their effect in the prevention or suppression of instability in the flow-structure system. here we consider flow-plate dynamics corresponding to both _subsonic_ and _supersonic_ flow regimes and a wide array of structural boundary conditions and flow-plate coupling conditions. flow-structure models have attracted considerable attention in the past mathematical literature, see, e.g., and the references therein. however, the majority of the work (predominantly in the engineering literature) that has been done on flow-structure interactions has been devoted to numerical and experimental studies, see, for instance, , and also the survey and the literature cited there. many mathematical studies have been based on linear, two dimensional, plate models with specific geometries, where the primary goal was to determine the _flutter point_ (i.e.
, the flow speed at which flutter occurs ) .see also for the recent studies of linear models with a one dimensional flag - type structure ( beams ) .this line of work has focused primarily on spectral properties of the system , with particular emphasis on identifying aeroelastic _eigenmodes _ corresponding to the associated possio integral equation ( addressed classically by ) .we emphasize that these investigations have been _ linear _ , as their primary goal is to predict the flutter phenomenon and isolate aeroelastic modes .given the difficulty of modeling coupled pdes at an interface , theoretical results have been sparse .additionally , flutter is an inherently nonlinear phenomenon ; although the flutter point ( the flow velocity for which the transition to periodic or chaotic behavior occurs ) can be ascertained within the realm of linear theory , predicting the magnitude of the instability requires a nonlinear model of the structure ( and potentially for the flow as well ) .the results presented herein demonstrate that flutter models can be studied from an infinite dimensional point of view , and moreover that meaningful statements can be made about the physical mechanisms in flow - structure interactions _ strictly from the pde model_. the challenges in the analysis involve ( i ) the mismatch of regularity between two types of dynamics : the flow and the structure which are coupled in a hybrid way , ( ii ) the physically required presence of unbounded or ill - defined terms ( boundary traces in the coupling conditions ) , and ( iii ) intrinsically non - dissipative generators of dynamics , even in the linear case .( the latter are associated with potentially chaotic behavior . )one of the intriguing aspects of flow - structure dynamics is that the type of instability , whether static ( divergence ) or dynamic ( lco ) , depends both on the plate boundary conditions and on the free - stream ( or unperturbed ) flow velocity .for example , one observes that if a ( two - dimensional ) panel is simply supported or clamped on the leading _ and _ trailing edge it undergoes divergence in subsonic flow but flutters in supersonic flow ; conversely , a cantilevered panel clamped at one end and free along the others flutters in subsonic flow and may undergo divergence in supersonic flows .this paper primarily addresses the interactive dynamics between a nonlinear plate and a surrounding potential and [ visc - flow ] . ]. the description of physical phenomena such as flutter and divergence will translate into mathematical questions related to existence of nonlinear semigroups representing a given dynamical system , asymptotic stability of trajectories , and convergence to equilibria or to compact attracting sets .interestingly enough , different model configurations lead to an array of diverse mathematical issues that involve not only classical pdes , but subtle questions in non - smooth elliptic theory , harmonic analysis , and singular operator theory . 
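As a purely illustrative aside (not one of the PDE models treated in this survey), the dependence of the flutter point on the flow parameter can already be seen on a two-degree-of-freedom caricature, where a nonconservative, flow-proportional coupling makes two structural frequencies coalesce and then produces exponential growth. All numbers below are arbitrary choices made for the example.

```python
import numpy as np

# toy two-mode system q'' + (K + U*A) q = 0 with a nonconservative (circulatory) coupling A;
# coalescence flutter: eigenvalues of K + U*A leave the real axis once U is large enough.
K = np.diag([1.0, 4.0])                      # structural stiffness (frequencies 1 and 2)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew flow coupling (made up for illustration)

def growth_rate(U):
    """largest real part among the roots lambda of lambda^2 = -eig(K + U*A)."""
    lam2 = np.linalg.eigvals(K + U * A).astype(complex)
    lam = np.concatenate([np.sqrt(-lam2), -np.sqrt(-lam2)])
    return float(lam.real.max())

for U in np.linspace(0.0, 2.5, 11):
    print(f"U = {U:4.2f}   max Re(lambda) = {growth_rate(U): .4f}")
# the growth rate stays (numerically) zero below U = 1.5 and becomes positive above it,
# exactly where the two eigenvalues of K + U*A coalesce and turn complex
```

The coalescence seen here is the finite-dimensional analogue of the coupled-mode (coalescence) flutter terminology used later in the text.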
for more details concerning the mathematical theory developed forthe flutter models discussed below , see .* notation : * for the remainder of the text denote for or , as dictated by context .norms are taken to be for the domain .the symbol will be used to denote the unit normal vector to a given domain , again , dictated by context .inner products in are written , while inner products in are written .also , will denote the sobolev space of order , defined on a domain , and denotes the closure of in the norm denoted by or .we make use of the standard notation for the boundary trace of functions , e.g. , for , =\phi \big|_{z=0} ] are retained up to the third order .thus , standard , linear piston theory ( or law of plane sections ) replaces the _ acceleration potential of the flow _ , ] in the rhs of the reduced plate equation .the following overview is given in : piston theory initially was obtained as an asymptotic expansion of the exact expression [ for the flow contribution ] as . in later studiesit was shown that piston theory is valid starting from ... coupled mode ( coalescence ) [ classical ] flutter is observed for values of beyond this critical value , whereas single mode flutter can occur in the range . "formally , we could arrive at the aforementioned , standard ( linear ) piston theory model by utilizing the reduction result in theorem [ rewrite ] ; a bound exists : for sufficiently greater than 1 .the constants depend only on the diameter of .this indicates that the delay term , decays ( in some sense ) as increases .hence , it is reasonable to guess that can be neglected in the case of large speeds .so we arrive at the following model where ] _ ( where is perhaps ) .above ) as .the result , however , is only valid for arbitrarily small time intervals , and hence does not provide information about the behavior of solutions for arbitrary . ][ nonpist ] we note that , as mentioned above , some classical piston - theoretic analyses retain polynomial terms in ] will lead to a model with nonlinear , monotone ( cubic - type ) damping in ( see also some discussion in ) .such damping has already been considered in wave and plate models , and is known to induce stronger stability properties in mitigating effects of nonconservative terms in the equation .it would be interesting to study this situation in the context of the piston model under consideration .in addition to the classical , piston - theoretic model discussed above ( in and remark [ nonpist ] ) , there are other _piston theories_. for instance , if we a priori assume that we deal with `` low '' frequencies regimes for the plate , then we can use the following expression for the aerodynamic pressure ( see , e.g. , and the references therein ) : +uu_x\right).\ ] ] in the supersonic case ( ) .we note that this term has a fundamentally different structure , and its contribution to the dynamics is markedly different than the rhs of .it is commonly recognized that the damping due to the aerodynamic flow is usually substantially larger in _ magnitude _ than the damping due to the plate structure . however , as we can see , the aerodynamic damping can be negative as well as positive . for instance , in the case we obtain negative damping .this has a definite impact on the dynamics , producing instabilities in the system .we also note that for very large ( ) the model in coincides with the standard one described by ( [ plate - stand ] ) . 
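To see how the piston-theoretic right-hand side acts on a structure, the following rough sketch time-steps a one-dimensional, linear clamped beam driven by the term \( -k(u_t + U u_x) \). This is a schematic stand-in only: the geometry, the coefficients k and U, the discretization, and the omission of the von Karman nonlinearity are simplifying assumptions made for illustration, not choices taken from the models above.

```python
import numpy as np

# u_tt + u_xxxx = -k * (u_t + U * u_x) on (0, 1), clamped ends (u = u_x = 0)
N, steps = 201, 20000
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2                      # explicit scheme: dt ~ dx^2 for the biharmonic term
k, U = 5.0, 40.0                        # piston-theory coefficient and flow speed (arbitrary)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial bump, zero initial velocity
u_prev[:2] = 0.0
u_prev[-2:] = 0.0
u = u_prev.copy()

def d4(w):                              # fourth derivative, interior points only
    out = np.zeros_like(w)
    out[2:-2] = (w[4:] - 4 * w[3:-1] + 6 * w[2:-2] - 4 * w[1:-3] + w[:-4]) / dx ** 4
    return out

def d1(w):                              # centered first derivative
    out = np.zeros_like(w)
    out[1:-1] = (w[2:] - w[:-2]) / (2 * dx)
    return out

for n in range(steps):
    ut = (u - u_prev) / dt              # backward-difference velocity for the damping term
    rhs = -d4(u) - k * (ut + U * d1(u))
    u_new = 2 * u - u_prev + dt ** 2 * rhs
    u_new[:2] = 0.0                     # crude clamped boundary conditions
    u_new[-2:] = 0.0
    u_prev, u = u, u_new

print("max |u| after", steps, "steps:", float(np.max(np.abs(u))))
```

Treating the velocity with a backward difference keeps the scheme fully explicit; a centered or implicit treatment of the damping term, or the cubic damping mentioned in the remark above, would be natural refinements.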
in a recent paper ,the piston - theoretic approach is re - examined utilizing an additional term in the asymptotic expansion in the inverse mach number .the result is an aeroelastic pressure on the surface of the plate of the form the reference discusses this model in the context of an extensible beam and two dimensional flow , but indicates that the model may be generalized .additionally , the derivation shown in is valid for values only slightly larger than .this may provide a key for studying low supersonic flow - plate interactions . from a mathematical point of view , one may also notice that the formula in ( [ pistoon ] ) indicates a cancellation between the stabilizing part of the flow , represented by , and the additional integral term .the force in equation is the result of expanding the full linear potential flow theory in powers of frequency for fixed ( i.e. , mach number ) , while equation ( [ pistoon ] ) is the result of expanding the full potential flow theory in in powers of ( or inverse mach number ) .these contribute very different effects to the dynamics , unless higher order terms are retained in frequency and .a configuration which supports multiple models of recent interest is now described .the principal component of this configuration is the existence of large portion of the plate boundary which is free . as a general remark, we note that changes in the configuration ( e.g. , plate boundary conditions ) can have immense impacts on the overall character of dynamics .this mathematical observation is confirmed experimentally as well .taking a free - clamped plate boundary ( as given by ( fc ) ) allows one to consider ( in generality ) the situation of a _ cantilevered wing _ , as well as a _ mostly clamped panel _ , or a flap / flag in so called _ axial flow_. when one considers a free plate coupled to fluid flow ( unlike the clamped case ) the key modeling issues correspond to ( i ) the validity of the structural nonlinearity and ( ii ) the aerodynamic theory near the free plate boundary ( which may exhibit in - plane displacement ) . in this configuration , a natural _ flow _ interface boundary condition arises and is called _ kutta - joukowsky flow condition _ ( kjc ) , as described above and in put , the ( kjc ) corresponds to a zero pressure jump off the wing and at the trailing edge .the ( kjc ) has been implemented in numerical aeroelasticity as mechanism for ... removal of a velocity singularity at some distinguished point on a body in unsteady flow " . from an engineering standpoint , the ( kjc )is required to provide a unique solution for the potential flow model for a lifting surface , and gives results in correspondence with experiment , i.e. the pressure difference across the trailing edge is zero and the pressure difference at the leading edge is a near maximum .studies of viscous flow models in the limit of very high reynolds numbers lend support to the ( kjc ) .[ axial ] the configuration above arises in the study of airfoils . 
in this case, we refer to _ normal flow _( along the -axis ) .another related configuration referred to as _ axial flow _ , which takes the flow occurring along axis .physically , the orientation of the flow can have a dramatic effect on the occurrence and magnitude of the oscillations associated with the flow - structure coupling .this manifests itself , specifically , in the choice of nonlinearity modeling the plate ( or possibly beam ) equation .the kutta - joukowsky conditions ( kjc ) described above are _ dynamic and mixed _ in nature .these mixed type flow boundary conditions are taken to be accurate for plates in the clamped - free configuration .the configuration below represents an attempt to model oscillations of a plate which is _mostly free_. the dynamic nature of the flow conditions corresponds to the fact that the interaction of the plate and flow is no longer static at the free edge . in this casewe take the free - clamped plate boundary conditions , and the mixed flow boundary conditions : where are complementary parts of the boundary , and , represent moments and shear forces , given by ( and earlier here ) and extend in the natural way into the remainder of the plane ) the implementation of free - clamped plate boundary conditions is extremely important in the modeling of airfoils . however , treating coupled fluid - plate problems that involve a free plate is technically challenging .this is due to the loss of sufficient regularity of the boundary data imposed for the flow ( the failure of the lopatinski condition ) .clamped boundary conditions assumed on the boundary of the plate allow for smooth extensions to of the ( nc ) conditions satisfied by the flow . in the absence of these , one needs to approximate the original dynamics in order to construct sufficiently smooth functions amenable to pde calculations .preliminary calculations indicate that ( as the physics dictate ) free plate boundary conditions are in fact _ more compatible _ with the ( kjc ) flow conditions . with regard to ( i ) above , we note that when we are in the case of _ normal flow _ ( as opposed to _ axial flow _ ( as in remark [ axial ] ) the scalar von karman model is largely still viable .additionally , any configuration where the free portion of the ( fc ) plate boundary conditions is small " with respect to the clamped portion will satisfy the hypotheses for the theory of large deflections , and hence , von karman theory is applicable .arguably , the kutta - joukowsky boundary conditions for the flow ( [ kjc ] ) are the _ most important _ when modeling an airfoil immersed in a flow of gas .not surprisingly , these boundary conditions are also the most challenging from mathematical stand point .this was recently brought to the fore in an extensive treatise .various aspects of the problem in both subsonic and supersonic regime have been discussed in in the context of mostly one dimensional structures .the aim of this section is to address the mathematical problem posed by ( kjc ) , by putting them within the framework of modern harmonic analysis .we consider the toy " problem of ( kjc ) conditions coupled with clamped plate boundary conditions below . indeed , the recently studies show how the flow condition ( kjc ) interacts with the clamped plate in subsonic flows _ in order to develop a suitable abstract theory _ for this particular flow condition .the resulting papers give well - posedness of this fluid - structure interaction configuration . 
though the analysis is subsonic for ( kjc ) , utilizing the flow energy from the supersonic panel ( as above in theorem [ th : supersonic ] ) is effective in the abstract setup of this problem .in fact , even in the subsonic case , the analysis of semigroup generation proceeds through the technicalities developed earlier for supersonic case . the key distinction from the analysis of the clamped flow - plate interaction ( owing to the dynamic nature of the boundary conditions ) is that a _ dynamic flow - to - neumann map _ must be utilized to incorporate the boundary conditions into the abstract setup .the regularity properties of this map are critical in admitting techniques from abstract boundary control , and are determined from the zaremba elliptic problem .the necessary trace regularity hinges upon the invertibility of an operator which is analogous ( in two dimensions ) to the finite hilbert transform . andthis is a critical additional element of the challenging harmonic analysis brought about by the ( kjc ) .when the problem is reduced in dimensionality to a beam structural model , this property can be demonstrated and our analysis has parallels with that in ( and older references which are featured therein ) .specifically , one must invert the finite hilbert transform in for ; in higher dimensions , this brings about nontrivial ( open ) problems in harmonic analysis and the theory of singular integrals . from the mathematical point of viewthe difficulty lies , again , at the level of the linear theory . in order to deal with the effects ofthe _ unbounded _ traces ] defines the correct duality pairing in the tangential direction on .it is also at this point where we use the fact that satisfies on .thus simply supported and clamped boundary conditions imposed on the structure fully cooperate with this regularity .the principal result of ( see also ) reads as follows : [ t1 ] with reference to the model ( [ flowplate2 ] ) , with : assuming the trace regularity condition [ le : ftr0 ] holds for the aeroelastic potential , there exists a unique finite energy solution which exists on any ] , and the fourth order symmetric tensor is given by ,\ ] ] where has the meaning of the poisson modulus , and young s modulus . in the case of full vk systemwe need also to take into account compatibility of tangential ( in - plane ) movements of the plate with the corresponding dynamics of the gas flow .this leads to the following model for the viscous flow with states ( for more detail , see ) : & v_t+ u v_{x } -\mu_f\delta v -(\mu_f+{{\lambda}}_f){{\nabla}}{{\rm div\ , } } \ , v + { { \nabla}}p = 0~~ \quad { \rm in}~ { \mathbb{r}}^3_+ \times{\mathbb{r}}_+,\label{flu - eq1u}\end{aligned}\ ] ] where is the pressure and is a small perturbation of the gas velocity field here and are ( non - negative ) viscosity coefficients ( which vanish in the case of invisid fluid ) .we need also supply and with appropriate boundary conditions .we can choose non - slip boundary conditions , for instance .they where is the outer normal , is a tangent direction ( see also some discussion in where the explanation concerning the term on the boundary is given ) .aerodynamical impact in the full von karman system is described by the forces and where is the stress tensor of the fluid .we need also equip the equations above with initial data for all variables . 
in the case of *invisid compressible * fluid ( and ) we can introduce ( perturbed ) velocity potential ) which satisfies the equation with the boundary conditions moreover , in this case , the pressure has the form and thus ~~\mbox{and}~~ p_2({{\bf{x}}},t)= 0.\ ] ] to the best of our knowledge the fluid - structure model in the setting described above was not studied yet at mathematical level .we only note that over the past 25 years there have been a handful of treatments which address the pde theory of the full von karman plate system .we do not here provide an extensive overview of well - posedness and stability analyses of the full von karman equation ; the references address : well - posedness of classical solutions , well - posedness of energy type solutions for , and stability in the presence of thermal effects or boundary dissipation ( respectively ) . at present, well - posedness of energy - type solutions for is an open problem .recently , the analyses in successfully demonstrated well - posedness and the existence of attractors for certain fluid - structure models involving the full von karman equations ( in the case ) .the fluid - plate models under consideration in these references make use of the full von karman plate model coupled with a three dimensional incompressible fluid in a bounded domain ( this corresponds the case when equation is neglected and equation is taken with and ) .an elastic membrane bounds a portion of the fluid - filled cavity .this model is motivated by blood flowing through a large artery , and in the modeling of the liquid sloshing phenomenon ( fluid in a flexible tank ) .it assumes a homogeneous , viscous , incompressible fluid modeled by stokes flow in . in , well - posedness of energy - type solutionswas shown along with the existence of a compact attractor in the presence of frictional damping .the damping is critical ( ball s method ) for obtaining the attractor . in some situations , a general model with a velocity fluid field of the general form ( like in and with the nonzero viscosity coefficients and )may arise .this is a case when it necessary to take into account viscosity effects near the oscillating plate , which can be important for transonic flows , and possibly for hypersonic flows neglecting in - plane displacements in the model of the previous subsection , we arrive at the system : & \text { in } \omega\times ( 0,t),\\ u(0)=u_0;~~u_t(0)=u_1 & \text { in } \omega,\\ u={\partial_{\nu}}u = 0 & \text { on } \partial\omega\times ( 0,t),\\ p_t+u p_{x}+{{\rm div\ , } } \ , v=0 & { \rm in}~~ { \mathbb{r}}^3_+ \times ( 0,t ) , \\ v_t+ u v_{x } -\mu_f\delta v -(\mu_f+{{\lambda}}_f){{\nabla}}{{\rm div\ , } } \ , v + { { \nabla}}p = 0 & { \rm in}~ { \mathbb{r}}^3_+ \times ( 0,t ) \\ v(0)=v_0;~~p(0)=p_0 & \text { in } { \mathbb{r}^3}_+ \\ v = \left(0;0 ; \big[(\partial_t+u\partial_x)u ( { { \bf{x}}})\big]_{\text{ext}}\right ) & \text { on } { \mathbb{r}^2}_{\{(x , y)\ } } \times ( 0,t ) . \end{cases}\ ] ] the mathematical theory for this model is not yet well - developed .some mathematical results on well - posedness and long - time dynamics are available for this case when a fluid fills a bounded or tube type domain and . see and references therein . for the corresponding model in the incompressible case ( ) and with refer to . in the case of _axial flow _, mentioned in remark [ axial ] , a beam or plate is clamped on the _ leading edge _ and free elsewhere is described . 
to provide a clear picture of the dynamics ,consider the figure at the beginning of section [ recent ] with the over - body flow occurring in the _ -direction _ , as opposed to being in the direction of the plate s chord-direction . in the engineering literature .] this axial configuration , owing to lco response to low flow velocities , is that which has been considered from the point of view of energy harvesting .this would be accomplished by configuring an axially oriented flap on land or air vehicles with a piezo device which could generate current as it flutters . from appropriate structural equations of motion here correspond to those of a thin pipe conveying fluid , and many aspects of such dynamics mirror those we are investigating .expressions for the kinetic and potential energy of the beam and plate are given in .note that structural nonlinearities occur in both the inertia ( kinetic energy ) and stiffness ( potential energy ) terms for the structural equations of motion .the recent theoretical and experimental work of tang and dowell has been encouraging . in this work a new nonlinear structural model has been used based upon the inextensibility assumption and the comparison between theory and experiment for the lco response has been much improved over earlier results .the study of a linear aerodynamic model , combined with the new nonlinear structural model , is worthy of more rigorous mathematical attention .much of the technical discussion presented above excludes the sonic velocity ; in practice , the vicinity of is known as the _ transonic regime_. indeed , for the analysis provided above for the panel configuration breaks down in the essential way . while numerical work predicts appearance of shock waves , to our best knowledge no mathematical treatment of this problem is available at present . in the literature , it is noted that the flow model discussed above in is not accurate in the regime $ ] . ] ; indeed , the flow equations require additional specificity when the flow velocity nears the transonic regime .we note that the base model in becomes interesting , as a _ degenerate wave equation _ appears when .this will clearly produce diminished regularity in the direction .however , the references suggest that in the transonic regime a fully nonlinear fluid must be considered . from : ... neglecting either the structural or the fluid nonlinearities can lead to completely erroneous stability predictions . "the reference indicates that in the case of the standard panel configuration the appropriate flow equation due to the _ local mach number _ effect for transonic dynamics is of the form : here is replaced with indicating that .this nonlinear fluid equation introduces new mathematical challenges which must be addressed . in the case of supersonic or subsonic flows ,the definite sign of the spatial flow operator associated to the variable can be exploited .additionally , the nature of the dynamics become quasilinear in the flow potential . 
thus , near the transonic barrier the flow equation becomes -degenerate and quasilinear .experimentally , one notes that near there are many peculiarities , including the possibility of hysteresis in , and various flow instabilities , including shocks .such shocks may actually induce flutter .in fact , recent numerical investigations investigate the emergence of _ single mode flutter _ in low supersonic speeds , and the possibility of stabilizing this mode via structural damping .however , the results in indicate that shocks emanating from the flow may actually induce more complex coupled mode ( coalescence ) flutter . also in the transonic range , viscous fluid boundary layer effects are apt to be more important .it is essential that interaction between structural nonlinearities and the aerodynamic nonlinearities be accounted for . in , numerical studies of the interaction between structural and aerodynamic nonlinearities have been investigated .according to this study , so called _ traveling wave flutter _occurs in the transonic regime .in fact , for a simply supported panel , at of altitude , with a thickness - chord ratio of traveling wave flutter has been recorded in the form of wave packets ; these packets change shape and evolve in time during the movement from the leading to the trailing edge . near the panel there has been evidence of shocks forming , moving in concert with the panel motion , and in some cases _ inducing _ flutter . from the mathematical point of view , one notes that a key issue is the regularity of flow solutions in the -direction .however , as becomes degenerate , a term providing additional information in the direction of the flow appears .specifically , in energy calculations , the quasilinearity gives rise to a conserved quantity in the flow energy of the form so .\ ] ] this quantity , although conserved , has a bad " sign , which is a potential indicator for shock phenomena .it seems clear that the analytical techniques will differ for and .moreover , if well - posedness can be established , owing to physical results , we anticipate that the qualitative properties of these flow ( and hence flow - structure ) solutions will be quite distinct . from the point of view of quasilinear theory, the term provides dominant control of the operator , and can not be neglected in this transonic case .in fact , the corresponding pde problem can be viewed from the point of view of the tricomi operator changing from elliptic to hyperbolic , depending on the direction of the change in deflection .this is an exciting question to study mathematically ; the inroads provided by numerics and experiment here are instrumental guiding forces .in conclusion we recall shortly several observations made in which , on one hand confirm experimentally and numerically some of the mathematical findings and , on the other hand , raise open questions and indicate new avenues for mathematical research . for more detailed discussionwe refer to our paper . 
1 .* stability induced by the flow in reducing dynamics to finite dimensions * : rigorous mathematical analysis reveals that the stabilizing effect of the flow reduces the structural dynamics to a _finite dimensional _ setting .it is remarkable that such conclusion is obtained directly from mathematical considerations as it is not definitively arrived at by either numerics or experiment .* non - uniqueness of final nonlinear state * : it is possible that there exist several _ locally _ stable equilibria in the global attractor of the system .this explains why the _ buckled plate _ in an aerodynamic flow does not have a final , unique nonlinear state .* surprisingly subtle effects of boundary conditions * : as the structure s boundary conditions are changed , so is the dynamic stability / instability of the system .experimental and numerical and theoretical studies confirm this .* limitations of vk theory and new nonlinear plate theory * : when the leading edge is clamped and the side edges and trailing edge are free , the vk theory can be no longer accurate . however , a novel , improved nonlinear plate theory has been developed and explored computationally and correlated with experiment .this provides a challenging opportunity for mathematical analysis .* aerodynamic theory * : it may be helpful to prove , mathematically , some of the results related to the justification of the recent developments in piston theory .this issue is also important for modeling the pressure due to solar radiation , which has a similar mathematical form to that of piston theory .it has been of recent interest in the context of interplanetary transportation using solar sails .the authors would like to dedicate this work to professor a.v .balakrishnan , whose pioneering and insightful work on flutter brought together engineers and mathematicians alike .dowell was partially supported by the national science foundation with grant nsf - eccs-1307778 .i. lasiecka was partially supported by the national science foundation with grant nsf - dms-0606682 and the united states air force office of scientific research with grant afosr - fa99550 - 9 - 1 - 0459 .webster was partially supported by national science foundation with grant nsf - dms-1504697 .balakrishnan , nonlinear aeroelasticity , continuum theory , flutter / divergence speed , plate wing model , free and moving boundaries , lecture notes , _ pure .a. math ._ , 252 , _ chapman & hall _ , fl , 2007 , pp .223244 .balakrishnan and m.a .shubov , asymptotic behaviour of the aeroelastic modes for an aircraft wing model in a subsonic air flow , _ proc .ser . a math .phys . eng ._ , 460 ( 2004 ) , pp.10571091 .chueshov , asymptotic behavior of the solutions of a problem on the aeroelastic oscillations of a shell in hypersonic limit , _ teor .funktsii funktsional .i prilozhen ._ 51 , ( 1989 ) , 137141 ( in russian ) ; tranaslation in _ j. mathematical sci . _ , 52 ( 1990 ) , pp .35453548 .i. chueshov and i. lasiecka , generation of a semigroup and hidden regularity in nonlinear subsonic flow - structure interactions with absorbing boundary conditions ._ 3 ( 2012 ) , pp .127 .i. chueshov , i. lasiecka and j. t. webster , flow - plate interactions : well - posedness and long - time behavior , _ discrete contin .s , special volume : new developments in mathematical theory of fluid mechanics _ , * 7 * ( 2014 ) , pp . 925965i. chueshov and i. 
ryzhkova , well - posedness and long time behavior for a class of fluid - plate interaction models , in : _ ifip advances in information and communication technology _ , vol.391 , ( 25th ifip tc7 conference , berlin , sept.2011 ) , d. hmberg and f. trltzsch ( eds . ) , springer , berlin , 2013 , pp.328337 .crighton , the kutta condition in unsteady flow , _ ann .fluid mech ._ , 17 , 1985 , 411445 .e. dowell , nonlinear oscillations of a fluttering plate , i and ii , aiaa j. , 4 , ( 1966 ) , pp .12671275 ; and 5 , ( 1967 ) , pp .18571862 .hodges , g.a .pierce , introduction to structural dynamics and aeroelasticity , _ cambridge univ . press _ , 2002 .gibbs and e.h .dowell , membrane paradox for solar sails , _aiaa j. _ , 52 ( 2014 ) , pp .29042906 .l. huang , viscous flutter of a finite elastic membrane in a poiseuille flow , _j. fluid ._ , 15 , 7 , 2001 , 10611088 .r. a. ibrahim , liquid sloshing dynamics : theory and applications , _ cambridge university press _ , 2005 .ilushin , the plane sections law in aerodynamics of large supersonic speeds , _ prikladnaya matem . mech ._ , 20 ( 1956 ) , no.6 , pp . 733755 ( in russian ). h. koch and i. lasiecka , hadamard well - posedness of weak solutions in nonlinear dynamic elasticity - full von karman systems ._ evolution equations , semigroup and functional analysis _ , vol 50 , birkhauser , 2002 , 197212 .d. xie , m. xu , h. dai , and e.h .dowell , proper orthogonal decomposition method for analysis of nonlinear panel flutter with thermal effects in supersonic flow , _ j. sound and vibration _ , 337 ( 2015 ) , pp . 263283 .
a variety of models describing the interaction between flows and oscillating structures are discussed . the main aim is to analyze conditions under which structural instability ( _ flutter _ ) induced by a fluid flow can be suppressed or eliminated . the analysis provided focuses on effects brought about by : ( i ) different plate and fluid boundary conditions , ( ii ) various regimes for flow velocities : subsonic , transonic , or supersonic , ( iii ) different modeling of the structure which may or may not account for in - plane accelerations ( full von karman system ) , ( iv ) viscous effects , ( v ) an assortment of models related to piston - theoretic model reductions , and ( vi ) considerations of axial flows ( in contrast to so called normal flows ) . the discussion below is based on conclusions reached via a combination of rigorous pde analysis , numerical computations , and experimental trials .
many studies have shown that complex networks play an important role in characterizing complicated dynamic systems in nature and society .this is because the nodes of a complex network represent the elements , and the edges represent and simplify the complexity of their interactions so that we can better focus on the topological relation between two elements in a complex system .recently , complex networks have attracted the attention of a lot of researchers from different fields .based on the self - similarity of fractal geometry , song _ et al . _ generalized the box - counting algorithm and used it in the field of complex networks .they found that many complex networks are self - similar under certain length - scales .the fractal dimension has been widely used to characterize complex fractal sets . because the metric on graphs is notthe same as the euclidian metric on euclidian spaces , the box - counting algorithms to calculate the fractal dimension of a network is much more complicated than the traditional box - counting algorithm for fractal sets in euclidian spaces .song _ et al . _ developed some algorithms to calculate the fractal dimension of complex networks .then kim _ et al . _ proposed an improved algorithm to investigate the skeleton of networks and the fractal scaling in scale - free networks . proposed an algorithm based on the edge - covering box counting to explore self - similarity of complex cellular networks .later on , a ball - covering approach and an approach defined by the scaling property of the volume were proposed for fractal dimensions of complex networks .the features of topology and statistics , the fractality and percolation transition , fractal transition in complex networks , and properties of a scale - free koch networks have turned out to be hot topics in recent years . as a generalization of fractal analysis , the tool of multifractal analysis ( mfa ) may have a better performance on characterizing the complexity of complex networks in real world . mfa has been widely applied in a variety of fields such as financial modelling , biological systems , and geophysical data analysis . in recent years , mfa also has been successfully used in complex networks and seems more powerful than fractal analysis .lee and jung found that mfa is the best tool to describe the probability distribution of the clustering coefficient of a complex network .some algorithms have been proposed to calculate the mass exponents and to study the multifractal properties of complex networks .based on the compact - box - burning algorithm for fractal analysis of complex networks which is introduced by song _ , furuya and yakubo proposed a _compact - box - burning _( cbb ) algorithm for mfa of complex networks and applied it to show that some networks have multifractal structures . proposed a modified fixed - size box - counting algorithm to detect the multifractal behavior of some theoretical and real networks . improved the modified fixed - size box - counting algorithm and used it to investigate the multifractal properties of a family of fractal networks introduced by gallos _we call the algorithm in ref . the _ improved bc _ algorithm .recently , we adopted the improved bc to study the multifractal properties of the recurrence networks constructed from fractional brownian motions . in order to easily obtain the generalized fractal dimensions of real data , tl _ et al . 
_ introduced a sandbox algorithm which is originated from the box - counting algorithm .they pointed out that the sandbox algorithm gives a better estimation of the generalized fractal dimensions in practical applications .so far , the sandbox algorithm also has been widely applied in many fields .for example , yu _ used it to perform mfa on the measures based on the chaos game representation of protein sequences from complete genomes . in this article, we employ the sandbox ( sb ) algorithm proposed by tl _ for mfa of complex networks .first we compare the sb algorithm with the cbb and improved bc algorithms for mfa of complex networks in detail by calculating the mass exponents of some deterministic model networks .we make a detailed comparison between the numerical and theoretical results of these model networks .then we apply the sb algorithm to study the multifractal property of some classic model networks , such as scale - free networks , small - world networks , and random networks .it is well known that the fixed - size box - covering algorithm is one of the most common and important algorithms for multifractal analysis . for a given measures with support set in a metric space , we consider the following partition sum ^{q},\ ] ] , where the sum runs over all different nonempty boxes of a given size in a box covering of the support set . from the definitionabove , we can easily obtain and .the mass exponents of the measure can be defined as then the generalized fractal dimensions of the measure are defined as and where .the linear regression of /(q-1) ] , where is the diameter of the network .3 . rearrange the nodes of the entire network into random order . more specifically , in a random order , nodes which will be selected as the center of a sandbox are randomly arrayed .4 . according to the size of networks , choose the first 1000 nodes in a random order as the center of 1000 sandboxes , then search all the neighbor nodes by radius from the center of each sandbox .5 . count the number of nodes in each sandbox of radius , denote the number of nodes in each sandbox as .calculate the statistical average ^{q-1 } \rangle ] over all 1000 sandboxes of radius .7 . for different values of , repeat steps ( ii ) to ( vi ) to calculate the statistical average ^{q-1 } \rangle ] for linear regression .we need to choose an appropriate range of ] against and then choose the slope as an approximation of the mass exponents ( the process for estimating the generalized fractal dimensions is similar ) . for the improved bc and cbb algorithms, we need to cover the entire network by repeating a large number of same steps and then to find the minimum possible number of boxes by performing many realizations. then we can choose an appropriate range of ] ) . as an example, we show the linear regressions of the ^{q-1 } \rangle) ] to fit these data points . from fig .4 , we can observe the apparent power law behaviors for the 7th generation -flower network with and .so we selected the linear fit scaling range of as $ ] to calculate the mass exponents . in fig . 5, we show the mass exponents of the -flower network calculated by the sb , cbb and improved bc algorithms . from fig . 5, we can see that the numerical results obtained by the three algorithms are consistent with the theoretical results . for the minimal model network, we started with a star structure of 5 nodes as in ref . 
and then generated the 5th generation network with and .we calculated the mass exponents by the three algorithms .the numerical results are shown in fig .6 . from fig. 6 , we can see that the numerical results obtained by the three algorithms agree well with the theoretical results .in addition , we also generated the generalized version of the minimal model with , , and .here we only constructed the 5th generation of the generalized network .the numerical results are shown in fig .7 . from fig. 7 , we can see that the numerical results obtained by the three algorithms have good agreement with the theoretical results .= 8.5 cm = 8.5 cm = 8.5 cm = 8.5 cm it is hard to evaluate the performance of the three algorithms only according to the above three figures . in order to further quantify the performance of these algorithms , we introduce the relative standard error as in ref . . based on the relative standard error , we can determine the goodness of the numerical results of the mass exponents obtained from the three algorithms compared with the analytical results for the three deterministic model networks .the relative standard error can be defined as where and the and represent the analytical and numerical results of the mass exponents respectively ; is the average of the .the goodness of fit is indicated by the result .we summarize the corresponding relative standard error between the analytical and numerical results of the mass exponents in table i. from table i , we see that the relative standard errors for these three methods are all rather small .this result indicates all these three algorithms can give correct numerical results .in addition , we compare the consumed cpu time of the three algorithms for mfa of the networks generated from the three network models . the results are given in table ii . from table ii , we can cbb algorithm takes a substantial amount of computation time and memory resources , while sb algorithm consumes the least amount of cpu time and memory resources .this is to say that the sb algorithm has an overwhelming advantage in consuming cpu time and memory resources .therefore the sb algorithm can be considered as the most effective , feasible , and accurate algorithm to calculate the mass exponents and then to study the multifractal properties of complex networks among the three algorithms ..the relative standard error of the three algorithms for mfa of the networks generated from the three network models . [ cols="^,^,^,^",options="header " , ]wang _ et al . _ applied the modified fixed - size box - counting algorithm to study the multifractal properties of some theoretical model networks and real networks , such as scale - free networks , small - world networks , and random networks .all of these complex networks have been widely used in various studies . in this section ,we want to adopt the sb algorithm to detect the multifractal behavior of these networks .based on the growth and preferential attachment characteristics of many complex networks in real world , barabsi _ et al . _ proposed a ba model to explain the generating mechanism of power law distributions . in this paper , we use the ba model to generate scale - free networks and then investigate their multifractality . here ,we start with an initial network with nodes , and its three nodes are connected each other . at each step , we add one node to this initial network . 
then this new node is connected to existing nodes with probability .we denote the probability that the new node is connected to node as . the probability is defined as , where is the degree of node . in this paper, we respectively generated 100 scale - free networks with 6000 nodes , 8000 nodes , and 10000 nodes .the sb algorithm is then used to calculate their average mass exponents and average generalized fractal dimensions , which are shown in fig .8 . from fig .8 , we find that the and curves of scale - free networks are convex .so multifractality exists in these scale - free networks .= 8.5 cm based on the random rewiring procedure , watts and strogatz introduced the ws small - world network model which is a transition from a completely regular network to a completely random graph .small - world networks not only retain the high clustering coefficient of regular networks , but also capture the short average path length of random graphs .newman and watts proposed a nw model which is a modified version of the original ws model . in the nw model, the shortcuts are added between randomly chosen pairs of nodes , but no connections are removed from the regular lattice . the nw model can be described as follows .firstly , we start with a regular graph .we consider the nearest - neighbor coupled network containing nodes .each node of the coupled network is connected to its nearest - neighbors by undirected edges , where is an even integer .secondly , we randomly add some new connections to the coupled network . with probability , we connect the pair of nodes chosen randomly . in this paper, we first generated the three nearest - neighbor coupled networks containing 6000 nodes , 8000 nodes , and 10000 nodes , respectively .and we set so that each node of these networks is connected to its 4 nearest - neighbors .then we added a connection between pairs of nodes of the three coupled networks with probability , , and , respectively . for each case, we generated 100 small - world networks using the nw model .next , we applied the sb algorithm to calculate their average mass exponents and average generalized fractal dimensions .the and curves are plotted in fig .9 . from fig . 9 , we find that the and curves of small - world networks are not straight lines , but the fluctuation of these curves is very small .this means that the multifractality of these small - world networks is not obvious .= 8.5 cm the erds - rnyi ( er ) random graph model is one of the classical network models for generating a completely random network .we start with isolated nodes . for every possible pair of nodes ,we connect them by an undirected connection with probability . in this paper, we considered the three er random graph models containing 6000 nodes , 8000 nodes , and 10000 nodes .then we connected all possible node pairs of the three er models with probability , , and , respectively .for each case , we generated 100 random networks by using the er model and extract the largest connected component from each random network .next , we used the sb algorithm to calculate the average mass exponents and average generalized fractal dimensions of these largest connected components . in fig .10 , we show the and curves . as we can see from fig .10 , the and curves of random networks are close to straight lines .so this is to say , the multifractality almost does not exist in these random networks .in this work , we employed the sandbox ( sb ) algorithm proposed by tl _ et al . 
_ , for multifractal analysis ( mfa ) of complex networks .first we compared the sb algorithm with two existing algorithms of mfa for complex networks : the compact - box - burning ( cbb ) algorithm proposed by furuya and yakubo , and the improved box - counting ( bc ) algorithm proposed by li _ , by calculating the mass exponents of some deterministic model networks ( networks generated from the -flower , the minimal model , and the generalized version of the minimal model ) .we made a detailed comparison between the numerical results and the theoretical ones of these model networks .the comparison results show that the sb algorithm is the most effective , feasible , and accurate algorithm to calculate the mass exponents and to explore the multifractal behavior of complex networks .in addition , we applied the sb algorithm to study the multifractality of some classic model networks , such as scale - free networks , small - world networks , and random networks .our numerical results show that multifractality exists in scale - free networks , that of small - world networks is not obvious , and it almost does not exist in random networks .this project was supported by the natural science foundation of china ( grant no .11371016 ) , the chinese program for changjiang scholars and innovative research team in university ( pcsirt ) ( grant no .irt1179 ) , the lotus scholars program of hunan province of china .
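As a concrete companion to the sandbox procedure compared in this paper, here is a minimal sketch of the estimation of the mass exponents tau(q): it fixes randomly ordered centers, grows sandboxes of increasing radius, and regresses ln⟨[M(r)/N]^(q-1)⟩ against ln(r/d), with d the network diameter. The example network (a small Barabási-Albert graph), the radii, the number of centers and the q values are illustrative choices, not the settings used in the experiments above.

```python
import numpy as np
import networkx as nx

def sandbox_tau(G, q_list, radii, n_centers=1000, seed=0):
    """estimate the mass exponents tau(q) with the sandbox procedure described above."""
    rng = np.random.default_rng(seed)
    centers = rng.permutation(list(G.nodes()))[:min(n_centers, G.number_of_nodes())]
    d = nx.diameter(G)                              # normalizing length scale
    N = G.number_of_nodes()

    # M[ci, k] = number of nodes within radius radii[k] of the ci-th center
    M = np.zeros((len(centers), len(radii)))
    for ci, c in enumerate(centers):
        dist = nx.single_source_shortest_path_length(G, c)
        dvals = np.fromiter(dist.values(), dtype=int)
        for k, r in enumerate(radii):
            M[ci, k] = np.count_nonzero(dvals <= r)

    x = np.log(np.asarray(radii, dtype=float) / d)
    tau = {}
    for q in q_list:
        y = np.log(np.mean((M / N) ** (q - 1.0), axis=0))   # ln <[M(r)/N]^(q-1)>
        slope, _ = np.polyfit(x, y, 1)                       # tau(q) is the slope
        tau[q] = float(slope)
    return tau

# illustrative use on a small scale-free (BA) network
G = nx.barabasi_albert_graph(2000, 3, seed=1)
print(sandbox_tau(G, q_list=[-2, 0, 2, 4], radii=list(range(1, 6)), n_centers=500))
```

The generalized fractal dimensions then follow as D(q) = tau(q)/(q-1) for q != 1, as in the definitions recalled earlier in the paper.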
complex networks have attracted much attention in diverse areas of science and technology . multifractal analysis ( mfa ) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns . in this paper , we employ the sandbox ( sb ) algorithm proposed by tl _ et al . _ ( _ physica a _ , 159 ( 1989 ) 155 - 166 ) , for mfa of complex networks . first we compare the sb algorithm with two existing algorithms of mfa for complex networks : the compact - box - burning ( cbb ) algorithm proposed by furuya and yakubo ( _ phys . rev . e _ , 84 ( 2011 ) 036118 ) , and the improved box - counting ( bc ) algorithm proposed by li _ et al . _ ( _ j . stat . mech . : theor . exp . _ , 2014 ( 2014 ) p02020 ) by calculating the mass exponents of some deterministic model networks . we make a detailed comparison between the numerical and theoretical results of these model networks . the comparison results show that the sb algorithm is the most effective and feasible algorithm to calculate the mass exponents and to explore the multifractal behavior of complex networks . then we apply the sb algorithm to study the multifractal property of some classic model networks , such as scale - free networks , small - world networks , and random networks . our results show that multifractality exists in scale - free networks , that of small - world networks is not obvious , and it almost does not exist in random networks . * key words * : complex network ; multifractal analysis ; sandbox algorithm ; box - counting algorithm
gender inequality manifests in objective phenomena , such as salary differences and inequality in academia , but it also contains a large subjective component that is not trivial to measure .the philosophical works of simone de beauvoir describe how gender inequality is formulated on top of the concept that women are less rational and more emotional than men .this points to the subjective and subconscious component of gender inequality , which prevents many individuals from assessing their own gender biases in their everyday behavior . to a great extent ,gender inequality is exercised but not consciously reflected upon , creating a pattern of biases that everyone experiences but nobody names .gender bias , as part of the culture and ideology of a society , manifest in subconscious behavior and in the fictions created and consumed by that society .the emerging field of _ culturomics _ aims at a quantitative understanding of culture at a large scale , first being applied to cultural trends in literature , but also applicable to other traces of human culture such as voting in song contests , search trends , and movies . in 1985, alison bechdel published the comic strip named `` the rule '' , in which a female character formulates her rules to be interested in watching a movie : _ it has to contain at least two women in it , who talk to each other , about something besides a man _ . while intended as a punchline about gender roles in commercial movies , it served as an inspiration to critically think about the role of women in fiction. such formulation of a test for gender biases became popularly know as _ _ the bechdel test __ , and is usually applied to analyze tropes in mass media . whether a movie passes the test is nothing more than an anecdote , but the systematic analysis of a set of movies can reveal the gender bias of the movie industry .the bechdel test is used by swedish cinemas as a rating to highlight a male bias , in a similar manner as it is done with violence and nudity .previous research using this test showed gender bias when teaching social studies , and motivated the application of computational approaches to analyze gender roles in fiction .the current volume of online communication creates a record of human interaction very similar to a massive movie , in which millions of individuals leave digital traces in the same way as the characters of a movie talk in a script .this resemblance between reality and fiction is the base of the theory of behavioral scripts , used to analyze subconscious biases through patterns of social interaction .for example , linguistic coordination appears in both movie scripts , and twitter dialogues .furthermore , bidirectional interaction in twitter has been proved useful for computational social science , testing theories about the assortativity of subjective well - being , about cognitive constraints like dunbar s number , and about conventions and social influence in twitter .gender roles in emotional expression appear in myspace dialogues , and gender - aligned linguistic structures have been found in facebook status updates .one of our aims is to compare the patterns of dependence across genders in movies and online dialogues , assessing the question of whether our everyday life , as pictured in our online interaction , would be close to passing the bechdel test or not .large datasets offer the chance to analyze human behavior at a scale and granularity difficult to achieve in experimental or survey studies . 
at the scale of whole societies , digital tracesallow the analysis of cultural features , such as future orientation through search trends , and the similarity between cultures through song contest votes . at the level of individual behavior ,online datasets allowed the measure of intrinsic biases , such as that we tend to use more positive than negative words , that we have a tendency to share information with strong emotional content , and that apparently irrelevant actions , such as facebook likes , reveal relevant patterns of our personality .we introduce a quantitative extension of the bechdel test , to measure female and male independence in the script of a movie and the digital dialogues of a population .we calculate these metrics based on the amounts of dialogues between individuals of the same gender that do not contain references to the other gender . in our approach , we keep a symmetric analysis of male and female users and characters , quantifying asymmetries without any presumed point of view .we combine these metrics with information about geographic location , personal profile , and movies viewed by groups twitter users , to take one step further in understanding the conditions under which gender biases appear .we test the role of climate in the gender stereotypes of a culture , as suggested by previous works that found a relation between the future orientation and the geographic location of a culture . similarly as the ability to plan ahead is encouraged by adverse climate , it can also encourage gender biases in which males behave more independently and less emotionally attached .we test this theory , known as the _ disposable male _ , through the hypothesis that male independence increases with distance to the equator .additionally , we explore the relation between economic factors and female independence , measuring the relation between the average income of states of the us and the gender independence of its female twitter users .the centralized nature of movies plays a key role in the persistence of gender inequality in a society , which is part of the concept of cultural hegemony .this case is particularly important in movies aimed at children , which are known to have a gender bias that disempowers female characters . on the other hand , the lack of central control in the content of online media offers the chance of gender unbiased interaction , as conjectured by cyberfeminist theories . in this article, we quantify the presence of these biases in dialogues from movies and twitter , testing if these patterns prevail in online communication or are only present in mass media .to assess the questions mentioned above , we require three main datasources : references from social media to movies , movie script information , and dialogues involving the users that shared information about the movies .furthermore , we need a set of tools to process these datasets in order to identify the gender of users and actors , to detect gender references from social media messages and movie lines , and to group them into dialogues over which we can analyze gender asymmetries . in the following ,we outline our datasources and the methods we used for our analysis . from an initial dataset of youtube videos , we extracted a set of 16,142 videos with titles that contain the word `` trailer '' or `` teaser '' , among the categories of movies and entertainment . 
removing trailer related terms, we matched these videos to titles of movies in the open movie database ( omdb ) , selecting the pairs in which the string similarity between the titles was above 80% .after a manual inspection of the results , our dataset contained 704 trailers for 493 movies in 2,970 twitter shares .in addition , we had the amount of views , likes , and dislikes for these trailers , as retrieved in june , 2013 .we downloaded scripts from the internet movie script database as text files with the content of each movie , similarly to previous works . to disambiguate the title of each movie on the site, we first automated a search through omdb , matching the title of each movie with the imdb identifier of the first result , filtering when title string similarity was above 80% and the year of release differed in no more than one year .once each movie script was in text form , we identified character lines and scene cuts using the standard syntax used to process screenwriting markup languages , as used by screenwriting software like fountain .we constructed dialogues between characters following the sequence of their lines , and using the scene cuts as additional explicit separators between dialogues .second , we developed a set of python scripts to download the bechdel test results form bechdeltest.com , in a similar manner as previously done for visualization purposes .we gathered bechdel test data for 470 movies with an unique imdb identifier , processing the information in bechdeltest.com to extract two values : a test result indicating how many rules of the test are passed by a movie ( from 0 to 3 ) , and an amount of disagreements in the comments of the users of the site . using the service provided by gnip , we collected a set of public tweets from the period between june 1st and 6th , 2013 . using the list of youtube videos mentioned above, we found a set of 536,835 users that shared at least one of the trailers in our dataset . from among those users , we extracted their location information , which we matched with yahoo !placemaker to select those in the us . as a result, we got 69,606 _ ego _ users in the us that shared at least one trailer .for each one of these users , we retrieved their history of tweets with a limitation of 3,200 tweets per user , and their lists of followers and followees with the limitation of 5,000 users in each list . from this set of friendships , we identified those that interacted with the users in our database if they exchanged at least 10 mentions .filtering by us location again , we extended this set of users with an additional 107,645 _ alter _ users , for which we also got up to 3,200 of their most recent tweets .this composes a track of the recent history of interactions between more than 170,000 users , over which we constructed dialogues based on the more than 300 million tweets they wrote .we complement this twitter dataset with a set of myspace messages provided in previous research , covering conversations of users in the us and the uk .we constructed a network of dialogues between twitter users , based on the mentioning functionality of @-tags . 
for each tweet , we know its text , the user that created it , the time of creation , and the users that were mentioned in it .the full track of tweets between two users composes a set of dialogues , in which the users talked about different topics at various moments in time .there is no explicit sign in the tweets between two users that indicates the beginning and the end of a dialogue , which motivates the application of heuristics and semantic - based machine learning methods ., scaledwidth=95.0% ] for our case , we use the theory of bimodality in human communication , which states that there are two modes in the interaction between pairs of people : an intra - burst mode in which messages follow each other in a dialogue after very short periods , and an inter - burst mode of long silences between dialogues . figure [ fig : discussionexample ] shows an example of the timestamps of the tweets exchanged between a pair of twitter users .the head of the distribution of times between messages of a pair of users follows a power - law close to for time intervals in $ ] , and the tail is closer to an exponential distribution of the form for .note that this bimodal definition is different than a power - law with exponential cutoff , which is typically used to control for finite - size effects .the cutoff time gives us an estimate of the time scale of correlations inside a dialogue .we estimated its value in myspace and twitter through a modified version of the maximum likelihood technique for power - laws , correcting for the fact that the fit is to the head and not the tail of the distribution .this method minimizes the kolmogorov - smirnov criterion of distribution equality giving power - law exponents of in twitter and in myspace , very close to previous empirical studies of this kind of distributions in irc channels .this method allowed us to compute estimates of the cutoff value between both distributions , which are of 9.1 hours in twitter and 7.7 hours in myspace .we used the time cutoffs to separate dialogues , applying the rule that times of silence longer that the cutoff indicate the end of a dialogue . as a result ,our twitter dataset is composed of 2,240,787 dialogues and the myspace dataset of 3,263 dialogues .an illustration of a subnetwork of the resulting data is figure [ fig : citynetwork ] , where we show the dialogues between twitter users that declared to live in ann arbor , michigan .= 0.06 and =0.36 .[ fig : citynetwork],scaledwidth=70.0% ] we retrieved demographic information from the twitter profile of each user , looking for keywords that signal personal information , as done in previous research .this way , we identify the users as likely father , mother , or student if they include terms related to parenthood or studying in their profile .their location information allowed us to find their city and state within the us , which we matched versus the list of the 100 largest cities in the us to identify urban and rural users .we identified the gender of a user through first name matching against the history of names in the us , classifying the gender of a user in the same way as previously done for twitter and for authors of research papers . 
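a minimal sketch of the segmentation rule described above : silences longer than the estimated cutoff close a dialogue . the record format is an assumption , and the cutoff value would come from the maximum likelihood fit of the bimodal inter - message time distribution ( roughly 9.1 hours for twitter and 7.7 hours for myspace ) .

```python
# Sketch: split one user pair's time-ordered messages into dialogues,
# closing a dialogue whenever the silence exceeds the fitted cutoff.
CUTOFF_SECONDS = 9.1 * 3600  # value reported for twitter; 7.7 h for myspace

def split_into_dialogues(messages, cutoff=CUTOFF_SECONDS):
    """messages: list of (timestamp_seconds, text), sorted by timestamp."""
    dialogues, current = [], []
    last_time = None
    for ts, text in messages:
        if last_time is not None and ts - last_time > cutoff:
            dialogues.append(current)
            current = []
        current.append((ts, text))
        last_time = ts
    if current:
        dialogues.append(current)
    return dialogues
```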
for each movie , we gathered the cast list from imdb , and then we looked for the terms `` actor '' and `` actress '' in the imdb profile of the actors playing each character , which determines their gender .this data is not only useful to determine the genders of the characters in our movie dataset , but also serve as a ground truth to evaluate the dictionary technique we used for twitter users . in total, we found 4,970 actors and 2,486 actresses , with 154 unknown ( writers and directors ) .we applied the above gender detection technique , which gave us the unknown gender class 33% of the time .note that these actors and actresses do not need to live in the us , and that can have artistic names like _ `` snopp dogg '' _ , who appears in our dataset .precision values are 0.894 for detecting males and 0.844 for detecting females , and recall values are 0.582 for males and 0.595 for females .the similarity in these values indicates that this tool does not introduce a bias that changes the ratio of male and female users .nevertheless , improvements are possible , not only in the lexica , but also introducing machine learning techniques that find more complicated naming patterns .we modify the above gender detection technique to find which dialogues include male and female references .first , we modified the gender lexica used above , filtering out english dictionary words , like `` faith '' , and toponyms .second , we disambiguated the gender of names that can appear in both genders by the frequency of appearance in each gender in the us .if a name is used for a gender at least than 5 times more often than for the other gender , we assign it to the lexicon of the gender with the highest frequency , or we remove it otherwise .second , we add common feminine words like `` her '' , and masculine words like `` him '' . for each dialogue composed by a set of tweets ,this technique will classify as containing references to males , females , both , or none depending on the presence of common words and names associated each gender . in our study, we apply this technique for a very particular subset of dialogues , aiming only at the decision whether male - male dialogues contain female references , and whether female - female dialogues contain male references . 
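a minimal sketch of this classification step and of the independence ratio built from it , i.e. the share of same - gender dialogues with no reference to the other gender ; the word lists and helper names below are illustrative placeholders , not the actual lexica used here , which combine census first names with common gendered words as described above .

```python
import re

# Illustrative, deliberately incomplete lexica; the real lists are built from
# disambiguated first names plus common gendered words.
FEMALE_WORDS = {"she", "her", "hers", "woman", "girl", "mom", "wife"}
MALE_WORDS = {"he", "him", "his", "man", "boy", "dad", "husband"}

def mentions_gender(dialogue_text, lexicon):
    tokens = re.findall(r"[a-z']+", dialogue_text.lower())
    return any(tok in lexicon for tok in tokens)

def independence_ratio(same_gender_dialogues, other_gender_lexicon):
    """Fraction of same-gender dialogues with no reference to the other gender."""
    if not same_gender_dialogues:
        return float("nan")
    without_ref = sum(
        1 for d in same_gender_dialogues
        if not mentions_gender(d, other_gender_lexicon)
    )
    return without_ref / len(same_gender_dialogues)

# e.g. male independence: male-male dialogues that never reference women
# male_independence = independence_ratio(male_male_dialogues, FEMALE_WORDS)
```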
to provide an initial validation beyond intuition, we set up a small experiment to estimate the quality of the method .we extracted a random set of 100 dialogues between male twitter users that were classified as containing female references and 100 as not containing them , and a random set of 100 dialogues between female users that were classified as containing female references and 100 as not containing them .manual annotation using third party evaluations gave us an approximation of the ground truth , under the impossibility of surveying the actual users involved in the dialogues ..validation of detection of the presence ( 1 ) or absence ( 0 ) of gender references in dialogues.[tab : genderreferenceres ] [ cols="^,^,^,^,^,^",options="header " , ] , scaledwidth=95.0% ] the myspace dataset was constructed in a balanced way , downloading dialogues from similar amounts of male and female users .the left panel of figure [ fig : gendindep ] shows that the value of above 0.5 and of below 0.5 reveals a pattern of disassortativity in which users of different genders tend to interact , revealing the use of myspace for dating at the time of the crawl .the high ratio of male users in twitter is captured by the dialogue imbalance metric , which shows the large likelihood for a dialogue to involve a male .for this reason , both and are above 0.5 , indicating that the large male bechdel score of twitter is due to this size difference .the right panel of figure [ fig : gendindep ] shows the male and female gender independence ratios , for both myspace datasets ( uk and us ) and twitter . while in myspace these independence ratios were similar for males and females , the male independence in twitter is significantly larger than its female counterpart .this shows that the asymmetry between the independence of male and female users of twitter remains , even when we control for the amount of gender - aligned dialogues .this implies a clear bias towards male independence in twitter , despite of not being centrally controlled by any agent .this is not the case for myspace , where tests of equality of proportions did not allow us to conclude any differences in and .this points to the limited size of the myspace dataset , which is not a limitation for our twitter dialogues dataset .we require large datasets to measure and , which is the reason why they are not applicable to individual movies or small samples from social media .after name matching between youtube trailers and movies , our twitter dataset contains 1,741 trailer shares made my male users and 588 by female users .out of those movies , we have bechdel test information for 662 shares from male users and 294 shares from female users . for a subset of those movies we also have script information and bechdel scores , accounting for 264 shares from males and 86 shares from females . from those users ,we compute and in their ego network , if they participate in at least 25 dialogues . in the following ,we show the relation between the gender biases present in a movie , measured through bechdel test results and bechdel scores , and the ratios of each gender of the users and their dialogue imbalance .the female bechdel score of the movies in shares from female users are in general larger than those shared by male users . 
a wilcoxon test comparing both distributions rejects the hypothesis that they are the same ( ) .the distance between medians is , which is a relevant size in the scale , indicating that shares from female users are about movies with 45.5% higher than in shares by male users .the opposite pattern with male bechdel scores was not significant ( ) to conclude that male or female twitter users shared movies with higher .the numeric value of the bechdel test ( ) of movies in shares from female users had a larger value than those on the shares by male users ( ) , revealing that women share movies that pass more rules of the bechdel test .furthermore , a share from a female user is more likely to be about a movie that passes all three rules of the bechdel test .we computed the ratio of shares about movies that pass the test from all shares from women , and the same among the shares from men .a test on these ratios indicates that women are more likely to share movies that pass the bechdel test ( ) , by an increase of 0.12 , 30% more than the likelihood for male users .this relation can be seen in the left panel of figure [ fig : twittermovies ] , where we show the fraction of shares about movies with certain bechdel test value , over all the shares from male and female users .the most striking difference is at 3 , pointing to the higher chances that female users share movies that pass the test . ,title="fig:",scaledwidth=47.0% ] , title="fig:",scaledwidth=47.0% ] we computed the dialogue imbalance of the users sharing each movie trailer , and compared them across genders and across movies that pass or not the test .the dialogue imbalance of female users that share movies that pass the bechdel test is lower than for female users sharing movies that do not pass the test ( ) .this was not the case for male users ( ) , showing no relation between bechdel test results and the behavior of male twitter users . across genders ,the difference between and for the users that share movies that pass the bechdel test is significant ( ) , and of magnitude 0.42 on the total dialogue imbalance scale .this difference is not significant among males and females that share movies that do not pass the test ( ) , indicating that there is a shift away from interaction with men in the population of women that share movies that pass the test .the right panel of figure [ fig : twittermovies ] shows this effect , where and diverge when computed over the dialogues in users that shared movies that pass the bechdel test .this analysis shows that there are relations between the behavior of female twitter users and the movies they consume and share , but male users do not show any variation with respect to the portrayal of female roles in these movies .the information displayed in twitter profiles allowed us to extract two variables of the personal life of a user : if they mentioned in their profile that they are mothers or fathers and whether they are students .we use this information to analyze the gender independence present in dialogues between users of a particular kind , in comparison with the rest of dialogues present in twitter . , scaledwidth=90.0% ]there is no significant difference in when comparing dialogues between mothers and the rest of female - female dialogues . 
on the other hand, fathers showed a significantly higher male gender independence , as shown in the left panel of figure [ fig : parentsstudents ] .this indicates that publicly articulated dialogues between fathers tend to mention women less often .the gender independence in the discussions between pairs of two male students and two female students have higher values of and , when compared to dialogues between users not identified as students .the right panel of figure [ fig : parentsstudents ] shows that the difference between female students and the rest is very large , to the point of not having any significant difference with between male students .the gender asymmetries we find in the twitter population are not evident among students , suggesting that gender roles are less prominent within that subset of the population .the location information of each twitter user allows us to build the networks for all the users located in each state of the united states .an example of this kind of filtering is figure [ fig : citynetwork ] , where only the users located in ann arbor are displayed . this way , for each state we have a set of dialogues , which we use to measure male and female gender independence .figure [ fig : statemaps ] show and of each state in the continuous united states , as well as gender asymmetry computed as .it is noticeable that most of the states have positive asymmetry , with the exception of hawaii , mississippi , montana , and north dakota ., scaledwidth=47.0% ] given the location of each user , we tag them as urban if they live in one of the largest 100 cities of the united states , and rural if they live in a smaller city , as explained in the analytical setup section .figure [ fig : urban ] shows and for discussions between urban users and between rural users .both male and female urban users show larger gender independence , but this difference is stronger between urban and rural females .nevertheless , is still significantly lower than in the urban twitter population . versus state average income .the right panel shows the scatter plot of and latitude of the largest city of the state .red dashed lines show linear regression trends .[ fig : asymmetrydeps],scaledwidth=90.0% ] finally , we investigate the relation between other economic and geographic factors with both and . from the national census, we gathered average income of each state , and the gini index of income inequality . for each state, we also located the latitude and longitude of the largest city , measured in seconds west from the greenwich meridian and north from the equator , in order to test the role of economy and climate in female and male independence . has a correlation coefficient with average income of and , stronger than with any other metric including male independence . to evaluate if this result is a confound with any other metric , we calculated partial pearson correlation coefficients where is average income and is each of the other variables ( , gini index , latitude and longitude ) .all partial correlations were negative and significant , being the least significant when controlling for gini index ( ) .this shows that the dialogues between females in us states with higher income are more likely to include male references .the left panel of figure [ fig : asymmetrydeps ] shows the scatter plot of these values .the outliers of figure [ fig : asymmetrydeps ] call for a closer analysis of the statistical properties and robustness of our results . 
a shapiro - wilk test of normality on does not allow us to reject the null hypothesis that it is normally distributed ( ) , but the opposite is true for average income ( ) .for this reason , we replicate the above correlation analysis by computing spearman s correlation coefficient , which tests for a monotonous relation , by reducing the leverage of outliers through a rank transformation of the data .spearman s correlation between and average income was significant and negative ( , ) .male independence is correlated with latitude , with a correlation coefficient of ( ) .the same partial correlations analysis as with reveals positive and significant correlations when controlling for all the other factors .the least significant of these is when controlling for average income ( , ) , showing that latitude is the most related factor with male independence .as shown in the right panel of figure [ fig : asymmetrydeps ] , states that are farther from the equator have male dialogues that are less likely to contain female references .normality tests for and latitude did not provide evidence that they are not normally distributed , as shapiro - wilk tests did not reject the null hypothesis for both ( ) and latitude ( ). nevertheless , we tested the statistical robustness of our finding by computing spearman s correlation between and latitude , finding a significant positive correlation of ( ) .we presented a study that combines data from movie scripts , trailers , and casts , and twitter and myspace users , including their profile information and the dialogues among them .we designed a set of metrics to measure gender biases in the sets of dialogues in movies and social media , to explore the relations between gender roles in fiction and reality .starting from an equal approach to male and female independence in movies , we verified the existence of a generalized bias in which female characters are shown as dependent on male characters .furthermore , the trailers of male biased movies are more popular , and the movies shared by twitter users are related to their profiles and patterns of interaction . 
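a minimal sketch of the robustness checks used in this section : pearson and spearman correlations , shapiro - wilk normality tests , and partial pearson correlations obtained by residualizing both variables on a control ; this is a generic illustration with numpy and scipy , not the exact analysis code .

```python
import numpy as np
from scipy import stats

def partial_pearson(x, y, control):
    """Pearson correlation of x and y after linearly removing `control`."""
    def residuals(v, c):
        design = np.column_stack([np.ones_like(c), c])
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ coef
    x, y, c = (np.asarray(a, float) for a in (x, y, control))
    return stats.pearsonr(residuals(x, c), residuals(y, c))

def robustness_report(x, y, control):
    return {
        "pearson": stats.pearsonr(x, y),
        "spearman": stats.spearmanr(x, y),
        "shapiro_x": stats.shapiro(x),
        "shapiro_y": stats.shapiro(y),
        "partial_pearson": partial_pearson(x, y, control),
    }
```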
while we did not find a difference for male users , female users are more likely to share movies with high female bechdel scores , and to interact less with with male users if they share movies that pass the bechdel test .this indicates that female twitter users are attracted to movies in which women are shown less dependent on men , but also that the audiences might be starting to be aware of the results of the bechdel test itself .we compare the gender biases in twitter and myspace with our metrics for movies , finding that twitter contains a male bias not only in amount of users , but also in a lower degree of female gender independence .this points to the possibility that some design decisions of twitter might create undesired effects , such as hindering female users to engage in the community in the same way as males do .in addition , the biases present in public dialogues in twitter are not radically different from those in movies .the decentralized nature of twitter has not led to a gender unbiased interaction with respect to mass media , and the asymmetric pattern of lower female gender independence is also present in everyday online public interaction .this similarity between reality and fiction can be explained by two mechanisms : i ) the gender roles present in fiction , including movies , influence our behavior and gender bias , or ii ) movies reflect patterns of gender dependence present in real life .it is also possible that there is a combination of both mechanisms , in which a feedback loop makes movies reflect certain gender bias in everyday life , but also perpetuate gender inequality through the influence of movies in human culture . in any case ,such subconscious biases are a component of ideology and contribute to the creation of inequality at a large scale , in the same way as very small racial preferences can lead to segregation .we find that certain personal factors are related to the gender independence of twitter users .this is particularly strong as we did not find evidence for a presence of gender bias among students , and we found urban users to be more gender independent that rural ones , especially for women .we calculated gender independence values across states in the us , finding a generalized pattern of asymmetry towards lower female independence .we found a significant correlation between male independence and latitude that is consistent with the theory of the disposable male , which predicates that males behave more independently due to the presence of adverse conditions , including climate .however , this result is constrained to us states , and further work on a wider range of societies is necessary to understand the relation between climate and gender asymmetries . in addition, we found a negative relation between female gender independence and average income , counter intuitively to the concept that female emancipation increases the workforce and the productivity of a society .note that our finding is structural , not measuring changes in average income or gdp due to changes in female independence , and that they are consistent with the observation that richer countries show larger gender bias in scientific production .one possible explanation lies in the difference between the political discourse regarding gender independence and the subjective behavior of a society : labeling oneself as liberal in gender policies does not necessarily imply an absence of a gender bias in everyday behavior . 
on the other hand , this result also points to the role of twitter in society , as publicly articulated dialogues might emphasize certain ideals of gender equality in the places where they are needed the most .all these questions are empirically testable in future research , in particular if focused on individual income and education levels , world views , and gender biases quantified in observed behavior .the authors would like to acknowledge gnip for providing data on twitter activity .dg was funded by the swiss national science foundation ( cr21i1_1464991 ) .michel , j .- b . ; shen , y. k. ; aiden , a. p. ; veres , a. ; gray , m. k. ; pickett , j. p. ; hoiberg , d. ; clancy , d. ; norvig , p. ; orwant , j. ; pinker , s. ; nowak , m. a. ; aiden , e. l. ( 2011 ) ._ science _ * 331(6014 ) * , 17682 .schwartz , h. a. ; eichstaedt , j. c. ; kern , m. l. ; dziurzynski , l. ; ramones , s. m. ; agrawal , m. ; shah , a. ; kosinski , m. ; stillwell , d. ; seligman , m. e. p. ; ungar , l. h. ( 2013 ) . _ plos one _ * 8(9 ) * , e73791 .
the subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction . we present a computational extension of the bechdel test : a popular tool to assess if a movie contains a male gender bias , by looking for two female characters who discuss about something besides a man . we provide the tools to quantify bechdel scores for both genders , and we measure them in movie scripts and large datasets of dialogues between users of myspace and twitter . comparing movies and users of social media , we find that movies and twitter conversations have a consistent male bias , which does not appear when analyzing myspace . furthermore , the narrative of twitter is closer to the movies that do not pass the bechdel test than to those that pass it . we link the properties of movies and the users that share trailers of those movies . our analysis reveals some particularities of movies that pass the bechdel test : their trailers are less popular , female users are more likely to share them than male users , and users that share them tend to interact less with male users . based on our datasets , we define gender independence measurements to analyze the gender biases of a society , as manifested through digital traces of online behavior . using the profile information of twitter users , we find larger gender independence for urban users in comparison to rural ones . additionally , the asymmetry between genders is larger for parents and lower for students . gender asymmetry varies across us states , increasing with higher average income and latitude . this points to the relation between gender inequality and social , economical , and cultural factors of a society , and how gender roles exist in both fictional narratives and public online dialogues . march 25th , 2013
the subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction . we present a computational extension of the bechdel test : a popular tool to assess if a movie contains a male gender bias , by looking for two female characters who discuss something besides a man . we provide the tools to quantify bechdel scores for both genders , and we measure them in movie scripts and large datasets of dialogues between users of myspace and twitter . comparing movies and users of social media , we find that movies and twitter conversations have a consistent male bias , which does not appear when analyzing myspace . furthermore , the narrative of twitter is closer to the movies that do not pass the bechdel test than to those that pass it . we link the properties of movies and the users that share trailers of those movies . our analysis reveals some particularities of movies that pass the bechdel test : their trailers are less popular , female users are more likely to share them than male users , and users that share them tend to interact less with male users . based on our datasets , we define gender independence measurements to analyze the gender biases of a society , as manifested through digital traces of online behavior . using the profile information of twitter users , we find larger gender independence for urban users in comparison to rural ones . additionally , the asymmetry between genders is larger for parents and lower for students . gender asymmetry varies across us states , increasing with higher average income and latitude . this points to the relation between gender inequality and social , economic , and cultural factors of a society , and how gender roles exist in both fictional narratives and public online dialogues . march 25th , 2013
often semiparametric estimators are asymptotically equivalent to a sample average .the object being averaged is referred to as the influence function .the influence function is useful for a number of purposes .its variance is the asymptotic variance of the estimator and so it can be used for asymptotic efficiency comparisons .also , the form of remainder terms follow from the form of the influence function so knowing the influence function should be a good starting point for finding regularity conditions . in addition , estimators of the influence function can be used to reduce bias of a semiparametric estimator .furthermore , the influence function approximately gives the influence of a single observation on the estimator .indeed this interpretation is where the influence function gets its name in the robust estimation literature , see hampel ( 1968 , 1974 ) .we show how the influence function of a semiparametric estimator can be calculated from the functional given by the limit of the semiparametric estimator .we show that the influence function is the limit of the gateaux derivative of the functional with respect to a smooth deviation from the true distribution , as the deviation approaches a point mass .this calculation is similar to that of hampel ( 1968 , 1974 ) , except that the deviation from the true distribution is restricted to be smooth .smoothness of the deviation is necessary when the domain of the functional is restricted to smooth functions .as the deviation approaches a point mass the derivative with respect to it approaches the influence function .this calculation applies to many semiparametric estimators that are not defined for point mass deviations , such as those that depend on nonparametric estimators of densities and conditional expectations .we also consider regularity conditions for validity of the influence function calculation .the conditions involve frechet differentiability as well as convergence rates for nonparametric estimators .they also involve stochastic equicontinuity and small bias conditions .when estimators depend on nonparametric objects like conditional expectations and pdf s , the frechet differentiability condition is generally satisfied for intuitive norms , e.g. 
as is well known from goldstein and messer ( 1992 ) .the situation is different for functionals of the empirical distribution where frechet differentiability is only known to hold under special norms , dudley ( 1994 ) .the asymptotic theory here also differs from functionals of the empirical distribution in other ways as will be discussed below .newey ( 1994 ) previously showed that the influence function of a semiparametric estimator can be obtained by solving a pathwise derivative equation .that approach has proven useful in many settings but does require solving a functional equation in some way .the approach of this paper corresponds to specifying a path so that the influence can be calculated directly from the derivative .this approach eliminates the necessity of finding a solution to a functional equation .regularity conditions for functionals of nonparametric estimators involving frechet differentiability have previously been formulated by ait - sahalia ( 1991 ) , goldstein and messer ( 1992 ) , newey and mcfadden ( 1994 ) , newey ( 1994 ) , chen and shen ( 1998 ) , chen , linton , and keilegom ( 2003 ) , and ichimura and lee ( 2010 ) , among others .newey ( 1994 ) gave stochastic equicontinuity and small bias conditions for functionals of series estimators .in this paper we update those using belloni , chernozhukov , chetverikov , and kato ( 2015 ) .bickel and ritov ( 2003 ) formulated similar conditions for kernel estimators .andrews ( 2004 ) gave stochastic equicontinuity conditions for the more general setting of gmm estimators that depend on nonparametric estimators . in section 2we describe the estimators we consider .section 3 presents the method for calculating the influence function . in section 4we outline some conditions for validity of the influence function calculation .section 5 gives primitive conditions for linear functionals of kernel density and series regression estimators .section 6 outlines additional conditions for semiparametric gmm estimators .section 7 concludes .the subject of this paper is estimators of parameters that depend on unknown functions such as probability densities or conditional expectations .we consider estimators of these parameters based on nonparametric estimates of the unknown functions .we refer to these estimators as semiparametric , with the understanding that they depend on nonparametric estimators. we could also refer to them as `` plug in estimators '' or more precisely as `` plug in estimators that have an influence function . ''this terminology seems awkward though , so we simply refer to them as semiparametric estimators .we denote such an estimator by , which is a function of the data where is the number of observations . throughout the paperwe will assume that the data observations are i.i.d .we denote the object that estimates as , the subscript referring to the parameter value under the distribution that generated the data .some examples can help fix ideas .one example with a long history is the integrated squared density where has pdf and is -dimensional .this object is useful in certain testing settings . 
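a compact statement of the object just described , in notation consistent with the rest of the text ( the symbols \beta_{0} , f_{0} , and r are our own labels for the parameter , the pdf , and the dimension ) :

\[
\beta_{0}=\int_{\mathbb{R}^{r}} f_{0}(z)^{2}\,dz ,
\]

where z is r - dimensional with pdf f_{0} .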
a variety of different estimators of have been suggested .one estimator is based on a kernel estimator of the density given by is a bandwidth and is a kernel .an estimator can then be constructed by plugging in in place of in the formula for as estimator of and other estimators have been previously considered by many others .we use it as an example to help illustrate the results of this paper .it is known that there are other estimators that are better than one of these is is a symmetric kernel .gine and nickl ( 2008 ) showed that this estimator converges at optimal rates while it is well known that does not .our purpose in considering is not to suggest it as the best estimator but instead to use it to illustrate the results of this paper .another example is based on the bound on average consumer surplus given in hausman and newey ( 2015 ) . herea data observation is where is quantity of some good , is price , and is income . for object of interest is .\]]from hausman and newey ( 2015 ) it follows that this object is a bound on the weighted average over income and individuals of average equivalent variation for a price change from to when there is general heterogeneity .it is an upper ( or lower ) bound for average surplus when is a lower ( or upper ) bound for individual income effects . here is a known weight function that is used to average across income levels .one estimator of can be obtained by plugging - in a series nonparametric regression estimator of in the formula for . to describe a series estimator let be a vector of approximating functions such as power series or regression splines .also let ^{t} ] is given by will be nonsingular with probability approaching one under conditions outlined below .we can then plug in this estimator to obtain use this estimator as a second example .this paper is about estimators that have an influence function .we and others refer to these as asymptotically linear estimators .an asymptotically linear estimator is one satisfying=0,e[\psi ( z_{i})^{t}\psi ( z_{i})]<\infty .\label{inf}\]]the function is referred to as the influence function , following terminology of hampel ( 1968,1974 ) .it gives the influence of a single observation in the leading term of the expansion in equation ( [ inf ] ) .it also quantifies the effect of a small change in the distribution on the limit of as we further explain below . in the integrated squared density example the influence function is well known to be .\]]this formula holds for the estimators mentioned above and for all other asymptotically linear estimators of the integral of the square of an unrestricted pdf . in the consumer surplus example the influence function is,\delta ( x)=f_{0}(x)^{-1}w(x).\]]as will be shown below .in this section we provide a method for calculating the influence function . the key object on which the influence function dependsis the limit of the estimator when has cdf we denote this object by .it describes how the limit of the estimator varies as the distribution of a data observation varies .formally , it is mapping from a set of cdf s into the real line, in the integrated squared density example where all elements of the domain are restricted to be continuous distributions with pdfs that are square integrable . 
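a minimal numerical sketch of this second example , assuming a one - dimensional regressor , a power - series basis , a uniform design density , and a simple indicator weight ; these choices , the data generation , and the variable names are our own illustration . the target is represented here as the weighted integral of the conditional expectation of q given x , which is the reduced form used later in the text for this example , and the standard error at the end uses the influence function reported for it .

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x ~ U(0,1), q = d0(x) + noise, with d0 the conditional mean of q.
n, K = 2000, 5
x = rng.uniform(0.0, 1.0, n)
q = 1.0 + 2.0 * x - x**2 + rng.normal(scale=0.3, size=n)

def basis(x_vals, K):
    """Power-series approximating functions p^K(x) = (1, x, ..., x^{K-1})."""
    return np.column_stack([x_vals**j for j in range(K)])

w = lambda t: np.where((t >= 0.2) & (t <= 0.8), 1.0, 0.0)  # illustrative weight

# Series regression: gamma_hat = Sigma_hat^{-1} * sum_i p^K(x_i) q_i / n.
P = basis(x, K)
Sigma_hat = P.T @ P / n
gamma_hat = np.linalg.solve(Sigma_hat, P.T @ q / n)

# A_hat = integral of w(x) p^K(x) dx, here by numerical quadrature on a grid.
grid = np.linspace(0.0, 1.0, 2001)
A_hat = np.trapz(w(grid)[:, None] * basis(grid, K), grid, axis=0)

beta_hat = A_hat @ gamma_hat  # plug-in estimate of int w(x) E[q|x] dx

# Influence-function-based standard error, psi = delta(x)(q - E[q|x]) with
# delta(x) = w(x)/f0(x); here f0 = 1 on (0,1), so delta = w.
d_hat = P @ gamma_hat
psi_hat = w(x) * (q - d_hat)
se_hat = psi_hat.std(ddof=1) / np.sqrt(n)
print(beta_hat, se_hat)
```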
in the average surplus example ] and exist and is continuously distributed with pdf that is positive where is positive .we use how varies with to calculate the influence function .let denote a cdf such that is in the domain of for small enough and approaches a point - mass at as .for example , if is restricted to continuous distributions then we could take to be continuous with pdf for a bounded pdf with bounded support and denoting a possible value of . under regularity conditions given below the influence functioncan be calculated as .\label{inf func}\]]the derivative in this expression is the gateaux derivative of the functional with respect to `` contamination '' to the true distribution thus this formula says that the influence function is the limit of the gateaux derivative of as the contamination distribution approaches a point mass at .for example , consider the integrated squared density where we let the contamination distribution have a pdf for a bounded kernel . then ^{2}d\tilde{z}\right\ } |_{t=0 } \\ & = & \int 2[f_{0}(\tilde{z})-\beta _ { 0}]g_{z}^{h}(\tilde{z})d\tilde{z}.\end{aligned}\]]assuming that is continuous at the limit as is given by = 2\lim_{h\longrightarrow 0}\int f_{0}(\tilde{z})g_{z}^{h}(\tilde{z})d\tilde{z}-2\beta _ { 0}=2[f_{0}(z)-\beta _ { 0}].\]]this function is the influence function at of semiparametric estimators of the integrated squared density . thus equation ( [ inf func ] ) holds in the example of an integrated squared density . as we show below , equation ( [ inf func ] ) , including the gateaux differentiability , holds for any asymptotically linear estimator satisfying certain mild regularity conditions .equation ( [ inf func ] ) can be thought of as a generalization of the influence function calculation of hampel ( 1968 , 1974 ) .that calculation is based on contamination that puts probability one on .if is the domain of then the influence function is given by the gateaux derivative problem with this calculation is that not be in the domain for many semiparametric estimators .it is not defined for the integrated squared density , average consumer surplus , nor for any other that is only well defined for continuous distributions .equation ( [ inf func ] ) circumvents this problem by restricting the contamination to be in . the influence functionis then obtained as the limit of a gateaux derivative as the contamination approaches a point mass , rather than the gateaux derivative with respect to a point mass .this generalization applies to most semiparametric estimators .we can relate the influence function calculation here to the pathwise derivative characterization of the influence function given in van der vaart ( 1991 ) and newey ( 1994 ) .consider as a path with parameter passing through the truth at it turns out that this path is exactly the right one to get the influence function from the pathwise derivative .suppose that has pdf and has density so that the likelihood corresponding to this path is .the derivative of the corresponding log - likelihood at zero , i.e. the score , is where we do not worry about finite second moment of the score for the moment . 
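a minimal numerical check of the calculation carried out above for the integrated squared density : the derivative of \beta\bigl((1-t)F_{0}+tG_{z}^{h}\bigr) at t=0 equals \int 2[f_{0}(\tilde z)-\beta_{0}]\,g_{z}^{h}(\tilde z)\,d\tilde z and approaches 2[f_{0}(z)-\beta_{0}] as h shrinks . the density and the contamination kernel below are our own toy choices ; in particular a gaussian kernel is used only for convenience , whereas the text assumes a bounded kernel with bounded support .

```python
import numpy as np
from scipy import integrate, stats

f0 = stats.norm(0.0, 1.0).pdf                     # true density (toy choice)
beta0 = integrate.quad(lambda z: f0(z)**2, -np.inf, np.inf)[0]

def gateaux_derivative(z_point, h):
    """d/dt of beta((1-t)F0 + t G) at t=0, with G a kernel pdf centered at z_point."""
    g = stats.norm(z_point, h).pdf                # contamination density g_z^h
    integrand = lambda z: 2.0 * (f0(z) - beta0) * g(z)
    return integrate.quad(integrand, z_point - 10 * h, z_point + 10 * h)[0]

z = 0.7
for h in (0.5, 0.1, 0.02):
    print(h, gateaux_derivative(z, h))
print("limit (influence function):", 2.0 * (f0(z) - beta0))
```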
as shown by van der vaart ( 1991 ), the influence function will solve the equation \\ & = & \int \psi ( \tilde{z})\left [ \frac{g_{z}^{h}(\tilde{z})}{f_{0}(\tilde{z})}-1\right ] f_{0}(\tilde{z})d\tilde{z}=\int \psi ( \tilde{z})g_{z}^{h}(\tilde{z})d\tilde{z}.\end{aligned}\]]taking the limit as then gives the formula ( [ inf func ] ) for the influence function when the influence function is continuous at . in this way be thought of as a path where the pathwise derivative converges to the influence function as approaches a point mass at .we give a theoretical justification for the formula in equation ( [ inf func ] ) by assuming that an estimator is asymptotically linear and then showing that equation ( [ inf func ] ) is satisfied under a few mild regularity conditions .one of the regularity conditions we use is local regularity of along the path this property is that for any , when are i.i.d . with distribution \overset{d}{\longrightarrow } n(0,v),v = e[\psi ( z_{i})\psi ( z_{i})^{t}].\]]that is , under a sequence of local alternatives , when is centered at then has the same limit in distribution as for .this is a very mild regularity condition .many semiparametric estimators could be shown to be uniformly asymptotically normal for in a neighborhood of would imply this condition .furthermore , it turns out that asymptotic linearity of and gateaux differentiability of at are sufficient for local regularity .for these reasons we view local regularity as a mild condition for the influence function calculation . for simplicitywe give a result for cases where is a continuous distribution with pdf and includes paths where has pdf and is a bounded pdf with bounded support .we also show below how this calculation can be generalized to cases where the deviation need not be a continuous distribution .theorem 1 : _ suppose that _ _ _ is asymptotically linear with influence function __ is continuous at _ _ _ _ and _ _ _ _ is continuously distributed with pdf __ _ _ that is bounded away from zero on a neighborhood of _ _ . if __ __ is locally regular for the path __ _ _ then equation ( [ inf func ] ) is satisfied .furthermore , if _ _ __ is differentiable at _ _ _ _ with derivative __ _ _ _ _ is locally regular . _ _this result shows that if an estimator is asymptotically linear and certain conditions are satisfied then the influence function satisfies equation ( _ [ inf func ] _ ) , justifying the calculation of the influence function .furthermore , the process of that calculation will generally show differentiability of and so imply local regularity of the estimator , confirming one of the hypotheses that is used to justify the formula . in this way this result provides a precise link between the influence function of an estimator and the formula in equation ( _ [ inf func ] _ ) .this result is like van der vaart ( 1991 ) in showing that an asymptotically linear estimator is regular if an only if its limit is pathwise differentiable .it differs in some of the regularity conditions and in restricting the paths to have the mixture form with kernel density contamination .such a restriction on the paths actually weakens the local regularity hypothesis because only has to be locally regular for a particular kind of path rather than a general class of paths .although theorem 1 assumes is continuously distributed the calculation of the influence function will work for combinations of discretely and continuously distributed variables . 
for such cases the calculation can proceed with a deviation that is a product of a point mass for the discrete variables and a kernel density for the continuous variables . more generally , only the variables that are restricted to be continuously distributed in the domain need be continuously distributed in the deviation .we can illustrate using the consumer surplus example .consider a deviation that is a product of a point mass at some and a kernel density centered at .the corresponding path is is the distribution corresponding to .let be the marginal pdf for along the path .multiplying and dividing by and using iterated expectations we find that \tilde{x}=\int f_{t}(\tilde{x})^{-1}w(\tilde{x})e_{f_{t}}[q|\tilde{x}]f_{t}(\tilde{x})dx = e_{f_{t}}[f_{t}(x_{i})^{-1}w(x_{i})q_{i}].\]]differentiating with respect to gives (\tilde{x})e[q|\tilde{x}]f_{0}(\tilde{x})d\tilde{x } \\ &= & q\int \delta ( \tilde{x})g^{h}(\tilde{x})d\tilde{x}-\int \delta ( \tilde{x})e[q|\tilde{x}]g^{h}(\tilde{x})d\tilde{x}.\end{aligned}\]]therefore , assuming that is continuous at we have).\]]this result could also be derived using the results for conditional expectation estimators in newey ( 1994 ) .the fact that local regularity is necessary and sufficient for equation ( _ [ inf func ] _ ) highlights the strength of the asymptotic linearity condition .calculating the influence function is a good starting point for showing asymptotic linearity but primitive conditions for asymptotic linearity can be complicated and strong .for example , it is known that asymptotic linearity can require some degree of smoothness in underlying nonparametric functions , see bickel and ritov ( 1988 ) .we next discuss regularity conditions for asymptotic linearity .one of the important uses of the influence function is to help specify regularity conditions for asymptotic linearity .the idea is that once has been calculated we know what the remainder term for asymptotic linearity must be . the remainder termcan then be analyzed in order to formulate conditions for it to be small and hence the estimator be asymptotically linear . in this sectionwe give one way to specify conditions for the remainder term to be small .it is true that this formulation may not lead to the weakest possible conditions for asymptotic linearity of a particular estimator .it is only meant to provide a useful way to formulate conditions for asymptotic linearity . in this sectionwe consider estimators that are functionals of a nonparametric estimator taking the form is some nonparametric estimator of the distribution of .both the integrated squared density and the average consumer surplus estimators have this form , as discussed below .we consider a more general class of estimators in section 6 . since adding and subtracting the term and both converge in probability to zero then will be asymptotically linear . to the best of our knowledge little is gained in terms of clarity or relaxing conditions by considering rather than and separately , so we focus on the individual remainders .the form of the remainders and are motivated by being a derivative of with respect to .the derivative interpretation of suggests a linear approximation of the form the equality follows by =0. ] and =o_{p}(n^{-1/2}). 
] is the kernel bias for the convolution of the influence function and the true pdf .it will be under smoothness , kernel , and bandwidth conditions that are further discussed below .the term ] for the series estimator for consumer surplus let ^{t}\hat{\sigma}^{-1}p^{k}(x) ] is a series bias term that will be under conditions discussed below .the term ] which is when is bounded and ^{2}=o_{p}(1). ] is the root integrated squared error .consequently is not bounded in probability and so does not converge in probability to zero .this problem can be addressed by specifying that converges at some rate and that satisfies a stronger condition than frechet differentiability .one condition that is commonly used is that . this condition will be satisfied if is twice continuously differentiable at or if the first frechet derivative is lipschitz .if it is also assumed that converges faster than then assumption a1 will be satisfied .a more general condition that allows for larger is given in the following hypothesis .assumption 2 : for some _ _ and _ _ .this condition separates nicely into two parts , one about the properties of the functional and another about a convergence rate for . for the case 2 has been previously been used to prove asymptotic linearity , e.g. by ait - sahalia ( 1991 ) , andrews ( 1994 ) , newey ( 1994 ) , newey and mcfadden ( 1994 ) , chen and shen ( 1998 ) , chen , linton , and keilegom ( 2003 ) , and ichimura and lee ( 2010 ) among others . in the example of the integrated squared density ^{2}dz = o(\left\vert f - f_{0}\right\vert ^{2}) ] thus assumption 2 will be satisfied with when converges to faster than in the integrated squared error norm .the following result formalizes the observation that assumptions 1 and 2 are sufficient for asymptotic linearity of .theorem 2 : _ if assumptions 1 and 2 are satisfied then _ _ _ is asymptotically linear with influence function _ _ . 
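a hedged restatement , in our own notation , of the expansion referred to above for the integrated squared density ; the algebra is exact :

\[
\beta(f)-\beta(f_{0})-\int 2f_{0}(z)\,[f(z)-f_{0}(z)]\,dz=\int [f(z)-f_{0}(z)]^{2}\,dz=\left\Vert f-f_{0}\right\Vert ^{2},
\]

so the remainder after subtracting the linearization is exactly the squared integrated - error norm , and assumption 2 holds with the quadratic exponent whenever \hat{f} converges to f_{0} faster than n^{-1/4} in that norm .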
an alternative set of conditions for asymptotic normality of given by ait - sahalia ( 1991 ) .instead of using assumption 1 ait - sahalia used the condition that converged weakly as a stochastic process to the same limit as the empirical process .asymptotic normality of then follows immediately by the functional delta method .this approach is a more direct way to obtain asymptotic normality of the linear term in the expansion .however weak convergence of requires stronger conditions on the nonparametric bias than does the approach adopted here .also , ait - sahalia s ( 1991 ) approach does not deliver asymptotic linearity , though it does give asymptotic normality .these conditions for asymptotic linearity of semiparametric estimators are more complicated than the functional delta method outlined in reeds ( 1976 ) , gill ( 1989 ) , and van der vaart and wellner ( 1996 ) .the functional delta method gives asymptotic normality of a functional of the empirical distribution or other root - n consistent distribution estimator under just two conditions , hadamard differentiability of the functional and weak convergence of the empirical process .that approach is based on a nice separation of conditions into smoothness conditions on the functional and statistical conditions on the estimated distribution .it does not appear to be possible to have such simple conditions for semiparametric estimators .one reason is that they are only differentiable in norms where is not bounded in probability .in addition the smoothing inherent in introduces a bias that depends on the functional and so the weakest conditions are only attainable by accounting for interactions between the functional and the form of in the next section we discuss this bias issue .in this section we consider primitive conditions for assumption 1 to be satisfied for kernel density and series estimators .we focus on assumption 1 because it is substantially more complicated than assumption 2 .assumption 2 will generally be satisfied when is sufficiently smooth and converges at a fast enough rate in a norm .such conditions are quite well understood .assumption 1 is more complicated because it involves both bias and stochastic equicontinuity terms .the behavior of these terms seems to be less well understood than the behavior of the nonlinear terms .assumption 1 being satisfied is equivalent to the linear functional being an asymptotically linear estimator .thus conditions for linear functionals to be asymptotically linear are also conditions for assumption 1 . for that reason it suffices to confine attention to linear functionals in this section . also , for any linear functional of the form we can renormalize so that for . ] conditions for a linear functional of a kernel density estimator to be asymptotically linear were stated though ( apparently ) not proven in bickel and ritov ( 2003 ) . here we give a brief exposition of those conditions and a result .let be an vector and have pdf .as previously noted , for we have to make sure that the stochastic equicontinuity condition holds we assume : assumption 3 : _ is bounded with bounded support , _ _ _ _ _ _ _ is continuous almost everywhere , and for some _ _ , <\infty . ] and as described earlier .the stochastic equicontinuity term will be small if ^{2}/n\overset{p}{\longrightarrow } 0. 
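a small simulation sketch of the linear kernel functional discussed in this section , \int \delta(z)\hat{f}(z)\,dz : it compares the plug - in value with the sample average of \delta , whose gap is the remainder that the text decomposes into a bias and a stochastic - equicontinuity piece , and illustrates that the gap shrinks with the bandwidth . the density , the choice \delta=\cos , the gaussian kernel , and the bandwidths are our own toy choices , not the paper 's conditions .

```python
import numpy as np

rng = np.random.default_rng(1)
delta = np.cos                     # a bounded, smooth weight function (toy choice)

def plug_in(z_sample, h, grid):
    """int delta(z) fhat(z) dz with a gaussian-kernel density estimate."""
    diffs = (grid[:, None] - z_sample[None, :]) / h
    fhat = np.exp(-0.5 * diffs**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return np.trapz(delta(grid) * fhat, grid)

n = 2000
z = rng.normal(size=n)
grid = np.linspace(-8.0, 8.0, 2001)
sample_average = delta(z).mean()   # leading term of the asymptotically linear expansion
for h in (0.5, 0.2, 0.05):
    print(h, plug_in(z, h, grid) - sample_average)
```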
] and ] of the population projection of on the second term satisfies =-e[\{\delta ( x_{i})-\gamma _ { \delta } ^{t}p^{k}(x_{i})\}\{d_{0}(x_{i})-p^{k}(x_{i})^{t}\gamma \}]\]]where the equality holds by being orthogonal to in the population .as pointed out in newey ( 1994 ) , the size of this bias term is determined by the product of series approximation errors to and to .thus , the bias of a series semiparametric estimator will generally be smaller than the nonparametric bias for a series estimate of for example , for power series if and are continuously differentiable of order and respectively , is r - dimensional , and the support of is compact then by standard approximation theory , \right\vert \leq ck^{-(s_{d}+s_{\delta } ) /r}\ ] ] as discussed in newey ( 1994 ) it may be possible to use a that is optimal for estimation of and also results in asymptotic linearity .if and is chosen to be optimal for estimation of then thus , root - n consistency of is possible with optimal number of terms for when the number of derivatives of is more than half the dimension of .turning now to the regularity conditions for asymptotic linearity , we follow belloni et . al .( 2015 ) and impose the following assumption that takes care of the stochastic equicontinuity condition and the random bias term . :assumption 5 : _ is bounded _ , <\infty , ] __ are bounded and bounded away from zero uniformly in __ _ _ , there is a set _ _ _ with _ _ _ and _ _ _ and _ that__}\leq c_{k}, ] , _ and _ belloni et .2015 ) give an extensive discussion of the size of , and for various kinds of series approximations and distributions for . for power series assumptions 5 and 6are satisfied with , and tensor product splines of order , assumptions 5 and 6 are satisfied with , and theorem 4 : _ if assumptions 5 and 6 are satisfied then for _ ] and it will follow that then asymptotic linearity of will follow from asymptotic linearity of . with an additional stochastic equicontinuity condition like that of andrews ( 1994 ) , asymptotic linearity of will follow from asymptotic linearity of functionals of for let ] and differ only when is different than . assuming that is continuous in in an appropriate sense we would expect that should be close to zero when is close to as long as is close to in large samples in that sense , i.e. is consistent in the right way , then we expect that the following condition holds .assumption 7 : . this condition will generally be satisfied when the nonparametrically estimated functions are sufficiently smooth with enough derivatives that are uniformly bounded and the space of function in which lie is not too complex ; see andrews ( 1994 ) and van der vaart and wellner ( 1996 ) . under assumption7 asymptotic linearity of will suffice for asymptotic linearity of . to see thissuppose that is asymptotically linear with influence function then under assumption 7 and by =0, ] can be viewed as a correction term for estimation of .it can be calculated from equation ( [ inf func ] ) applied to the functional .assumptions 1 and 2 can be applied with for regularity conditions for asymptotic linearity of here is a result doing so theorem 5 : _ if _ _ _ , _ _ _ _ , _ _ _ _ is continuously differentiable in a neighborhood of _ _ _ _ with probability approaching _ _ _ _ for any _ __ we have __ _ _ _ _ _ _ is nonsingular , assumptions 1 and 2 are satisfied for _ _ ] .alternatively , assumption 7 can be used to show that the gmm estimator is asymptotically equivalent to the estimator studied in section 4 . 
for brevity we do not give a full set of primitive regularity conditions for the general gmm setting .they can be formulated using the results above for linear functionals as well as frechet differentiability , convergence rates , and primitive conditions for assumption 7 .in this paper we have given a method for calculating the influence function of a semiparametric estimator .we have also considered ways to use that calculation to formulate regularity conditions for asymptotic linearity .we intend to take up elsewhere the use of the influence function in bias corrected semiparametric estimation .shen ( 1995 ) considered optimal robust estimation among some types of semiparametric estimators . further work on robustness of the kinds of estimators considered here may be possible . other work on the influence function of semiparametric estimatorsmay also be of interest .* proof of theorem 1 * : note that in a neighborhood of ^{1/2} ] is mean - square differentiable and is continuous in on a neighborhood of zero for all small enough .also , by for all and on a neighborhood of it follows that for all and small enough and hence .then by theorem 7.2 and example 6.5 of van der vaart ( 1998 ) it follows that for any a vector of observations that is i.i.d . with pdf is contiguous to a vector of observations with pdf .therefore , when are i.i.d . with pdf . next by continuous at , is bounded on a neighborhood of therefore for small enough , and hence is continuous in in a neighborhood of . also , for note that suppose are i.i.d . with pdf and .adding and subtracting terms , that . also , for large enough , the lindbergh - feller condition for a central limit theorem is satisfied .furthermore , it follows by similar calculations that therefore , by the lindbergh - feller central limit theorem , .therefore we have if and only if that is differentiable at with derivative .then bounded .next , we follow the proof of theorem 2.1 of van der vaart ( 1991 ) , and suppose that eq .( [ mean converge ] ) holds for all consider any sequence .let be the subsequence such that for and for by construction , so that eq ( [ mean converge ] ) holds .therefore it also holds along the subsequence , so that \longrightarrow 0.\]]by construction is bounded away from zero , so that /r_{m}\longrightarrow 0 ] next , by continuity almost everywhere of in assumption 3 it follows that as with probability one ( w.p.1 ) .also , by assumption 3 is finite w.p.1 , so that by having bounded support and the dominated convergence theorem , w.p.1 , , for small enough it follows by the dominated convergence theorem that \longrightarrow 0 ] let so that by assumption 6 , \longrightarrow 0. ] , so that .let so that .note that /n=\bar{\gamma}^{t}(\tilde{\gamma}-\gamma ) , \tilde{\gamma}=\hat{\sigma}^{-1}\sum_{i=1}^{n}p^{k}(x_{i})d_{0}(x_{i})/n\]]let and be defined by the equations/\sqrt{n}+r_{1n}(\bar{\gamma})=r_{1n}(\gamma ) + r_{2n}(\bar{\gamma}).\]]by eqs .( 4.12 ) and ( 4.14 ) of lemma 4.1 of belloni et .al . ( 2015 ) and by assumption 5 we have that =o(1), ] , so that by the cauchy - schwarz inequality , \right\vert = \sqrt{n}\left\vert e[\{\delta ( x_{i})-p^{k}(x_{i})^{t}\gamma _ { \delta } \}\{d_{0}(x_{i})-p^{k}(x_{i})^{t}\gamma \}]\right\vert \leq \sqrt{n}c_{k}^{\delta } c_{k}\longrightarrow 0.\]]then the conclusion follows by the triangle inequality and eq .( [ serbias ] ) . __ * proof of theorem 5 : * as discussed in the text it suffices to prove that is asymptotically linear with influence function . 
by assumption 7it follows that , by the conclusion of theorem 1 and we have the triangle inequality it follows that+o_{p}(n^{-1/2}).q.e.d.\ ] ]ait - sahalia , y. ( 1991 ) : `` nonparametric functional estimation with applications to financial models , '' mit economics ph .d. thesis .
by assumption 7 it follows that ; by the conclusion of theorem 1 we have ; by the triangle inequality it follows that + o_{p}(n^{-1/2}) . q.e.d.

ait - sahalia , y. ( 1991 ) : `` nonparametric functional estimation with applications to financial models , '' mit economics ph.d . thesis .
often semiparametric estimators are asymptotically equivalent to a sample average . the object being averaged is referred to as the influence function . the influence function is useful in formulating primitive regularity conditions for asymptotic normality , in efficiency comparions , for bias reduction , and for analyzing robustness . we show that the influence function of a semiparametric estimator can be calculated as the limit of the gateaux derivative of a parameter with respect to a smooth deviation as the deviation approaches a point mass . we also consider high level and primitive regularity conditions for validity of the influence function calculation . the conditions involve frechet differentiability , nonparametric convergence rates , stochastic equicontinuity , and small bias conditions . we apply these results to examples . * jel classification : * c14 , c24 , h31 , h34 , j22 * keywords : * influence function , semiparametric estimation , bias correction .
often semiparametric estimators are asymptotically equivalent to a sample average . the object being averaged is referred to as the influence function . the influence function is useful in formulating primitive regularity conditions for asymptotic normality , in efficiency comparisons , for bias reduction , and for analyzing robustness . we show that the influence function of a semiparametric estimator can be calculated as the limit of the gateaux derivative of a parameter with respect to a smooth deviation as the deviation approaches a point mass . we also consider high level and primitive regularity conditions for validity of the influence function calculation . the conditions involve frechet differentiability , nonparametric convergence rates , stochastic equicontinuity , and small bias conditions . we apply these results to examples . * jel classification : * c14 , c24 , h31 , h34 , j22 * keywords : * influence function , semiparametric estimation , bias correction .